Anki Deck Changes

Commit: 347d0a09 - bobby gönn

Author: lhorva <lhorva@student.ethz.ch>

Date: 2026-01-23T02:14:19+01:00

Changes: 21 note(s) changed (0 added, 21 modified, 0 deleted)

ℹ️ Cosmetic Changes Hidden: 4 note(s) had formatting-only changes and are not shown below • 4 mixed cosmetic changes

Note 1: ETH::A&D

Deck: ETH::A&D
Note Type: Horvath Cloze
GUID: hv)-{h!@?x
modified

Before

Front

ETH::1._Semester::A&D::11._Minimum_Spanning_Trees::3._Kruskal's_Algorithm::1._Union_Find
The amortised runtime of union in the Union-Find DS is  \(O(|V| \log |V|)\).

Back

ETH::1._Semester::A&D::11._Minimum_Spanning_Trees::3._Kruskal's_Algorithm::1._Union_Find
The amortised runtime of union in the Union-Find DS is  \(O(|V| \log |V|)\).

union takes \(\Theta(\min \{ |ZHK(u)| , |ZHK(v)| \}\). In the worst case, the minimum is \(|V| / 2\) as both have the same size.

Therefore over all loops, this would take \(O(|V| \log |V|)\) time, as on average we only take \(O(\log |V|)\) time.
The graph stays worst case, this is the average of the calls in the worst case.

After

Front

ETH::1._Semester::A&D::11._Minimum_Spanning_Trees::3._Kruskal's_Algorithm::1._Union_Find
The amortised runtime of union in the Union-Find datastructure is  \(O(|V| \log |V|)\).

Back

ETH::1._Semester::A&D::11._Minimum_Spanning_Trees::3._Kruskal's_Algorithm::1._Union_Find
The amortised runtime of union in the Union-Find datastructure is  \(O(|V| \log |V|)\).

Union takes \(\Theta(\min \{ |ZHK(u)| , |ZHK(v)| \}\). In the worst case, the minimum is \(|V| / 2\) as both have the same size.

Therefore over all loops, this would take \(O(|V| \log |V|)\) time, as on average we only take \(O(\log |V|)\) time.
The graph stays worst case, this is the average of the calls in the worst case.
Field-by-field Comparison
Field Before After
Text The amortised runtime of&nbsp;<b>union</b>&nbsp;in the Union-Find DS is {{c1::&nbsp;\(O(|V| \log |V|)\)}}. The amortised runtime of&nbsp;<b>union</b>&nbsp;in the Union-Find datastructure is {{c1::&nbsp;\(O(|V| \log |V|)\)}}.
Extra union takes \(\Theta(\min \{ |ZHK(u)| , |ZHK(v)| \}\). In the worst case, the minimum is \(|V| / 2\)&nbsp;as both have the same size.<br><br>Therefore over all loops, this would take \(O(|V| \log |V|)\)&nbsp;time, as&nbsp;<i>on average</i>&nbsp;we only take&nbsp;\(O(\log |V|)\)&nbsp;time.<br><i>The graph stays worst case, this is the average of the calls in the worst case.</i> Union takes \(\Theta(\min \{ |ZHK(u)| , |ZHK(v)| \}\). In the worst case, the minimum is \(|V| / 2\)&nbsp;as both have the same size.<br><br>Therefore over all loops, this would take \(O(|V| \log |V|)\)&nbsp;time, as&nbsp;<i>on average</i>&nbsp;we only take&nbsp;\(O(\log |V|)\)&nbsp;time.<br><i>The graph stays worst case, this is the average of the calls in the worst case.</i>
Tags: ETH::1._Semester::A&D::11._Minimum_Spanning_Trees::3._Kruskal's_Algorithm::1._Union_Find
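
For context on the amortised bound in this card, here is a minimal union-by-size sketch in Python (illustrative only, not the lecture's exact implementation; ZHK = Zusammenhangskomponente, i.e. connected component). Each element is relabelled at most \(\log_2 |V|\) times, because its component at least doubles whenever it sits on the smaller side of a union, so all unions together cost \(O(|V| \log |V|)\).

```python
# Union-by-size sketch: union relabels the smaller component,
# costing Theta(min{|ZHK(u)|, |ZHK(v)|}) per call.
class UnionFind:
    def __init__(self, n):
        self.comp = list(range(n))                  # comp[v] = representative of v's component
        self.members = {v: [v] for v in range(n)}   # members of each representative

    def union(self, u, v):
        ru, rv = self.comp[u], self.comp[v]
        if ru == rv:
            return
        if len(self.members[ru]) < len(self.members[rv]):
            ru, rv = rv, ru                         # make ru the larger component
        for w in self.members[rv]:                  # relabel the smaller component
            self.comp[w] = ru
        self.members[ru].extend(self.members.pop(rv))
```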

Note 2: ETH::A&D

Deck: ETH::A&D
Note Type: Horvath Cloze
GUID: |%{-v*KE>
modified

Before

Front

ETH::1._Semester::A&D::07._Graphs::1._Introduction_to_Graphs
The standard notation for \(|V|\) is  \(n\) and for \(|E|\) is \(m\).

Back

ETH::1._Semester::A&D::07._Graphs::1._Introduction_to_Graphs
The standard notation for \(|V|\) is  \(n\) and for \(|E|\) is \(m\).

After

Front

ETH::1._Semester::A&D::07._Graphs::1._Introduction_to_Graphs
The standard notation for \(|V|\) is \(n\) and for \(|E|\) is \(m\).

Back

ETH::1._Semester::A&D::07._Graphs::1._Introduction_to_Graphs
The standard notation for \(|V|\) is \(n\) and for \(|E|\) is \(m\).
Field-by-field Comparison
Field Before After
Text The standard notation for&nbsp;\(|V|\)&nbsp;is {{c1::&nbsp;\(n\)}} and for&nbsp;\(|E|\)&nbsp;is {{c1:: \(m\)}}. The standard notation for&nbsp;\(|V|\)&nbsp;is {{c1::\(n\)}} and for&nbsp;\(|E|\)&nbsp;is {{c1:: \(m\)}}.
Tags: ETH::1._Semester::A&D::07._Graphs::1._Introduction_to_Graphs

Note 3: ETH::DiskMat

Deck: ETH::DiskMat
Note Type: Horvath Cloze
GUID: Oj3Xy8Rn2M
modified

Before

Front

ETH::1._Semester::DiskMat::6._Logic::6._Predicate_Logic_(First-order_Logic)::7._Normal_Forms
Skolem normal form has no existance quantifiers.
It is equisatisfiable (not equivalent!) to the original formula.

Back

ETH::1._Semester::DiskMat::6._Logic::6._Predicate_Logic_(First-order_Logic)::7._Normal_Forms
Skolem normal form has no existance quantifiers.
It is equisatisfiable (not equivalent!) to the original formula.

After

Front

ETH::1._Semester::DiskMat::6._Logic::6._Predicate_Logic_(First-order_Logic)::7._Normal_Forms
Skolem normal form has no existence quantifiers.
It is equisatisfiable (not equivalent!) to the original formula.

Back

ETH::1._Semester::DiskMat::6._Logic::6._Predicate_Logic_(First-order_Logic)::7._Normal_Forms
Skolem normal form has no existence quantifiers.
It is equisatisfiable (not equivalent!) to the original formula.
Field-by-field Comparison
Field Before After
Text Skolem normal form has {{c1::no existance quantifiers}}.<br>It is {{c2::<i>equisatisfiable</i> (not equivalent!)}} to the original formula. Skolem normal form has {{c1::no existence quantifiers}}.<br>It is {{c2::<i>equisatisfiable</i> (not equivalent!)}} to the original formula.
Tags: ETH::1._Semester::DiskMat::6._Logic::6._Predicate_Logic_(First-order_Logic)::7._Normal_Forms
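
A small worked example of the fact on this card (illustrative, not part of the note): Skolemisation replaces each existentially quantified variable by a fresh function of the universally quantified variables to its left, removing the existential quantifier while preserving satisfiability, not equivalence.

```latex
% Skolemisation example: y becomes a fresh Skolem function f(x).
\forall x\, \exists y\; P(x, y)
\quad\text{becomes}\quad
\forall x\; P(x, f(x))
% Equisatisfiable, but not equivalent: the right-hand formula constrains the
% interpretation of the new symbol f, which the left-hand formula never mentions.
```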

Note 4: ETH::DiskMat

Deck: ETH::DiskMat
Note Type: Horvath Cloze
GUID: Tn2Bx6Km4H
modified

Before

Front

ETH::1._Semester::DiskMat::6._Logic::3._Elementary_General_Concepts_in_Logic::6._The_Logical_Operators_∧,_∨,_and_¬
\(F \land F\)  \(\equiv\)  \( F\) and \(F \lor F\)  \(\equiv\)  \( F\).

Back

ETH::1._Semester::DiskMat::6._Logic::3._Elementary_General_Concepts_in_Logic::6._The_Logical_Operators_∧,_∨,_and_¬
\(F \land F\)  \(\equiv\)  \( F\) and \(F \lor F\)  \(\equiv\)  \( F\).

(idempotence)

After

Front

ETH::1._Semester::DiskMat::6._Logic::3._Elementary_General_Concepts_in_Logic::6._The_Logical_Operators_∧,_∨,_and_¬
\(F \land F\)  \(\equiv\)  \( F\) and \(F \lor F\)  \(\equiv\)  \( F\).

Back

ETH::1._Semester::DiskMat::6._Logic::3._Elementary_General_Concepts_in_Logic::6._The_Logical_Operators_∧,_∨,_and_¬
\(F \land F\)  \(\equiv\)  \( F\) and \(F \lor F\)  \(\equiv\)  \( F\).

(idempotence)
Field-by-field Comparison
Field Before After
Text {{c1::\(F \land F\)&nbsp;:: <i>idempotence</i>}}&nbsp;\(\equiv\)&nbsp;{{c2::&nbsp;\( F\)}}&nbsp;and {{c1::\(F \lor F\)&nbsp;:: <i>idempotence</i>}}&nbsp;\(\equiv\)&nbsp;{{c2::&nbsp;\( F\)}}. {{c1::\(F \land F\)&nbsp;::<i>idempotence</i>}}&nbsp;\(\equiv\)&nbsp;{{c2::&nbsp;\( F\)}}&nbsp;and {{c1::\(F \lor F\)&nbsp;::<i>idempotence</i>}}&nbsp;\(\equiv\)&nbsp;{{c2::&nbsp;\( F\)}}.
Tags: ETH::1._Semester::DiskMat::6._Logic::3._Elementary_General_Concepts_in_Logic::6._The_Logical_Operators_∧,_∨,_and_¬
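
A one-line truth-value check of idempotence (illustrative, using Python booleans as stand-ins for propositional truth values):

```python
# F and F is F; F or F is F, for every truth value of F.
for F in (True, False):
    assert (F and F) == F and (F or F) == F
```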

Note 5: ETH::EProg

Deck: ETH::EProg
Note Type: Horvath Cloze
GUID: f2vR,E9IiI
modified

Before

Front

ETH::1._Semester::EProg::2._First_Java_Programs::3._Simple_Calculations::Precedence_Associativity
Unary operators bind stronger than binary ones

Back

ETH::1._Semester::EProg::2._First_Java_Programs::3._Simple_Calculations::Precedence_Associativity
Unary operators bind stronger than binary ones

After

Front

ETH::1._Semester::EProg::2._First_Java_Programs::3._Simple_Calculations::Precedence_Associativity
Unary operators bind stronger than binary ones.

Back

ETH::1._Semester::EProg::2._First_Java_Programs::3._Simple_Calculations::Precedence_Associativity
Unary operators bind stronger than binary ones.
Field-by-field Comparison
Field Before After
Text Unary operators bind {{c1:: stronger}} than {{c2:: binary ones}} Unary operators bind {{c1:: stronger}} than {{c2:: binary ones}}.
Tags: ETH::1._Semester::EProg::2._First_Java_Programs::3._Simple_Calculations::Precedence_Associativity
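
A tiny check of the precedence rule on this card (illustrative; shown in Python, where the same rule holds for the operators used here, with the noted caveat):

```python
# Unary minus binds to its operand before the binary operator applies.
assert -3 + 2 == (-3) + 2 == -1   # parsed as (-3) + 2, not -(3 + 2)
assert -(3 + 2) == -5             # explicit parentheses negate the whole sum
assert -3 * 2 == (-3) * 2 == -6
# Caveat specific to Python: ** binds tighter than unary minus, so -2 ** 2 == -4.
```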

Note 6: ETH::LinAlg

Deck: ETH::LinAlg
Note Type: Horvath Cloze
GUID: B~w={<:u.n
modified

Before

Front

ETH::1._Semester::LinAlg::7._The_determinant::2._The_general_case::1._Properties
For \(A \in \mathbb{R}^{n \times n}\) and \(\lambda \in \mathbb{R}\) we have \(\det(\lambda B) = \lambda^n \det(B) \).

Back

ETH::1._Semester::LinAlg::7._The_determinant::2._The_general_case::1._Properties
For \(A \in \mathbb{R}^{n \times n}\) and \(\lambda \in \mathbb{R}\) we have \(\det(\lambda B) = \lambda^n \det(B) \).

Each row is scaled by \(\lambda\) and by multi-linearity we have to take it out of each one (n times)

After

Front

ETH::1._Semester::LinAlg::7._The_determinant::2._The_general_case::1._Properties
For \(A \in \mathbb{R}^{n \times n}\) and \(\lambda \in \mathbb{R}\) we have \(\det(\lambda A) = \lambda^n \det(A) \).

Back

ETH::1._Semester::LinAlg::7._The_determinant::2._The_general_case::1._Properties
For \(A \in \mathbb{R}^{n \times n}\) and \(\lambda \in \mathbb{R}\) we have \(\det(\lambda A) = \lambda^n \det(A) \).

Each row is scaled by \(\lambda\) and by multi-linearity we have to take it out of each one (n times).
Field-by-field Comparison
Field Before After
Text For \(A \in \mathbb{R}^{n \times n}\) and \(\lambda \in \mathbb{R}\) we have \(\det(\lambda B) = {{c1:: \lambda^n \det(B) }}\). For \(A \in \mathbb{R}^{n \times n}\) and \(\lambda \in \mathbb{R}\) we have \(\det(\lambda A) = {{c1:: \lambda^n \det(A) }}\).
Extra Each row is scaled by&nbsp;\(\lambda\)&nbsp;and by multi-linearity we have to take it out of each one (n times) Each row is scaled by&nbsp;\(\lambda\)&nbsp;and by multi-linearity we have to take it out of each one (n times).
Tags: ETH::1._Semester::LinAlg::7._The_determinant::2._The_general_case::1._Properties
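
A quick numerical sanity check of the identity on this card (illustrative numpy sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
lam = 2.5
# det(lam * A) = lam**n * det(A), with n = 4 here.
assert np.isclose(np.linalg.det(lam * A), lam**4 * np.linalg.det(A))
```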

Note 7: ETH::LinAlg

Deck: ETH::LinAlg
Note Type: Horvath Cloze
GUID: C0VH)T^.1n
modified

Before

Front

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::3._SVD
Using SVD we can decompose every matrix \(A \in \mathbb{R}^{n \times m}\) into \(A =\) \(U \Sigma V^\top\).

Back

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::3._SVD
Using SVD we can decompose every matrix \(A \in \mathbb{R}^{n \times m}\) into \(A =\) \(U \Sigma V^\top\).

After

Front

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::3._SVD
Using SVD we can decompose any matrix \(A \in \mathbb{R}^{n \times m}\) into \(A =\) \(U \Sigma V^\top\).

Back

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::3._SVD
Using SVD we can decompose any matrix \(A \in \mathbb{R}^{n \times m}\) into \(A =\) \(U \Sigma V^\top\).
Field-by-field Comparison
Field Before After
Text Using SVD we can decompose {{c1::every}} matrix&nbsp;\(A \in \mathbb{R}^{n \times m}\)&nbsp;into&nbsp;\(A =\)&nbsp;{{c2::\(U \Sigma V^\top\)}}. Using SVD we can decompose {{c1::any}} matrix&nbsp;\(A \in \mathbb{R}^{n \times m}\)&nbsp;into&nbsp;\(A =\)&nbsp;{{c2::\(U \Sigma V^\top\)}}.
Tags: ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::3._SVD

Note 8: ETH::LinAlg

Deck: ETH::LinAlg
Note Type: Horvath Cloze
GUID: Fdf#%+wdU#
modified

Before

Front

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::3._SVD
In the SVD the diagonal elements of \(\Sigma\), \(\sigma_i = \Sigma_{ii}\) are called the singular values of \(A\) and are ordered as \(\sigma_1 \geq \dots \sigma_{\min\{m, n\}}\).

Back

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::3._SVD
In the SVD the diagonal elements of \(\Sigma\), \(\sigma_i = \Sigma_{ii}\) are called the singular values of \(A\) and are ordered as \(\sigma_1 \geq \dots \sigma_{\min\{m, n\}}\).

After

Front

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::3._SVD
In the SVD the diagonal elements of \(\Sigma\), \(\sigma_i = \Sigma_{ii}\) are called the singular values of \(A\) and are ordered as \(\sigma_1 \geq \dots \sigma_{\min\{m, n\} }\).

Back

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::3._SVD
In the SVD the diagonal elements of \(\Sigma\), \(\sigma_i = \Sigma_{ii}\) are called the singular values of \(A\) and are ordered as \(\sigma_1 \geq \dots \sigma_{\min\{m, n\} }\).
Field-by-field Comparison
Field Before After
Text In the SVD the diagonal elements of \(\Sigma\), \(\sigma_i = \Sigma_{ii}\) are called {{c1::the singular values}} of \(A\) and are {{c1:: ordered as \(\sigma_1 \geq \dots \sigma_{\min\{m, n\}}\)}}. In the SVD the diagonal elements of \(\Sigma\), \(\sigma_i = \Sigma_{ii}\) are called {{c1::the singular values}} of \(A\) and are {{c1:: ordered as \(\sigma_1 \geq \dots \sigma_{\min\{m, n\} }\)}}.
Tags: ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::3._SVD
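
A short numpy sketch checking the two SVD facts above: any real matrix factors as \(U \Sigma V^\top\), and the singular values come back non-negative and in descending order (illustrative only).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
U, s, Vt = np.linalg.svd(A)                  # s holds the singular values
Sigma = np.zeros_like(A)
Sigma[:len(s), :len(s)] = np.diag(s)
assert np.allclose(A, U @ Sigma @ Vt)        # A = U Sigma V^T
assert np.all(s >= 0) and np.all(s[:-1] >= s[1:])   # sigma_1 >= ... >= sigma_min{m,n} >= 0
```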

Note 9: ETH::LinAlg

Deck: ETH::LinAlg
Note Type: Horvath Classic
GUID: G(7.sQ=i_?
modified

Before

Front

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::2._Symmetric_Matrices_and_the_Spectral_Theorem::1._Rayleigh_Quotient
Proof that the Rayleigh Quotient has it's maximum and minimum at the largest/smallest EWs?

Back

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::2._Symmetric_Matrices_and_the_Spectral_Theorem::1._Rayleigh_Quotient
Proof that the Rayleigh Quotient has it's maximum and minimum at the largest/smallest EWs?


Proof: It is easy to see that \(R(v_{\max}) = \lambda_{\max}\) and \(R(v_{\min}) = \lambda_{\min}\). See \(R(v_{\text{max}}) = \frac{v_{\text{max}}^\top A v_{\text{max}}}{v_{\text{max}}^\top v_{\text{max}}} = \frac{v_{\text{max}}^\top (\lambda_{\text{max}} v_{\text{max}})}{v_{\text{max}}^\top v_{\text{max}}} = \lambda_{\text{max}}\).

After

Front

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::2._Symmetric_Matrices_and_the_Spectral_Theorem::1._Rayleigh_Quotient
Proof that the Rayleigh Quotient has it's maximum and minimum at the largest/smallest EWs?

Back

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::2._Symmetric_Matrices_and_the_Spectral_Theorem::1._Rayleigh_Quotient
Proof that the Rayleigh Quotient has it's maximum and minimum at the largest/smallest EWs?

It is easy to see that \(R(v_{\max}) = \lambda_{\max}\) and \(R(v_{\min}) = \lambda_{\min}\). 

See: 
\(R(v_{\text{max}}) = \frac{v_{\text{max}}^\top A v_{\text{max}}}{v_{\text{max}}^\top v_{\text{max}}} = \frac{v_{\text{max}}^\top (\lambda_{\text{max}} v_{\text{max}})}{v_{\text{max}}^\top v_{\text{max}}} = \lambda_{\text{max}}\)
Field-by-field Comparison
Field Before After
Back <br><div><b>Proof:</b>&nbsp;It is easy to see that \(R(v_{\max}) = \lambda_{\max}\) and \(R(v_{\min}) = \lambda_{\min}\). See \(R(v_{\text{max}}) = \frac{v_{\text{max}}^\top A v_{\text{max}}}{v_{\text{max}}^\top v_{\text{max}}} = \frac{v_{\text{max}}^\top (\lambda_{\text{max}} v_{\text{max}})}{v_{\text{max}}^\top v_{\text{max}}} = \lambda_{\text{max}}\).</div> <div>It is easy to see that \(R(v_{\max}) = \lambda_{\max}\) and \(R(v_{\min}) = \lambda_{\min}\).&nbsp;</div><div><br></div><div>See:&nbsp;</div><div>\(R(v_{\text{max}}) = \frac{v_{\text{max}}^\top A v_{\text{max}}}{v_{\text{max}}^\top v_{\text{max}}} = \frac{v_{\text{max}}^\top (\lambda_{\text{max}} v_{\text{max}})}{v_{\text{max}}^\top v_{\text{max}}} = \lambda_{\text{max}}\)</div>
Tags: ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::2._Symmetric_Matrices_and_the_Spectral_Theorem::1._Rayleigh_Quotient
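
A numerical companion to the proof on this card (illustrative sketch): for a symmetric matrix, the Rayleigh quotient evaluated at the extreme eigenvectors returns the extreme eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2                      # symmetric matrix
w, V = np.linalg.eigh(A)               # eigenvalues in ascending order, eigenvectors as columns

def rayleigh(A, v):
    return (v @ A @ v) / (v @ v)

assert np.isclose(rayleigh(A, V[:, -1]), w[-1])   # R(v_max) = lambda_max
assert np.isclose(rayleigh(A, V[:, 0]), w[0])     # R(v_min) = lambda_min
```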

Note 10: ETH::LinAlg

Deck: ETH::LinAlg
Note Type: Horvath Cloze
GUID: K=a-HUOVwC
modified

Before

Front

ETH::1._Semester::LinAlg::2._Matrices::4._Invertible_and_Inverse_matrices::1._Undoing_matrix_transformations
Three equivalent statements:
  1. {{c1::\(T_A : \mathbb{R}^m \rightarrow \mathbb{R}^m\) is bijective.}}
  2. There is an \(m \times m\) matrix \(B\) such that \(BA = I\).
  3. The columns of \(A\) are linearly independent.

Back

ETH::1._Semester::LinAlg::2._Matrices::4._Invertible_and_Inverse_matrices::1._Undoing_matrix_transformations
Three equivalent statements:
  1. {{c1::\(T_A : \mathbb{R}^m \rightarrow \mathbb{R}^m\) is bijective.}}
  2. There is an \(m \times m\) matrix \(B\) such that \(BA = I\).
  3. The columns of \(A\) are linearly independent.

The third one can be derived from the fact that if \(BA = I\), there  is only a single \(x \in \mathbb{R}^m\) such that \(A \textbf{x} = 0\).

It is also intuitively clear that if not all columns were linearly independent, we'd actually have a tall linear transformation and would be losing information.

After

Front

ETH::1._Semester::LinAlg::2._Matrices::4._Invertible_and_Inverse_matrices::1._Undoing_matrix_transformations
Three equivalent statements:
  1. {{c1::\(T_A : \mathbb{R}^m \rightarrow \mathbb{R}^m\) is bijective.::Transformation}}
  2. There is an \(m \times m\) matrix \(B\) such that \(BA = I\).
  3. The columns of \(A\) are linearly independent.

Back

ETH::1._Semester::LinAlg::2._Matrices::4._Invertible_and_Inverse_matrices::1._Undoing_matrix_transformations
Three equivalent statements:
  1. {{c1::\(T_A : \mathbb{R}^m \rightarrow \mathbb{R}^m\) is bijective.::Transformation}}
  2. There is an \(m \times m\) matrix \(B\) such that \(BA = I\).
  3. The columns of \(A\) are linearly independent.

The third one can be derived from the fact that if \(BA = I\), there  is only a single \(x \in \mathbb{R}^m\) such that \(A \textbf{x} = 0\).

It is also intuitively clear that if not all columns were linearly independent, we'd actually have a tall linear transformation and would be losing information.
Field-by-field Comparison
Field Before After
Text Three equivalent statements:<br><ol><li>{{c1::\(T_A : \mathbb{R}^m \rightarrow \mathbb{R}^m\) is bijective.}}</li><li>{{c2::There is an \(m \times m\) matrix&nbsp;\(B\)&nbsp;such that \(BA = I\).}}</li><li>{{c3::The columns of \(A\) are linearly independent.}}</li></ol> Three equivalent statements:<br><ol><li>{{c1::\(T_A : \mathbb{R}^m \rightarrow \mathbb{R}^m\) is bijective.::Transformation}}</li><li>{{c2::There is an \(m \times m\) matrix&nbsp;\(B\)&nbsp;such that \(BA = I\).}}</li><li>{{c3::The columns of \(A\) are linearly independent.}}</li></ol>
Tags: ETH::1._Semester::LinAlg::2._Matrices::4._Invertible_and_Inverse_matrices::1._Undoing_matrix_transformations
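
A small numpy illustration of the equivalences on this card (sketch): a square matrix with linearly independent columns has an inverse \(B\) with \(BA = I\).

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])             # columns are linearly independent (rank 2)
assert np.linalg.matrix_rank(A) == 2
B = np.linalg.inv(A)                   # exists exactly because T_A is bijective
assert np.allclose(B @ A, np.eye(2))   # BA = I
```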

Note 11: ETH::LinAlg

Deck: ETH::LinAlg
Note Type: Horvath Cloze
GUID: My,;4A;?fH
modified

Before

Front

ETH::1._Semester::LinAlg::6._Applications_of_orthogonality_and_projections::1._Least_Squares_Approximation
If the columns of \(A\) are pairwise orthogonal, we get \(A^\top A\) a diagonal matrix which is very easy to invert, i.e. makes Least Square easier.

We can convert any \(A\) to have orthogonal columns by making sure that the sum of all the \(t_k = 0\), which can be achieved by shifting the graph on the x-axis.

Back

ETH::1._Semester::LinAlg::6._Applications_of_orthogonality_and_projections::1._Least_Squares_Approximation
If the columns of \(A\) are pairwise orthogonal, we get \(A^\top A\) a diagonal matrix which is very easy to invert, i.e. makes Least Square easier.

We can convert any \(A\) to have orthogonal columns by making sure that the sum of all the \(t_k = 0\), which can be achieved by shifting the graph on the x-axis.

After

Front

ETH::1._Semester::LinAlg::6._Applications_of_orthogonality_and_projections::1._Least_Squares_Approximation
If the columns of \(A\) are pairwise orthogonal, we get \(A^\top A\) a diagonal matrix which is very easy to invert, i.e. makes Least Squares easier.

We can convert any \(A\) to have orthogonal columns by making sure that the sum of all the \(t_k = 0\), which can be achieved by shifting the graph on the x-axis.

Back

ETH::1._Semester::LinAlg::6._Applications_of_orthogonality_and_projections::1._Least_Squares_Approximation
If the columns of \(A\) are pairwise orthogonal, we get \(A^\top A\) a diagonal matrix which is very easy to invert, i.e. makes Least Squares easier.

We can convert any \(A\) to have orthogonal columns by making sure that the sum of all the \(t_k = 0\), which can be achieved by shifting the graph on the x-axis.
Field-by-field Comparison
Field Before After
Text <div>If the columns of \(A\) are pairwise orthogonal, we get \(A^\top A\) a diagonal matrix which is very easy to invert, i.e. makes Least Square easier.</div><div><br></div><div>We can convert any \(A\) to have orthogonal columns by {{c1:: making sure that the sum of all the \(t_k = 0\), which can be achieved by shifting the graph on the x-axis}}.</div> <div>If the columns of \(A\) are pairwise orthogonal, we get \(A^\top A\) a diagonal matrix which is very easy to invert, i.e. makes Least Squares easier.</div><div><br></div><div>We can convert any \(A\) to have orthogonal columns by {{c1:: making sure that the sum of all the \(t_k = 0\), which can be achieved by shifting the graph on the x-axis}}.</div>
Tags: ETH::1._Semester::LinAlg::6._Applications_of_orthogonality_and_projections::1._Least_Squares_Approximation
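
A small sketch of the centring trick in this card, under the assumption that we are fitting a line \(y \approx a + b t\), so the columns of \(A\) are the all-ones vector and the \(t_k\): after shifting the \(t_k\) so that they sum to zero, \(A^\top A\) is diagonal.

```python
import numpy as np

t = np.array([1.0, 2.0, 3.0, 4.0])
t_centred = t - t.mean()                                   # now t_centred sums to zero
A = np.column_stack([np.ones_like(t_centred), t_centred])  # columns: 1-vector and shifted t_k
AtA = A.T @ A
# The off-diagonal entries equal sum(t_centred) = 0, so A^T A is diagonal.
assert np.allclose(AtA, np.diag(np.diag(AtA)))
```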

Note 12: ETH::LinAlg

Deck: ETH::LinAlg
Note Type: Horvath Cloze
GUID: Oow<}IKdC,
modified

Before

Front

ETH::1._Semester::LinAlg::7._The_determinant::2._The_general_case::1._Properties
Given a matrix \(A \in \mathbb{R}^{n \times n}\), then \[ \det(A) = \det(A^\top) \]

Back

ETH::1._Semester::LinAlg::7._The_determinant::2._The_general_case::1._Properties
Given a matrix \(A \in \mathbb{R}^{n \times n}\), then \[ \det(A) = \det(A^\top) \]

This follows from the fact that the inverse of a permutation has the same sign, and transposing is the same as doing the inverse permutation.

After

Front

ETH::1._Semester::LinAlg::7._The_determinant::2._The_general_case::1._Properties
Given a matrix \(A \in \mathbb{R}^{n \times n}\), then:
 \[ \det(A) = \det(A^\top) \]

Back

ETH::1._Semester::LinAlg::7._The_determinant::2._The_general_case::1._Properties
Given a matrix \(A \in \mathbb{R}^{n \times n}\), then:
 \[ \det(A) = \det(A^\top) \]

This follows from the fact that the inverse of a permutation has the same sign, and transposing is the same as doing the inverse permutation.
Field-by-field Comparison
Field Before After
Text Given a matrix \(A \in \mathbb{R}^{n \times n}\), then \[ {{c1::\det(A)}} = \det(A^\top) \] Given a matrix \(A \in \mathbb{R}^{n \times n}\), then:<br>&nbsp;\[ {{c1::\det(A)}} = \det(A^\top) \]
Tags: ETH::1._Semester::LinAlg::7._The_determinant::2._The_general_case::1._Properties

Note 13: ETH::LinAlg

Deck: ETH::LinAlg
Note Type: Horvath Cloze
GUID: dZ)aTr>2eb
modified

Before

Front

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::1._Diagonalisation
Given a real matrix \(A \in \mathbb{R}^{n \times n}\), the non-zero eigenvalues of \(A^\top A\) are the same ones of \(AA^\top\). Proof Included

Back

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::1._Diagonalisation
Given a real matrix \(A \in \mathbb{R}^{n \times n}\), the non-zero eigenvalues of \(A^\top A\) are the same ones of \(AA^\top\). Proof Included

Shared EWs: For \((A^\top A)v_k = \lambda_k v_k\) we get \(AA^\top A v_k = \lambda_k Av_k\) and thus \(Av_k\) EV and \(\lambda_k\) is an EW of \(AA^\top\).

Orthogonality: For \(j \neq k\) we have \((Av_j)^\top (Av_k) = v_j^\top A^\top Av_k = v_j^\top \lambda_k v_k = \lambda_k v_j^\top v_k = 0\)

After

Front

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::1._Diagonalisation
Given a real matrix \(A \in \mathbb{R}^{n \times n}\), the non-zero eigenvalues of \(A^\top A\) are the same ones as of \(AA^\top\). Proof Included

Back

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::1._Diagonalisation
Given a real matrix \(A \in \mathbb{R}^{n \times n}\), the non-zero eigenvalues of \(A^\top A\) are the same ones as of \(AA^\top\). Proof Included

Shared EWs: For \((A^\top A)v_k = \lambda_k v_k\) we get \(AA^\top A v_k = \lambda_k Av_k\) and thus \(Av_k\) EV and \(\lambda_k\) is an EW of \(AA^\top\).

Orthogonality: For \(j \neq k\) we have \((Av_j)^\top (Av_k) = v_j^\top A^\top Av_k = v_j^\top \lambda_k v_k = \lambda_k v_j^\top v_k = 0\)
Field-by-field Comparison
Field Before After
Text <div>Given a real matrix \(A \in \mathbb{R}^{n \times n}\), the {{c1::non-zero eigenvalues}} of {{c2::\(A^\top A\) are the same ones of \(AA^\top\)}}.&nbsp;<i>Proof Included</i></div> <div>Given a real matrix \(A \in \mathbb{R}^{n \times n}\), the {{c1::non-zero eigenvalues}} of {{c2::\(A^\top A\)}} are the same ones as of {{c2::\(AA^\top\)}}.&nbsp;<i>Proof Included</i></div>
Tags: ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::1._Diagonalisation
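
A quick numerical check of this card's statement (illustrative): the eigenvalues of \(A^\top A\) and \(AA^\top\) coincide.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
ev1 = np.sort(np.linalg.eigvalsh(A.T @ A))   # both products are symmetric, so eigvalsh applies
ev2 = np.sort(np.linalg.eigvalsh(A @ A.T))
assert np.allclose(ev1, ev2)
```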

Note 14: ETH::LinAlg

Deck: ETH::LinAlg
Note Type: Horvath Cloze
GUID: hy_&_By5dp
modified

Before

Front

ETH::1._Semester::LinAlg::7._The_determinant::2._The_general_case::1._Properties
  1. \(\det(A) = \det(A^T)\)
  2. \(\det(I) = 1\)
  3. \(\det(A) = 0\) if linearly dependent columns.
  4. Exchanging two rows flips the sign of the determinant.
  5. Subtracting two rows does not change the \(\det\). (we can use Gauss-Jordan (only row substractions) to simplify calculations…)

Back

ETH::1._Semester::LinAlg::7._The_determinant::2._The_general_case::1._Properties
  1. \(\det(A) = \det(A^T)\)
  2. \(\det(I) = 1\)
  3. \(\det(A) = 0\) if linearly dependent columns.
  4. Exchanging two rows flips the sign of the determinant.
  5. Subtracting two rows does not change the \(\det\). (we can use Gauss-Jordan (only row substractions) to simplify calculations…)

After

Front

ETH::1._Semester::LinAlg::7._The_determinant::2._The_general_case::1._Properties
  1. \(\det(A) = \det(A^T)\)
  2. \(\det(I) = 1\)
  3. \(\det(A) = 0\) if linearly dependent columns.
  4. Exchanging two rows flips the sign of the determinant.
  5. Subtracting two rows does not change the \(\det\). (we can use Gauss-Jordan (only row substractions) to simplify calculations…)

Back

ETH::1._Semester::LinAlg::7._The_determinant::2._The_general_case::1._Properties
  1. \(\det(A) = \det(A^T)\)
  2. \(\det(I) = 1\)
  3. \(\det(A) = 0\) if linearly dependent columns.
  4. Exchanging two rows flips the sign of the determinant.
  5. Subtracting two rows does not change the \(\det\). (we can use Gauss-Jordan (only row substractions) to simplify calculations…)
Field-by-field Comparison
Field Before After
Text <ol> <li>{{c1::\(\det(A) = \det(A^T)\)}}</li><li>\(\det(I) = {{c2::1}}\)</li><li>{{c3::\(\det(A) = 0\) if linearly dependent columns.}}</li><li>{{c4::Exchanging two rows flips the sign of the determinant.::Effect of row exchange?}}</li><li>{{c5::Subtracting two rows does not change the \(\det\). (we can use Gauss-Jordan (only row substractions) to simplify calculations…)::Subtraction}}</li></ol> <ol> <li>{{c1::\(\det(A) = \det(A^T)\)}}</li><li>\(\det(I) = {{c2::1}}\)</li><li>\(\det(A) = 0\) if {{c3::linearly dependent columns.}}</li><li>{{c4::Exchanging two rows flips the sign of the determinant.::Effect of row exchange?}}</li><li>{{c5::Subtracting two rows does not change the \(\det\). (we can use Gauss-Jordan (only row substractions) to simplify calculations…)::Subtraction}}</li></ol>
Tags: ETH::1._Semester::LinAlg::7._The_determinant::2._The_general_case::1._Properties
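
A compact numpy check of properties 1, 2, 4 and 5 from this card (illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
det = np.linalg.det

assert np.isclose(det(A), det(A.T))        # 1. det(A) = det(A^T)
assert np.isclose(det(np.eye(4)), 1.0)     # 2. det(I) = 1
A_swapped = A[[1, 0, 2, 3], :]             # 4. exchanging two rows flips the sign
assert np.isclose(det(A_swapped), -det(A))
A_sub = A.copy()
A_sub[1] -= A_sub[0]                       # 5. subtracting one row from another keeps det
assert np.isclose(det(A_sub), det(A))
```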

Note 15: ETH::LinAlg

Deck: ETH::LinAlg
Note Type: Horvath Cloze
GUID: mP8DeYC`=|
modified

Before

Front

ETH::1._Semester::LinAlg::8._Eigenvalues_and_Eigenvectors::2._Introduction_to_Eigenvalues_and_Eigenvectors
A vector \(v \in \mathbb{R}^n \setminus \{0\}\) is an eigenvector associated with the eigenvalue \(\lambda\) if and only if \(v \in N(A - \lambda I)\).

Back

ETH::1._Semester::LinAlg::8._Eigenvalues_and_Eigenvectors::2._Introduction_to_Eigenvalues_and_Eigenvectors
A vector \(v \in \mathbb{R}^n \setminus \{0\}\) is an eigenvector associated with the eigenvalue \(\lambda\) if and only if \(v \in N(A - \lambda I)\).

After

Front

ETH::1._Semester::LinAlg::8._Eigenvalues_and_Eigenvectors::2._Introduction_to_Eigenvalues_and_Eigenvectors
A vector \(v \in \mathbb{R}^n \setminus \{0\}\) is an eigenvector associated with the eigenvalue \(\lambda\) if and only if \(v \in N(A - \lambda I)\).

Back

ETH::1._Semester::LinAlg::8._Eigenvalues_and_Eigenvectors::2._Introduction_to_Eigenvalues_and_Eigenvectors
A vector \(v \in \mathbb{R}^n \setminus \{0\}\) is an eigenvector associated with the eigenvalue \(\lambda\) if and only if \(v \in N(A - \lambda I)\).
Field-by-field Comparison
Field Before After
Text A vector \(v \in \mathbb{R}^n \setminus \{0\}\) is {{c1::an eigenvector associated with the eigenvalue \(\lambda\)}} if and only if {{c2::\(v \in N(A - \lambda I)\)}}. A vector \(v \in \mathbb{R}^n \setminus \{0\}\) is {{c1::an eigenvector associated with the eigenvalue \(\lambda\)}} if and only if {{c2::\(v \in N(A - \lambda I)\)::subspace}}.
Tags: ETH::1._Semester::LinAlg::8._Eigenvalues_and_Eigenvectors::2._Introduction_to_Eigenvalues_and_Eigenvectors
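
A tiny numerical illustration of this equivalence (sketch): an eigenvector of \(A\) for \(\lambda\) is exactly a non-zero vector that \(A - \lambda I\) sends to zero.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
lam = 3.0
v = np.array([1.0, 1.0])                           # A v = [3, 3] = 3 v
assert np.allclose(A @ v, lam * v)                 # v is an eigenvector for lambda = 3
assert np.allclose((A - lam * np.eye(2)) @ v, 0)   # equivalently, v lies in N(A - lambda I)
```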

Note 16: ETH::LinAlg

Deck: ETH::LinAlg
Note Type: Horvath Classic
GUID: w{ro)4tDv:
modified

Before

Front

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::3._SVD
Pseudoinverse of \(A = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \end{bmatrix}\) (note it's already in the SVD form)?

Back

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::3._SVD
Pseudoinverse of \(A = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \end{bmatrix}\) (note it's already in the SVD form)?

Already in “SVD form” with \(U = I_2\), \(V = I_3\), and \(\Sigma = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \end{pmatrix}\) The pseudoinverse is: \[A^+ = \begin{pmatrix} \frac{1}{3} & 0 \ 0 & \frac{1}{2} \ 0 & 0 \end{pmatrix}\] Notice:
  • Shape flipped: \(A\) is \(2\times3\), so \(A^+\) is \(3\times2\)
  • Nonzero values inverted: \(3 \to \frac{1}{3}\), \(2 \to \frac{1}{2}\) 
  • Zeros stay zero

After

Front

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::3._SVD
Pseudoinverse of \(A = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \end{bmatrix}\)?

Hint: It's already in SVD-form.

Back

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::3._SVD
Pseudoinverse of \(A = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \end{bmatrix}\)?

Hint: It's already in SVD-form.

Already in “SVD form” with \(U = I_2\), \(V = I_3\), and \(\Sigma = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \end{pmatrix}\). 
The pseudoinverse is: \[A^\dagger = \begin{pmatrix} \frac{1}{3} & 0 \\ 0 & \frac{1}{2} \\ 0 & 0 \end{pmatrix}\] Notice:
  • Shape flipped: \(A\) is \(2\times3\), so \(A^\dagger\) is \(3\times2\)
  • Nonzero values inverted: \(3 \to \frac{1}{3}\), \(2 \to \frac{1}{2}\) 
  • Zeros stay zero
Field-by-field Comparison
Field Before After
Front Pseudoinverse of&nbsp;\(A = \begin{bmatrix} 3 &amp; 0 &amp; 0 \\ 0 &amp; 2 &amp; 0 \end{bmatrix}\)&nbsp;(note it's already in the SVD form)? Pseudoinverse of&nbsp;\(A = \begin{bmatrix} 3 &amp; 0 &amp; 0 \\ 0 &amp; 2 &amp; 0 \end{bmatrix}\)?<br><br>Hint: It's already in SVD-form.
Back <div>Already in “SVD form” with \(U = I_2\), \(V = I_3\), and \(\Sigma = \begin{pmatrix} 3 &amp; 0 &amp; 0 \\ 0 &amp; 2 &amp; 0 \end{pmatrix}\) The pseudoinverse is: \[A^+ = \begin{pmatrix} \frac{1}{3} &amp; 0 \ 0 &amp; \frac{1}{2} \ 0 &amp; 0 \end{pmatrix}\] Notice:</div><div><ul><li>Shape flipped: \(A\) is \(2\times3\), so \(A^+\) is \(3\times2\)</li><li>Nonzero values inverted: \(3 \to \frac{1}{3}\), \(2 \to \frac{1}{2}\)&nbsp;</li><li>Zeros stay zero</li></ul></div> <div>Already in “SVD form” with \(U = I_2\), \(V = I_3\), and \(\Sigma = \begin{pmatrix} 3 &amp; 0 &amp; 0 \\ 0 &amp; 2 &amp; 0 \end{pmatrix}\).&nbsp;</div><div>The pseudoinverse is: \[A^\dagger = \begin{pmatrix} \frac{1}{3} &amp; 0 \\ 0 &amp; \frac{1}{2} \\ 0 &amp; 0 \end{pmatrix}\] Notice:</div><div><ul><li>Shape flipped: \(A\) is \(2\times3\), so \(A^\dagger\) is \(3\times2\)</li><li>Nonzero values inverted: \(3 \to \frac{1}{3}\), \(2 \to \frac{1}{2}\)&nbsp;</li><li>Zeros stay zero</li></ul></div>
Tags: ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::3._SVD
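
A quick numpy check of the answer on this card (illustrative):

```python
import numpy as np

A = np.array([[3.0, 0.0, 0.0],
              [0.0, 2.0, 0.0]])
A_pinv = np.linalg.pinv(A)
expected = np.array([[1/3, 0.0],
                     [0.0, 1/2],
                     [0.0, 0.0]])
assert A_pinv.shape == (3, 2)            # shape flips from 2x3 to 3x2
assert np.allclose(A_pinv, expected)     # non-zero values inverted, zeros stay zero
```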

Note 17: ETH::LinAlg

Deck: ETH::LinAlg
Note Type: Horvath Cloze
GUID: xFvw{LdP48
modified

Before

Front

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::3._SVD
In the SVD:
  1. \(\Sigma \in \mathbb{R}^{m \times n}\) is{{c1::a diagonal matrix (in the sense that \(\Sigma_{ij} = 0\) when \(i \neq j\) and the diagonal values are non-negative and ordered in descending order)}}.
  2. \(U^\top U = I\) and \(V^\top V = I\) (\(U, V\) are orthogonal).
  3. The columns \(u_1, \dots, u_m\) of \(U\) are called the left-singular vectors of \(A\) and are orthonormal.
  4. The columns \(v_1, \dots, v_n\) of \(V\) are called the right-singular vectors of \(A\) and are orthonormal.

Back

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::3._SVD
In the SVD:
  1. \(\Sigma \in \mathbb{R}^{m \times n}\) is{{c1::a diagonal matrix (in the sense that \(\Sigma_{ij} = 0\) when \(i \neq j\) and the diagonal values are non-negative and ordered in descending order)}}.
  2. \(U^\top U = I\) and \(V^\top V = I\) (\(U, V\) are orthogonal).
  3. The columns \(u_1, \dots, u_m\) of \(U\) are called the left-singular vectors of \(A\) and are orthonormal.
  4. The columns \(v_1, \dots, v_n\) of \(V\) are called the right-singular vectors of \(A\) and are orthonormal.

After

Front

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::3._SVD
In the SVD:
  1. \(\Sigma \in \mathbb{R}^{m \times n}\) is {{c1::a diagonal matrix (in the sense that \(\Sigma_{ij} = 0\) when \(i \neq j\) and the diagonal values are non-negative and ordered in descending order)}}.
  2. \(U^\top U = I\) and \(V^\top V = I\) (\(U, V\) are orthogonal).
  3. The columns \(u_1, \dots, u_m\) of \(U\) are called the left-singular vectors of \(A\) and are orthonormal.
  4. The columns \(v_1, \dots, v_n\) of \(V\) are called the right-singular vectors of \(A\) and are orthonormal.

Back

ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::3._SVD
In the SVD:
  1. \(\Sigma \in \mathbb{R}^{m \times n}\) is {{c1::a diagonal matrix (in the sense that \(\Sigma_{ij} = 0\) when \(i \neq j\) and the diagonal values are non-negative and ordered in descending order)}}.
  2. \(U^\top U = I\) and \(V^\top V = I\) (\(U, V\) are orthogonal).
  3. The columns \(u_1, \dots, u_m\) of \(U\) are called the left-singular vectors of \(A\) and are orthonormal.
  4. The columns \(v_1, \dots, v_n\) of \(V\) are called the right-singular vectors of \(A\) and are orthonormal.
Field-by-field Comparison
Field Before After
Text In the SVD:<br><ol><li>\(\Sigma \in \mathbb{R}^{m \times n}\) is{{c1::a diagonal matrix (in the sense that \(\Sigma_{ij} = 0\) when \(i \neq j\) and the diagonal values are non-negative and ordered in descending order)}}.</li><li>{{c2::\(U^\top U = I\) and \(V^\top V = I\) (\(U, V\) are orthogonal)::Property of V and U}}.</li><li>The columns \(u_1, \dots, u_m\) of \(U\) are called {{c3::the left-singular vectors of \(A\) and are orthonormal}}.</li><li>The columns \(v_1, \dots, v_n\) of \(V\) are called {{c3::the right-singular vectors of \(A\) and are orthonormal}}.</li></ol> In the SVD:<br><ol><li>\(\Sigma \in \mathbb{R}^{m \times n}\) is {{c1::a diagonal matrix (in the sense that \(\Sigma_{ij} = 0\) when \(i \neq j\) and the diagonal values are non-negative and ordered in descending order)}}.</li><li>{{c2::\(U^\top U = I\) and \(V^\top V = I\) (\(U, V\) are orthogonal)::Property of V and U}}.</li><li>The columns \(u_1, \dots, u_m\) of \(U\) are called {{c3::the left-singular vectors of \(A\) and are orthonormal}}.</li><li>The columns \(v_1, \dots, v_n\) of \(V\) are called {{c3::the right-singular vectors of \(A\) and are orthonormal}}.</li></ol>
Tags: ETH::1._Semester::LinAlg::9._Diagonalisable_Matrices_and_the_SVD::3._SVD