Note 1: ETH::A&D
Deck: ETH::A&D
Note Type: Horvath Cloze
GUID: hv)-{h!@?x
modified
Before
Front
The amortised runtime of union in the Union-Find DS is \(O(|V| \log |V|)\).
Back
The amortised runtime of union in the Union-Find DS is \(O(|V| \log |V|)\).
union takes \(\Theta(\min \{ |ZHK(u)|, |ZHK(v)| \})\). In the worst case, the minimum is \(|V| / 2\) as both have the same size.
Therefore over all loops, this would take \(O(|V| \log |V|)\) time, as on average we only take \(O(\log |V|)\) time.
The graph stays worst case; this is the average of the calls in the worst case.
After
Front
The amortised runtime of union in the Union-Find datastructure is \(O(|V| \log |V|)\).
Back
The amortised runtime of union in the Union-Find datastructure is \(O(|V| \log |V|)\).
Union takes \(\Theta(\min \{ |ZHK(u)|, |ZHK(v)| \})\). In the worst case, the minimum is \(|V| / 2\) as both have the same size.
Therefore over all loops, this would take \(O(|V| \log |V|)\) time, as on average we only take \(O(\log |V|)\) time.
The graph stays worst case; this is the average of the calls in the worst case.
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | The amortised runtime of <b>union</b> in the Union-Find | The amortised runtime of <b>union</b> in the Union-Find datastructure is {{c1:: \(O(|V| \log |V|)\)}}. |
| Extra | Union takes \(\Theta(\min \{ |ZHK(u)|, |ZHK(v)| \})\). In the worst case, the minimum is \(|V| / 2\) as both have the same size.<br><br>Therefore over all loops, this would take \(O(|V| \log |V|)\) time, as <i>on average</i> we only take \(O(\log |V|)\) time.<br><i>The graph stays worst case; this is the average of the calls in the worst case.</i> |
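The amortised argument above can be sanity-checked with a minimal union-by-size sketch (my own Python illustration, not part of the note; ZHK is the note's abbreviation for a connected component). `union` relabels the smaller component, so each element is relabelled at most \(\log_2 |V|\) times, giving \(O(|V| \log |V|)\) over all calls.

```python
# Minimal union-by-size sketch (hypothetical illustration, not the course's code):
# each element stores its representative; union relabels the SMALLER component,
# so every element changes representative at most log2(n) times over all unions.

def make_union_find(n):
    rep = list(range(n))                  # rep[v] = representative of v's component
    members = {v: [v] for v in range(n)}  # members[r] = elements of r's component
    return rep, members

def union(rep, members, u, v):
    ru, rv = rep[u], rep[v]
    if ru == rv:
        return
    if len(members[ru]) < len(members[rv]):   # keep ru the larger side
        ru, rv = rv, ru
    for w in members[rv]:                     # Theta(min{|ZHK(u)|, |ZHK(v)|}) work
        rep[w] = ru
    members[ru].extend(members.pop(rv))

rep, members = make_union_find(4)
union(rep, members, 0, 1)
union(rep, members, 2, 3)
union(rep, members, 0, 2)
assert rep[0] == rep[1] == rep[2] == rep[3]
```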
Note 2: ETH::A&D
Deck: ETH::A&D
Note Type: Horvath Cloze
GUID: |%{-v*KE>
modified
Before
Front
The standard notation for \(|V|\) is \(n\) and for \(|E|\) is \(m\).
Back
The standard notation for \(|V|\) is \(n\) and for \(|E|\) is \(m\).
After
Front
The standard notation for \(|V|\) is \(n\) and for \(|E|\) is \(m\).
Back
The standard notation for \(|V|\) is \(n\) and for \(|E|\) is \(m\).
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | The standard notation for \(|V|\) is {{c1:: | The standard notation for \(|V|\) is {{c1::\(n\)}} and for \(|E|\) is {{c1:: \(m\)}}. |
Note 3: ETH::DiskMat
Deck: ETH::DiskMat
Note Type: Horvath Cloze
GUID: Oj3Xy8Rn2M
modified
Before
Front
Skolem normal form has no existance quantifiers.
It is equisatisfiable (not equivalent!) to the original formula.
Back
Skolem normal form has no existance quantifiers.
It is equisatisfiable (not equivalent!) to the original formula.
After
Front
Skolem normal form has no existence quantifiers.
It is equisatisfiable (not equivalent!) to the original formula.
Back
Skolem normal form has no existence quantifiers.
It is equisatisfiable (not equivalent!) to the original formula.
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | Skolem normal form has {{c1::no exist | Skolem normal form has {{c1::no existence quantifiers}}.<br>It is {{c2::<i>equisatisfiable</i> (not equivalent!)}} to the original formula. |
Note 4: ETH::DiskMat
Deck: ETH::DiskMat
Note Type: Horvath Cloze
GUID: Tn2Bx6Km4H
modified
Before
Front
\(F \land F\) \(\equiv\) \( F\) and \(F \lor F\) \(\equiv\) \( F\).
Back
\(F \land F\) \(\equiv\) \( F\) and \(F \lor F\) \(\equiv\) \( F\).
(idempotence)
After
Front
\(F \land F\) \(\equiv\) \( F\) and \(F \lor F\) \(\equiv\) \( F\).
Back
\(F \land F\) \(\equiv\) \( F\) and \(F \lor F\) \(\equiv\) \( F\).
(idempotence)
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | {{c1::\(F \land F\) :: | {{c1::\(F \land F\) ::<i>idempotence</i>}} \(\equiv\) {{c2:: \( F\)}} and {{c1::\(F \lor F\) ::<i>idempotence</i>}} \(\equiv\) {{c2:: \( F\)}}. |
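The idempotence laws are propositional, so they can be verified exhaustively over both truth values (a trivial Python check, added here for illustration only):

```python
# Exhaustive check of idempotence over the two truth values.
for F in (False, True):
    assert (F and F) == F   # F AND F is equivalent to F
    assert (F or F) == F    # F OR F is equivalent to F
```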
Note 5: ETH::EProg
Deck: ETH::EProg
Note Type: Horvath Cloze
GUID: f2vR,E9IiI
modified
Before
Front
Unary operators bind stronger than binary ones
Back
Unary operators bind stronger than binary ones
After
Front
Unary operators bind stronger than binary ones.
Back
Unary operators bind stronger than binary ones.
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | Unary operators bind {{c1:: stronger}} than {{c2:: binary ones}} | Unary operators bind {{c1:: stronger}} than {{c2:: binary ones}}. |
Note 6: ETH::LinAlg
Deck: ETH::LinAlg
Note Type: Horvath Cloze
GUID: B~w={<:u.n
modified
Before
Front
For \(A \in \mathbb{R}^{n \times n}\) and \(\lambda \in \mathbb{R}\) we have \(\det(\lambda B) = \lambda^n \det(B) \).
Back
For \(A \in \mathbb{R}^{n \times n}\) and \(\lambda \in \mathbb{R}\) we have \(\det(\lambda B) = \lambda^n \det(B) \).
Each row is scaled by \(\lambda\) and by multi-linearity we have to take it out of each one (n times)
After
Front
For \(A \in \mathbb{R}^{n \times n}\) and \(\lambda \in \mathbb{R}\) we have \(\det(\lambda A) = \lambda^n \det(A) \).
Back
For \(A \in \mathbb{R}^{n \times n}\) and \(\lambda \in \mathbb{R}\) we have \(\det(\lambda A) = \lambda^n \det(A) \).
Each row is scaled by \(\lambda\) and by multi-linearity we have to take it out of each one (n times).
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | For \(A \in \mathbb{R}^{n \times n}\) and \(\lambda \in \mathbb{R}\) we have \(\det(\lambda | For \(A \in \mathbb{R}^{n \times n}\) and \(\lambda \in \mathbb{R}\) we have \(\det(\lambda A) = {{c1:: \lambda^n \det(A) }}\). |
| Extra | Each row is scaled by \(\lambda\) and by multi-linearity we have to take it out of each one (n times) | Each row is scaled by \(\lambda\) and by multi-linearity we have to take it out of each one (n times). |
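The scaling rule can be spot-checked on a \(2 \times 2\) example (an illustrative Python snippet, not from the note; `det2` is a hypothetical helper for the cofactor formula):

```python
# Numeric spot-check of det(lam * A) == lam**n * det(A) for n = 2.
def det2(a, b, c, d):
    # determinant of the 2x2 matrix with rows (a b) and (c d)
    return a * d - b * c

A = (1, 2, 3, 4)                        # rows (1 2), (3 4); det = -2
lam = 3.0
scaled = tuple(lam * x for x in A)      # every row picks up one factor of lam
assert det2(*scaled) == lam ** 2 * det2(*A)   # n = 2, so the factor is lam^2
```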
Note 7: ETH::LinAlg
Deck: ETH::LinAlg
Note Type: Horvath Cloze
GUID: C0VH)T^.1n
modified
Before
Front
Using SVD we can decompose every matrix \(A \in \mathbb{R}^{n \times m}\) into \(A =\) \(U \Sigma V^\top\).
Back
Using SVD we can decompose every matrix \(A \in \mathbb{R}^{n \times m}\) into \(A =\) \(U \Sigma V^\top\).
After
Front
Using SVD we can decompose any matrix \(A \in \mathbb{R}^{n \times m}\) into \(A =\) \(U \Sigma V^\top\).
Back
Using SVD we can decompose any matrix \(A \in \mathbb{R}^{n \times m}\) into \(A =\) \(U \Sigma V^\top\).
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | Using SVD we can decompose {{c1:: | Using SVD we can decompose {{c1::any}} matrix \(A \in \mathbb{R}^{n \times m}\) into \(A =\) {{c2::\(U \Sigma V^\top\)}}. |
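The factorisation can be verified numerically (an illustrative sketch assuming NumPy is available; `np.linalg.svd` returns \(U\), the singular values, and \(V^\top\)):

```python
import numpy as np

# Sanity check: a real matrix factors as U @ Sigma @ V^T.
A = np.array([[3.0, 0.0, 0.0],
              [0.0, 2.0, 0.0]])
U, s, Vt = np.linalg.svd(A)              # s holds the singular values
Sigma = np.zeros_like(A)                 # rebuild the rectangular Sigma
Sigma[:len(s), :len(s)] = np.diag(s)
assert np.allclose(U @ Sigma @ Vt, A)    # reconstruction matches A
assert np.allclose(s, [3.0, 2.0])        # singular values, descending
```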
Note 8: ETH::LinAlg
Deck: ETH::LinAlg
Note Type: Horvath Cloze
GUID: Fdf#%+wdU#
modified
Before
Front
In the SVD the diagonal elements of \(\Sigma\), \(\sigma_i = \Sigma_{ii}\) are called the singular values of \(A\) and are ordered as \(\sigma_1 \geq \dots \sigma_{\min\{m, n\}}\).
Back
In the SVD the diagonal elements of \(\Sigma\), \(\sigma_i = \Sigma_{ii}\) are called the singular values of \(A\) and are ordered as \(\sigma_1 \geq \dots \sigma_{\min\{m, n\}}\).
After
Front
In the SVD the diagonal elements of \(\Sigma\), \(\sigma_i = \Sigma_{ii}\) are called the singular values of \(A\) and are {{c1:: ordered as \(\sigma_1 \geq \dots \geq \sigma_{\min\{m, n\}}\)}}.
Back
In the SVD the diagonal elements of \(\Sigma\), \(\sigma_i = \Sigma_{ii}\) are called the singular values of \(A\) and are {{c1:: ordered as \(\sigma_1 \geq \dots \geq \sigma_{\min\{m, n\}}\)}}.
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | In the SVD the diagonal elements of \(\Sigma\), \(\sigma_i = \Sigma_{ii}\) are called {{c1::the singular values}} of \(A\) and are {{c1:: ordered as \(\sigma_1 \geq \dots \sigma_{\min\{m, n\}}\)}}. | In the SVD the diagonal elements of \(\Sigma\), \(\sigma_i = \Sigma_{ii}\) are called {{c1::the singular values}} of \(A\) and are {{c1:: ordered as \(\sigma_1 \geq \dots \geq \sigma_{\min\{m, n\}}\)}}. |
Note 9: ETH::LinAlg
Deck: ETH::LinAlg
Note Type: Horvath Classic
GUID: G(7.sQ=i_?
modified
Before
Front
Proof that the Rayleigh Quotient has its maximum and minimum at the largest/smallest EWs?
Back
Proof that the Rayleigh Quotient has its maximum and minimum at the largest/smallest EWs?
Proof: It is easy to see that \(R(v_{\max}) = \lambda_{\max}\) and \(R(v_{\min}) = \lambda_{\min}\). See \(R(v_{\text{max}}) = \frac{v_{\text{max}}^\top A v_{\text{max}}}{v_{\text{max}}^\top v_{\text{max}}} = \frac{v_{\text{max}}^\top (\lambda_{\text{max}} v_{\text{max}})}{v_{\text{max}}^\top v_{\text{max}}} = \lambda_{\text{max}}\).
After
Front
Proof that the Rayleigh Quotient has its maximum and minimum at the largest/smallest EWs?
Back
Proof that the Rayleigh Quotient has its maximum and minimum at the largest/smallest EWs?
It is easy to see that \(R(v_{\max}) = \lambda_{\max}\) and \(R(v_{\min}) = \lambda_{\min}\).
See:
\(R(v_{\text{max}}) = \frac{v_{\text{max}}^\top A v_{\text{max}}}{v_{\text{max}}^\top v_{\text{max}}} = \frac{v_{\text{max}}^\top (\lambda_{\text{max}} v_{\text{max}})}{v_{\text{max}}^\top v_{\text{max}}} = \lambda_{\text{max}}\)
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Back | <div>It is easy to see that \(R(v_{\max}) = \lambda_{\max}\) and \(R(v_{\min}) = \lambda_{\min}\). </div><div><br></div><div>See: </div><div>\(R(v_{\text{max}}) = \frac{v_{\text{max}}^\top A v_{\text{max}}}{v_{\text{max}}^\top v_{\text{max}}} = \frac{v_{\text{max}}^\top (\lambda_{\text{max}} v_{\text{max}})}{v_{\text{max}}^\top v_{\text{max}}} = \lambda_{\text{max}}\)</div> |
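The computation in the proof can be mirrored numerically (my own NumPy sketch; the diagonal matrix and its eigenvectors are chosen purely for illustration):

```python
import numpy as np

# Check R(v_max) = lambda_max and R(v_min) = lambda_min for a small symmetric A.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

def rayleigh(v):
    # Rayleigh quotient R(v) = (v^T A v) / (v^T v)
    return (v @ A @ v) / (v @ v)

v_max = np.array([1.0, 0.0])   # eigenvector for lambda_max = 2
v_min = np.array([0.0, 1.0])   # eigenvector for lambda_min = 1
assert np.isclose(rayleigh(v_max), 2.0)
assert np.isclose(rayleigh(v_min), 1.0)
```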
Note 10: ETH::LinAlg
Deck: ETH::LinAlg
Note Type: Horvath Cloze
GUID: K=a-HUOVwC
modified
Before
Front
Three equivalent statements:
- {{c1::\(T_A : \mathbb{R}^m \rightarrow \mathbb{R}^m\) is bijective.}}
- There is an \(m \times m\) matrix \(B\) such that \(BA = I\).
- The columns of \(A\) are linearly independent.
Back
Three equivalent statements:
- {{c1::\(T_A : \mathbb{R}^m \rightarrow \mathbb{R}^m\) is bijective.}}
- There is an \(m \times m\) matrix \(B\) such that \(BA = I\).
- The columns of \(A\) are linearly independent.
The third one can be derived from the fact that if \(BA = I\), there is only a single \(x \in \mathbb{R}^m\) such that \(A \textbf{x} = 0\).
It is also intuitively clear that if not all columns were linearly independent, we'd actually have a tall linear transformation and would be losing information.
After
Front
Three equivalent statements:
- {{c1::\(T_A : \mathbb{R}^m \rightarrow \mathbb{R}^m\) is bijective.::Transformation}}
- There is an \(m \times m\) matrix \(B\) such that \(BA = I\).
- The columns of \(A\) are linearly independent.
Back
Three equivalent statements:
- {{c1::\(T_A : \mathbb{R}^m \rightarrow \mathbb{R}^m\) is bijective.::Transformation}}
- There is an \(m \times m\) matrix \(B\) such that \(BA = I\).
- The columns of \(A\) are linearly independent.
The third one can be derived from the fact that if \(BA = I\), there is only a single \(x \in \mathbb{R}^m\) such that \(A \textbf{x} = 0\).
It is also intuitively clear that if not all columns were linearly independent, we'd actually have a tall linear transformation and would be losing information.
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | Three equivalent statements:<br><ol><li>{{c1::\(T_A : \mathbb{R}^m \rightarrow \mathbb{R}^m\) is bijective.}}</li><li>{{c2::There is an \(m \times m\) matrix \(B\) such that \(BA = I\).}}</li><li>{{c3::The columns of \(A\) are linearly independent.}}</li></ol> | Three equivalent statements:<br><ol><li>{{c1::\(T_A : \mathbb{R}^m \rightarrow \mathbb{R}^m\) is bijective.::Transformation}}</li><li>{{c2::There is an \(m \times m\) matrix \(B\) such that \(BA = I\).}}</li><li>{{c3::The columns of \(A\) are linearly independent.}}</li></ol> |
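The equivalence can be spot-checked for one invertible matrix (an illustrative NumPy snippet, not part of the note):

```python
import numpy as np

# For one invertible A: a left inverse B with BA = I exists, det(A) != 0
# (columns linearly independent), and Ax = 0 has only the solution x = 0.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.linalg.inv(A)
assert np.allclose(B @ A, np.eye(2))        # BA = I
assert abs(np.linalg.det(A)) > 1e-12        # columns linearly independent
x = np.linalg.solve(A, np.zeros(2))         # unique solution of Ax = 0
assert np.allclose(x, np.zeros(2))
```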
Note 11: ETH::LinAlg
Deck: ETH::LinAlg
Note Type: Horvath Cloze
GUID: My,;4A;?fH
modified
Before
Front
If the columns of \(A\) are pairwise orthogonal, we get \(A^\top A\) a diagonal matrix which is very easy to invert, i.e. makes Least Square easier.
We can convert any \(A\) to have orthogonal columns by making sure that the sum of all the \(t_k = 0\), which can be achieved by shifting the graph on the x-axis.
Back
If the columns of \(A\) are pairwise orthogonal, we get \(A^\top A\) a diagonal matrix which is very easy to invert, i.e. makes Least Square easier.
We can convert any \(A\) to have orthogonal columns by making sure that the sum of all the \(t_k = 0\), which can be achieved by shifting the graph on the x-axis.
After
Front
If the columns of \(A\) are pairwise orthogonal, we get \(A^\top A\) a diagonal matrix which is very easy to invert, i.e. makes Least Squares easier.
We can convert any \(A\) to have orthogonal columns by making sure that the sum of all the \(t_k = 0\), which can be achieved by shifting the graph on the x-axis.
Back
If the columns of \(A\) are pairwise orthogonal, we get \(A^\top A\) a diagonal matrix which is very easy to invert, i.e. makes Least Squares easier.
We can convert any \(A\) to have orthogonal columns by making sure that the sum of all the \(t_k = 0\), which can be achieved by shifting the graph on the x-axis.
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | <div>If the columns of \(A\) are pairwise orthogonal, we get \(A^\top A\) a diagonal matrix which is very easy to invert, i.e. makes Least Square easier.</div><div><br></div><div>We can convert any \(A\) to have orthogonal columns by {{c1:: making sure that the sum of all the \(t_k = 0\), which can be achieved by shifting the graph on the x-axis}}.</div> | <div>If the columns of \(A\) are pairwise orthogonal, we get \(A^\top A\) a diagonal matrix which is very easy to invert, i.e. makes Least Squares easier.</div><div><br></div><div>We can convert any \(A\) to have orthogonal columns by {{c1:: making sure that the sum of all the \(t_k = 0\), which can be achieved by shifting the graph on the x-axis}}.</div> |
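The centering trick can be checked directly: after shifting the \(t_k\) so they sum to zero, the Gram matrix \(A^\top A\) comes out diagonal (illustrative NumPy sketch; the design matrix \([\mathbf{1}, t]\) is my assumption about the fitting setup):

```python
import numpy as np

# Centering the t_k makes the columns of A = [1, t] orthogonal,
# so A^T A is diagonal and trivial to invert in least squares.
t = np.array([1.0, 2.0, 3.0, 4.0])
t_centered = t - t.mean()                         # now sum of t_k is 0
A = np.column_stack([np.ones_like(t), t_centered])
G = A.T @ A
assert np.allclose(G, np.diag(np.diag(G)))        # off-diagonal entries vanish
```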
Note 12: ETH::LinAlg
Deck: ETH::LinAlg
Note Type: Horvath Cloze
GUID: Oow<}IKdC,
modified
Before
Front
Given a matrix \(A \in \mathbb{R}^{n \times n}\), then \[ \det(A) = \det(A^\top) \]
Back
Given a matrix \(A \in \mathbb{R}^{n \times n}\), then \[ \det(A) = \det(A^\top) \]
This follows from the fact that the inverse of a permutation has the same sign, and transposing is the same as doing the inverse permutation.
After
Front
Given a matrix \(A \in \mathbb{R}^{n \times n}\), then:
\[ \det(A) = \det(A^\top) \]
Back
Given a matrix \(A \in \mathbb{R}^{n \times n}\), then:
\[ \det(A) = \det(A^\top) \]
This follows from the fact that the inverse of a permutation has the same sign, and transposing is the same as doing the inverse permutation.
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | Given a matrix \(A \in \mathbb{R}^{n \times n}\), then | Given a matrix \(A \in \mathbb{R}^{n \times n}\), then:<br> \[ {{c1::\det(A)}} = \det(A^\top) \] |
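The key step in the argument, that a permutation and its inverse have the same sign, can be brute-forced over \(S_3\) (an illustrative Python check; `sign` counts inversions):

```python
from itertools import permutations

# The proof uses that sgn(p) == sgn(p^-1) for every permutation p;
# brute-force check over all of S_3 via inversion counting.
def sign(p):
    inversions = sum(1 for i in range(len(p))
                       for j in range(i + 1, len(p)) if p[i] > p[j])
    return (-1) ** inversions

def inverse(p):
    q = [0] * len(p)
    for i, pi in enumerate(p):
        q[pi] = i          # p sends i to pi, so p^-1 sends pi to i
    return tuple(q)

for p in permutations(range(3)):
    assert sign(p) == sign(inverse(p))
```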
Note 13: ETH::LinAlg
Deck: ETH::LinAlg
Note Type: Horvath Cloze
GUID: dZ)aTr>2eb
modified
Before
Front
Given a real matrix \(A \in \mathbb{R}^{n \times n}\), the non-zero eigenvalues of \(A^\top A\) are the same ones of \(AA^\top\). Proof Included
Back
Given a real matrix \(A \in \mathbb{R}^{n \times n}\), the non-zero eigenvalues of \(A^\top A\) are the same ones of \(AA^\top\). Proof Included
Shared EWs: For \((A^\top A)v_k = \lambda_k v_k\) we get \(AA^\top A v_k = \lambda_k Av_k\) and thus \(Av_k\) EV and \(\lambda_k\) is an EW of \(AA^\top\).
Orthogonality: For \(j \neq k\) we have \((Av_j)^\top (Av_k) = v_j^\top A^\top Av_k = v_j^\top \lambda_k v_k = \lambda_k v_j^\top v_k = 0\)
After
Front
Given a real matrix \(A \in \mathbb{R}^{n \times n}\), the non-zero eigenvalues of \(A^\top A\) are the same ones as of \(AA^\top\). Proof Included
Back
Given a real matrix \(A \in \mathbb{R}^{n \times n}\), the non-zero eigenvalues of \(A^\top A\) are the same ones as of \(AA^\top\). Proof Included
Shared EWs: For \((A^\top A)v_k = \lambda_k v_k\) we get \(AA^\top A v_k = \lambda_k Av_k\) and thus \(Av_k\) EV and \(\lambda_k\) is an EW of \(AA^\top\).
Orthogonality: For \(j \neq k\) we have \((Av_j)^\top (Av_k) = v_j^\top A^\top Av_k = v_j^\top \lambda_k v_k = \lambda_k v_j^\top v_k = 0\)
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | <div>Given a real matrix \(A \in \mathbb{R}^{n \times n}\), the {{c1::non-zero eigenvalues}} of {{c2::\(A^\top A\) are the same ones | <div>Given a real matrix \(A \in \mathbb{R}^{n \times n}\), the {{c1::non-zero eigenvalues}} of {{c2::\(A^\top A\)}} are the same ones as of {{c2::\(AA^\top\)}}. <i>Proof Included</i></div> |
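The shared-eigenvalue claim can be checked numerically (illustrative NumPy sketch, not part of the note):

```python
import numpy as np

# The non-zero eigenvalues of A^T A and A A^T coincide; numeric check
# for an invertible A, where the full spectra agree.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
ev1 = np.sort(np.linalg.eigvalsh(A.T @ A))   # eigvalsh: both products are symmetric
ev2 = np.sort(np.linalg.eigvalsh(A @ A.T))
assert np.allclose(ev1, ev2)
```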
Note 14: ETH::LinAlg
Deck: ETH::LinAlg
Note Type: Horvath Cloze
GUID: hy_&_By5dp
modified
Before
Front
- \(\det(A) = \det(A^T)\)
- \(\det(I) = 1\)
- \(\det(A) = 0\) if linearly dependent columns.
- Exchanging two rows flips the sign of the determinant.
- Subtracting one row from another does not change the \(\det\). (we can use Gauss-Jordan (only row subtractions) to simplify calculations…)
Back
- \(\det(A) = \det(A^T)\)
- \(\det(I) = 1\)
- \(\det(A) = 0\) if linearly dependent columns.
- Exchanging two rows flips the sign of the determinant.
- Subtracting one row from another does not change the \(\det\). (we can use Gauss-Jordan (only row subtractions) to simplify calculations…)
After
Front
- \(\det(A) = \det(A^T)\)
- \(\det(I) = 1\)
- \(\det(A) = 0\) if linearly dependent columns.
- Exchanging two rows flips the sign of the determinant.
- Subtracting one row from another does not change the \(\det\). (we can use Gauss-Jordan (only row subtractions) to simplify calculations…)
Back
- \(\det(A) = \det(A^T)\)
- \(\det(I) = 1\)
- \(\det(A) = 0\) if linearly dependent columns.
- Exchanging two rows flips the sign of the determinant.
- Subtracting one row from another does not change the \(\det\). (we can use Gauss-Jordan (only row subtractions) to simplify calculations…)
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | <ol> <li>{{c1::\(\det(A) = \det(A^T)\)}}</li><li>\(\det(I) = {{c2::1}}\)</li><li> | <ol> <li>{{c1::\(\det(A) = \det(A^T)\)}}</li><li>\(\det(I) = {{c2::1}}\)</li><li>\(\det(A) = 0\) if {{c3::linearly dependent columns.}}</li><li>{{c4::Exchanging two rows flips the sign of the determinant.::Effect of row exchange?}}</li><li>{{c5::Subtracting one row from another does not change the \(\det\). (we can use Gauss-Jordan (only row subtractions) to simplify calculations…)::Subtraction}}</li></ol> |
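Each listed rule can be spot-checked numerically (illustrative NumPy snippet, not part of the note):

```python
import numpy as np

# Quick numeric checks of the listed determinant rules on a 2x2 example.
A = np.array([[2.0, 1.0],
              [5.0, 3.0]])                                # det(A) = 1
assert np.isclose(np.linalg.det(A), np.linalg.det(A.T))   # det(A) = det(A^T)
assert np.isclose(np.linalg.det(np.eye(2)), 1.0)          # det(I) = 1

D = np.array([[1.0, 2.0],
              [2.0, 4.0]])                                # dependent columns
assert np.isclose(np.linalg.det(D), 0.0)

swapped = A[[1, 0], :]                                    # exchange the two rows
assert np.isclose(np.linalg.det(swapped), -np.linalg.det(A))

reduced = A.copy()
reduced[1] -= reduced[0]                                  # subtract row 0 from row 1
assert np.isclose(np.linalg.det(reduced), np.linalg.det(A))
```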
Note 15: ETH::LinAlg
Deck: ETH::LinAlg
Note Type: Horvath Cloze
GUID: mP8DeYC`=|
modified
Before
Front
A vector \(v \in \mathbb{R}^n \setminus \{0\}\) is an eigenvector associated with the eigenvalue \(\lambda\) if and only if \(v \in N(A - \lambda I)\).
Back
A vector \(v \in \mathbb{R}^n \setminus \{0\}\) is an eigenvector associated with the eigenvalue \(\lambda\) if and only if \(v \in N(A - \lambda I)\).
After
Front
A vector \(v \in \mathbb{R}^n \setminus \{0\}\) is an eigenvector associated with the eigenvalue \(\lambda\) if and only if \(v \in N(A - \lambda I)\).
Back
A vector \(v \in \mathbb{R}^n \setminus \{0\}\) is an eigenvector associated with the eigenvalue \(\lambda\) if and only if \(v \in N(A - \lambda I)\).
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | A vector \(v \in \mathbb{R}^n \setminus \{0\}\) is {{c1::an eigenvector associated with the eigenvalue \(\lambda\)}} if and only if {{c2::\(v \in N(A - \lambda I)\)}}. | A vector \(v \in \mathbb{R}^n \setminus \{0\}\) is {{c1::an eigenvector associated with the eigenvalue \(\lambda\)}} if and only if {{c2::\(v \in N(A - \lambda I)\)::subspace}}. |
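The equivalence \(Av = \lambda v \iff (A - \lambda I)v = 0\) can be checked on one instance (illustrative NumPy sketch; matrix and eigenpair chosen for simplicity):

```python
import numpy as np

# v is an eigenvector for lambda iff v lies in N(A - lambda*I).
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
lam = 2.0
v = np.array([1.0, 0.0])
assert np.allclose((A - lam * np.eye(2)) @ v, 0.0)   # v in N(A - lambda I)
assert np.allclose(A @ v, lam * v)                   # equivalently, Av = lambda v
```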
Note 16: ETH::LinAlg
Deck: ETH::LinAlg
Note Type: Horvath Classic
GUID: w{ro)4tDv:
modified
Before
Front
Pseudoinverse of \(A = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \end{bmatrix}\) (note it's already in the SVD form)?
Back
Pseudoinverse of \(A = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \end{bmatrix}\) (note it's already in the SVD form)?
Already in “SVD form” with \(U = I_2\), \(V = I_3\), and \(\Sigma = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \end{pmatrix}\) The pseudoinverse is: \[A^+ = \begin{pmatrix} \frac{1}{3} & 0 \\ 0 & \frac{1}{2} \\ 0 & 0 \end{pmatrix}\] Notice:
- Shape flipped: \(A\) is \(2\times3\), so \(A^+\) is \(3\times2\)
- Nonzero values inverted: \(3 \to \frac{1}{3}\), \(2 \to \frac{1}{2}\)
- Zeros stay zero
After
Front
Pseudoinverse of \(A = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \end{bmatrix}\)?
Hint: It's already in SVD-form.
Back
Pseudoinverse of \(A = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \end{bmatrix}\)?
Hint: It's already in SVD-form.
Already in “SVD form” with \(U = I_2\), \(V = I_3\), and \(\Sigma = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \end{pmatrix}\).
The pseudoinverse is: \[A^\dagger = \begin{pmatrix} \frac{1}{3} & 0 \\ 0 & \frac{1}{2} \\ 0 & 0 \end{pmatrix}\] Notice:
- Shape flipped: \(A\) is \(2\times3\), so \(A^\dagger\) is \(3\times2\)
- Nonzero values inverted: \(3 \to \frac{1}{3}\), \(2 \to \frac{1}{2}\)
- Zeros stay zero
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Front | Pseudoinverse of \(A = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \end{bmatrix}\) | Pseudoinverse of \(A = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \end{bmatrix}\)?<br><br>Hint: It's already in SVD-form. |
| Back | <div>Already in “SVD form” with \(U = I_2\), \(V = I_3\), and \(\Sigma = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \end{pmatrix}\) | <div>Already in “SVD form” with \(U = I_2\), \(V = I_3\), and \(\Sigma = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \end{pmatrix}\). </div><div>The pseudoinverse is: \[A^\dagger = \begin{pmatrix} \frac{1}{3} & 0 \\ 0 & \frac{1}{2} \\ 0 & 0 \end{pmatrix}\] Notice:</div><div><ul><li>Shape flipped: \(A\) is \(2\times3\), so \(A^\dagger\) is \(3\times2\)</li><li>Nonzero values inverted: \(3 \to \frac{1}{3}\), \(2 \to \frac{1}{2}\) </li><li>Zeros stay zero</li></ul></div> |
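NumPy's `pinv` reproduces the hand computation (illustrative check; the matrix matches the note's example, the call to `np.linalg.pinv` is my addition):

```python
import numpy as np

# Pseudoinverse of the note's example matrix: shape flips to 3x2,
# non-zero singular values are inverted, zeros stay zero.
A = np.array([[3.0, 0.0, 0.0],
              [0.0, 2.0, 0.0]])
A_dag = np.linalg.pinv(A)
expected = np.array([[1 / 3, 0.0],
                     [0.0,   0.5],
                     [0.0,   0.0]])
assert np.allclose(A_dag, expected)
```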
Note 17: ETH::LinAlg
Deck: ETH::LinAlg
Note Type: Horvath Cloze
GUID: xFvw{LdP48
modified
Before
Front
In the SVD:
- \(\Sigma \in \mathbb{R}^{m \times n}\) is{{c1::a diagonal matrix (in the sense that \(\Sigma_{ij} = 0\) when \(i \neq j\) and the diagonal values are non-negative and ordered in descending order)}}.
- \(U^\top U = I\) and \(V^\top V = I\) (\(U, V\) are orthogonal).
- The columns \(u_1, \dots, u_m\) of \(U\) are called the left-singular vectors of \(A\) and are orthonormal.
- The columns \(v_1, \dots, v_n\) of \(V\) are called the right-singular vectors of \(A\) and are orthonormal.
Back
In the SVD:
- \(\Sigma \in \mathbb{R}^{m \times n}\) is{{c1::a diagonal matrix (in the sense that \(\Sigma_{ij} = 0\) when \(i \neq j\) and the diagonal values are non-negative and ordered in descending order)}}.
- \(U^\top U = I\) and \(V^\top V = I\) (\(U, V\) are orthogonal).
- The columns \(u_1, \dots, u_m\) of \(U\) are called the left-singular vectors of \(A\) and are orthonormal.
- The columns \(v_1, \dots, v_n\) of \(V\) are called the right-singular vectors of \(A\) and are orthonormal.
After
Front
In the SVD:
- \(\Sigma \in \mathbb{R}^{m \times n}\) is {{c1::a diagonal matrix (in the sense that \(\Sigma_{ij} = 0\) when \(i \neq j\) and the diagonal values are non-negative and ordered in descending order)}}.
- \(U^\top U = I\) and \(V^\top V = I\) (\(U, V\) are orthogonal).
- The columns \(u_1, \dots, u_m\) of \(U\) are called the left-singular vectors of \(A\) and are orthonormal.
- The columns \(v_1, \dots, v_n\) of \(V\) are called the right-singular vectors of \(A\) and are orthonormal.
Back
In the SVD:
- \(\Sigma \in \mathbb{R}^{m \times n}\) is {{c1::a diagonal matrix (in the sense that \(\Sigma_{ij} = 0\) when \(i \neq j\) and the diagonal values are non-negative and ordered in descending order)}}.
- \(U^\top U = I\) and \(V^\top V = I\) (\(U, V\) are orthogonal).
- The columns \(u_1, \dots, u_m\) of \(U\) are called the left-singular vectors of \(A\) and are orthonormal.
- The columns \(v_1, \dots, v_n\) of \(V\) are called the right-singular vectors of \(A\) and are orthonormal.
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | In the SVD:<br><ol><li>\(\Sigma \in \mathbb{R}^{m \times n}\) is{{c1::a diagonal matrix (in the sense that \(\Sigma_{ij} = 0\) when \(i \neq j\) and the diagonal values are non-negative and ordered in descending order)}}.</li><li>{{c2::\(U^\top U = I\) and \(V^\top V = I\) (\(U, V\) are orthogonal)::Property of V and U}}.</li><li>The columns \(u_1, \dots, u_m\) of \(U\) are called {{c3::the left-singular vectors of \(A\) and are orthonormal}}.</li><li>The columns \(v_1, \dots, v_n\) of \(V\) are called {{c3::the right-singular vectors of \(A\) and are orthonormal}}.</li></ol> | In the SVD:<br><ol><li>\(\Sigma \in \mathbb{R}^{m \times n}\) is {{c1::a diagonal matrix (in the sense that \(\Sigma_{ij} = 0\) when \(i \neq j\) and the diagonal values are non-negative and ordered in descending order)}}.</li><li>{{c2::\(U^\top U = I\) and \(V^\top V = I\) (\(U, V\) are orthogonal)::Property of V and U}}.</li><li>The columns \(u_1, \dots, u_m\) of \(U\) are called {{c3::the left-singular vectors of \(A\) and are orthonormal}}.</li><li>The columns \(v_1, \dots, v_n\) of \(V\) are called {{c3::the right-singular vectors of \(A\) and are orthonormal}}.</li></ol> |
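The listed properties can be checked on a small rectangular matrix (illustrative NumPy sketch; the matrix is my choice):

```python
import numpy as np

# Check the listed SVD properties: orthogonal U and V, and ordered
# non-negative singular values.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
U, s, Vt = np.linalg.svd(A)               # full matrices: U is 2x2, Vt is 3x3
assert np.allclose(U.T @ U, np.eye(2))    # U^T U = I (left-singular vectors orthonormal)
assert np.allclose(Vt @ Vt.T, np.eye(3))  # rows of Vt = right-singular vectors, orthonormal
assert np.all(s[:-1] >= s[1:])            # descending order
assert np.all(s >= 0)                     # non-negative
```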