Anki Deck Changes

Commit: ecd6b855 - digga waun i nu an em dash siag daschiaß i wen

Author: lhorva <lhorva@student.ethz.ch>

Date: 2026-04-01T03:13:56+02:00

Changes: 16 note(s) changed (0 added, 16 modified, 0 deleted)

ℹ️ Cosmetic Changes Hidden: 3 note(s) had formatting-only HTML changes and are not shown below

Note 1: ETH::2. Semester::A&W

Deck: ETH::2. Semester::A&W
Note Type: Horvath Cloze
GUID: vcLegjCIrE
modified

Before

Front

basic ETH::2._Semester::A&W::2._Wahrscheinlichkeitstheorie_und_randomisierte_Algorithmen::0._Kombinatorik
Jede Menge \(A\) lässt sich bezüglich eines Ereignisses \(B\) aufteilen:
\[A = (A \cap B) \cup (A \cap B^c),\]
wobei die zwei Teile disjunkt sind. Daher:
\[|A| = |A \cap B| + |A \cap B^c|\]

Back

basic ETH::2._Semester::A&W::2._Wahrscheinlichkeitstheorie_und_randomisierte_Algorithmen::0._Kombinatorik
Jede Menge \(A\) lässt sich bezüglich eines Ereignisses \(B\) aufteilen:
\[A = (A \cap B) \cup (A \cap B^c),\]
wobei die zwei Teile disjunkt sind. Daher:
\[|A| = |A \cap B| + |A \cap B^c|\]

Anwendung: Um \(|A|\) zu zählen, zähle separat „\(A\) und \(B\) tritt ein“ und
„\(A\) und \(B\) tritt nicht ein“.
Analogon: Satz der totalen Wahrscheinlichkeit \(\Pr[A] = \Pr[A|B]\Pr[B] + \Pr[A|B^c]\Pr[B^c]\).

After

Front

basic ETH::2._Semester::A&W::2._Wahrscheinlichkeitstheorie_und_randomisierte_Algorithmen::0._Kombinatorik
Jede Menge \(A\) lässt sich bezüglich eines Ereignisses \(B\) aufteilen:
\[A = (A \cap B) \cup (A \cap B^c),\]wobei die zwei Teile disjunkt sind. Daher:
\[|A| = |A \cap B| + |A \cap B^c|\]

Back

basic ETH::2._Semester::A&W::2._Wahrscheinlichkeitstheorie_und_randomisierte_Algorithmen::0._Kombinatorik
Jede Menge \(A\) lässt sich bezüglich eines Ereignisses \(B\) aufteilen:
\[A = (A \cap B) \cup (A \cap B^c),\]wobei die zwei Teile disjunkt sind. Daher:
\[|A| = |A \cap B| + |A \cap B^c|\]

Anwendung:
Um \(|A|\) zu zählen, zähle separat „\(A\) und \(B\) tritt ein“ und „\(A\) und \(B\) tritt nicht ein“.

Analog:
Satz der totalen Wahrscheinlichkeit \(\Pr[A] = \Pr[A|B]\Pr[B] + \Pr[A|B^c]\Pr[B^c]\).
Field-by-field Comparison
Field Before After
Text Jede Menge \(A\) lässt sich bezüglich eines Ereignisses \(B\) aufteilen:<br>\[A = {{c1::(A \cap B) \cup (A \cap B^c)}},\]<br>wobei die zwei Teile {{c3::disjunkt :: set property}} sind. Daher:<br>\[|A| = {{c2::|A \cap B| + |A \cap B^c|}}\] Jede Menge \(A\) lässt sich bezüglich eines Ereignisses \(B\) aufteilen:<br>\[A = {{c1::(A \cap B) \cup (A \cap B^c)}},\]wobei die zwei Teile {{c3::disjunkt::set property}} sind. Daher:<br>\[|A| = {{c2::|A \cap B| + |A \cap B^c|}}\]
Extra Anwendung: Um \(|A|\) zu zählen, zähle separat „\(A\) und \(B\) tritt ein“ und<br>„\(A\) und \(B\) tritt nicht ein“.<br>Analogon: Satz der totalen Wahrscheinlichkeit \(\Pr[A] = \Pr[A|B]\Pr[B] + \Pr[A|B^c]\Pr[B^c]\). <b>Anwendung:</b> <br>Um \(|A|\) zu zählen, zähle separat „\(A\) und \(B\) tritt ein“ und „\(A\) und \(B\) tritt nicht ein“.<br><br>Analog: <br>Satz der totalen Wahrscheinlichkeit \(\Pr[A] = \Pr[A|B]\Pr[B] + \Pr[A|B^c]\Pr[B^c]\).
Tags: ETH::2._Semester::A&W::2._Wahrscheinlichkeitstheorie_und_randomisierte_Algorithmen::0._Kombinatorik basic
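The total-probability analogue named in this note's Extra field can be checked on a small worked instance (the numbers below are illustrative, not from the deck):

```latex
% Worked instance of the total-probability analogue (illustrative numbers):
\[\Pr[A \mid B] = \tfrac{1}{2}, \qquad \Pr[A \mid B^c] = \tfrac{1}{4}, \qquad \Pr[B] = \tfrac{1}{3}\]
\[\Pr[A] = \Pr[A \mid B]\Pr[B] + \Pr[A \mid B^c]\Pr[B^c]
         = \tfrac{1}{2}\cdot\tfrac{1}{3} + \tfrac{1}{4}\cdot\tfrac{2}{3}
         = \tfrac{1}{6} + \tfrac{1}{6} = \tfrac{1}{3}\]
```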

Note 2: ETH::2. Semester::A&W

Deck: ETH::2. Semester::A&W
Note Type: Horvath Cloze
GUID: y9hMubH@3j
modified

Before

Front

basic ETH::2._Semester::A&W::2._Wahrscheinlichkeitstheorie_und_randomisierte_Algorithmen::0._Kombinatorik
Der Binomialkoeffizient \(\binom{n}{k}\) zählt die Anzahl der
\(k\)-elementigen Teilmengen einer \(n\)-elementigen Menge:
\[\binom{n}{k} = {{c2::\frac{n!}{k!\,(n-k)!} }}\]

Back

basic ETH::2._Semester::A&W::2._Wahrscheinlichkeitstheorie_und_randomisierte_Algorithmen::0._Kombinatorik
Der Binomialkoeffizient \(\binom{n}{k}\) zählt die Anzahl der
\(k\)-elementigen Teilmengen einer \(n\)-elementigen Menge:
\[\binom{n}{k} = {{c2::\frac{n!}{k!\,(n-k)!} }}\]


Beziehung zu geordneter Auswahl: \(\binom{n}{k} = \frac{P(n,k)}{k!}\)
(durch \(k!\) teilen, weil Reihenfolge egal).

Definiert für \(0 \leq k \leq n\), sonst \(0\).

After

Front

basic ETH::2._Semester::A&W::2._Wahrscheinlichkeitstheorie_und_randomisierte_Algorithmen::0._Kombinatorik
Der Binomialkoeffizient \(\binom{n}{k}\) zählt die Anzahl der \(k\)-elementigen Teilmengen einer \(n\)-elementigen Menge:
\[\binom{n}{k} = {{c2::\frac{n!}{k!\,(n-k)!} }}\]

Back

basic ETH::2._Semester::A&W::2._Wahrscheinlichkeitstheorie_und_randomisierte_Algorithmen::0._Kombinatorik
Der Binomialkoeffizient \(\binom{n}{k}\) zählt die Anzahl der \(k\)-elementigen Teilmengen einer \(n\)-elementigen Menge:
\[\binom{n}{k} = {{c2::\frac{n!}{k!\,(n-k)!} }}\]

Beziehung zu geordneter Auswahl: \(\binom{n}{k} = \frac{P(n,k)}{k!}\)
(durch \(k!\) teilen, weil Reihenfolge egal).

Definiert für \(0 \leq k \leq n\), sonst \(0\).
Field-by-field Comparison
Field Before After
Text Der {{c1::Binomialkoeffizient}}&nbsp;\(\binom{n}{k}\) zählt die Anzahl der<br>\(k\)-elementigen Teilmengen einer \(n\)-elementigen Menge:<br>\[\binom{n}{k} = {{c2::\frac{n!}{k!\,(n-k)!} }}\]<br><br> Der {{c1::Binomialkoeffizient}}&nbsp;\(\binom{n}{k}\) zählt die Anzahl der&nbsp;\(k\)-elementigen Teilmengen einer \(n\)-elementigen Menge:<br>\[\binom{n}{k} = {{c2::\frac{n!}{k!\,(n-k)!} }}\]
Tags: ETH::2._Semester::A&W::2._Wahrscheinlichkeitstheorie_und_randomisierte_Algorithmen::0._Kombinatorik basic
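The formula on this card can be sketched as code; a minimal iterative version following the card's convention (defined for \(0 \leq k \leq n\), otherwise \(0\)). The class name `Binomial` is ours, not from the deck.

```java
// Sketch: C(n, k) via the multiplicative form, which avoids the overflow of
// computing n! directly. Each intermediate value is itself a binomial
// coefficient C(n-k+i, i), so the division is exact at every step.
public class Binomial {
    static long binom(int n, int k) {
        if (k < 0 || k > n) return 0;           // outside 0 <= k <= n: 0
        k = Math.min(k, n - k);                 // symmetry: C(n,k) = C(n,n-k)
        long result = 1;
        for (int i = 1; i <= k; i++) {
            result = result * (n - k + i) / i;  // exact integer division
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(binom(5, 2));  // 10
    }
}
```

The relation to ordered selection on the card, \(\binom{n}{k} = P(n,k)/k!\), is visible in the loop: the numerator factors build \(P(n,k)\) while each `/ i` divides out \(k!\).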

Note 3: ETH::2. Semester::Analysis

Deck: ETH::2. Semester::Analysis
Note Type: Horvath Cloze
GUID: a96Uw3jLN`
modified

Before

Front

ETH::2._Semester::Analysis::3._Reihen::4._Umordnung
Umordnungssatz für absolut konvergente Reihen (Dirichlet): Sei \(\sum a_n\) {{c1:absolut konvergent und \(\phi: \mathbb{N}_0 \to \mathbb{N}_0\) eine Bijektion}}.

Dann {{c2::konvergiert \(\sum a_{\phi(n)}\) ebenfalls absolut und:
\[\sum_{n=0}^\infty a_n = \sum_{n=0}^\infty a_{\phi(n)}\]}}

Back

ETH::2._Semester::Analysis::3._Reihen::4._Umordnung
Umordnungssatz für absolut konvergente Reihen (Dirichlet): Sei \(\sum a_n\) {{c1:absolut konvergent und \(\phi: \mathbb{N}_0 \to \mathbb{N}_0\) eine Bijektion}}.

Dann {{c2::konvergiert \(\sum a_{\phi(n)}\) ebenfalls absolut und:
\[\sum_{n=0}^\infty a_n = \sum_{n=0}^\infty a_{\phi(n)}\]}}

Merke: Bei absolut konvergenten Reihen darf man frei umordnen.

After

Front

ETH::2._Semester::Analysis::3._Reihen::4._Umordnung
Sei \(\sum a_n\) {{c1::absolut konvergent und \(\phi: \mathbb{N}_0 \to \mathbb{N}_0\) eine Bijektion}}.

Dann {{c2::konvergiert \(\sum a_{\phi(n)}\) ebenfalls absolut und:
\[\sum_{n=0}^\infty a_n = \sum_{n=0}^\infty a_{\phi(n)}\]}}

Back

ETH::2._Semester::Analysis::3._Reihen::4._Umordnung
Sei \(\sum a_n\) {{c1::absolut konvergent und \(\phi: \mathbb{N}_0 \to \mathbb{N}_0\) eine Bijektion}}.

Dann {{c2::konvergiert \(\sum a_{\phi(n)}\) ebenfalls absolut und:
\[\sum_{n=0}^\infty a_n = \sum_{n=0}^\infty a_{\phi(n)}\]}}

Umordnungssatz für absolut konvergente Reihen (Dirichlet)

Merke:
Bei absolut konvergenten Reihen darf man frei umordnen.
Field-by-field Comparison
Field Before After
Text Umordnungssatz für absolut konvergente Reihen (<b>Dirichlet</b>): Sei&nbsp;\(\sum a_n\)&nbsp;{{c1:<b>absolut konvergent</b>&nbsp;und&nbsp;\(\phi: \mathbb{N}_0 \to \mathbb{N}_0\)&nbsp;eine Bijektion}}.<br><br>Dann {{c2::konvergiert&nbsp;\(\sum a_{\phi(n)}\)&nbsp;ebenfalls&nbsp;<b>absolut</b>&nbsp;und:<br>\[\sum_{n=0}^\infty a_n = \sum_{n=0}^\infty a_{\phi(n)}\]}}<br> Sei&nbsp;\(\sum a_n\)&nbsp;{{c1::<b>absolut konvergent</b>&nbsp;und&nbsp;\(\phi: \mathbb{N}_0 \to \mathbb{N}_0\)&nbsp;eine Bijektion}}.<br><br>Dann {{c2::konvergiert&nbsp;\(\sum a_{\phi(n)}\)&nbsp;ebenfalls&nbsp;<b>absolut</b>&nbsp;und:<br>\[\sum_{n=0}^\infty a_n = \sum_{n=0}^\infty a_{\phi(n)}\]}}
Extra <b>Merke:</b> Bei absolut konvergenten Reihen darf man frei umordnen. Umordnungssatz für absolut konvergente Reihen (<b>Dirichlet</b>)<b><br><br>Merke:</b> Bei absolut konvergenten Reihen darf man frei umordnen.
Tags: ETH::2._Semester::Analysis::3._Reihen::4._Umordnung

Note 4: ETH::2. Semester::Analysis

Deck: ETH::2. Semester::Analysis
Note Type: Horvath Cloze
GUID: g2t@/gu5sl
modified

Before

Front

ETH::2._Semester::Analysis::3._Reihen::2._Standard_Reihen
Teleskopreihe: Sei \(a_k = b_k - b_{k-1}\). Dann:
\[\sum_{k=1}^n a_k = b_n - b_0\]

Back

ETH::2._Semester::Analysis::3._Reihen::2._Standard_Reihen
Teleskopreihe: Sei \(a_k = b_k - b_{k-1}\). Dann:
\[\sum_{k=1}^n a_k = b_n - b_0\]


Beispiel: \(\displaystyle\sum_{k=1}^\infty \frac{1}{k(k+1)} = \sum_{k=1}^\infty \left(\frac{1}{k} - \frac{1}{k+1}\right) = 1\)

After

Front

ETH::2._Semester::Analysis::3._Reihen::2._Standard_Reihen
Sei \(a_k = b_k - b_{k-1}\). Dann:
\[\sum_{k=1}^n a_k = b_n - b_0\]

Back

ETH::2._Semester::Analysis::3._Reihen::2._Standard_Reihen
Sei \(a_k = b_k - b_{k-1}\). Dann:
\[\sum_{k=1}^n a_k = b_n - b_0\]

(Teleskopreihe)

Beispiel:
\(\displaystyle\sum_{k=1}^\infty \frac{1}{k(k+1)} = \sum_{k=1}^\infty \left(\frac{1}{k} - \frac{1}{k+1}\right) = 1\)
Field-by-field Comparison
Field Before After
Text Teleskopreihe: Sei \(a_k = b_k - b_{k-1}\). Dann:<br>\[\sum_{k=1}^n a_k = {{c1::b_n - b_0}}\]<br> Sei \(a_k = b_k - b_{k-1}\). Dann:<br>\[\sum_{k=1}^n a_k = {{c1::b_n - b_0}}\]
Extra <br><b>Beispiel:</b>&nbsp;\(\displaystyle\sum_{k=1}^\infty \frac{1}{k(k+1)} = \sum_{k=1}^\infty \left(\frac{1}{k} - \frac{1}{k+1}\right) = 1\) (Teleskopreihe)<br><br><b>Beispiel:<br></b>\(\displaystyle\sum_{k=1}^\infty \frac{1}{k(k+1)} = \sum_{k=1}^\infty \left(\frac{1}{k} - \frac{1}{k+1}\right) = 1\)
Tags: ETH::2._Semester::Analysis::3._Reihen::2._Standard_Reihen
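The telescoping identity and the card's example can be checked numerically; a small sketch (class name `Telescope` is ours). With \(b_k = -\frac{1}{k+1}\) we get \(a_k = b_k - b_{k-1} = \frac{1}{k(k+1)}\), so the partial sum is \(b_n - b_0 = 1 - \frac{1}{n+1}\).

```java
// Sketch: partial sums of the card's example series 1/(k(k+1)).
// By telescoping, sum_{k=1}^{n} a_k = b_n - b_0 = n/(n+1), which tends to 1.
public class Telescope {
    static double partialSum(int n) {
        double sum = 0;
        for (int k = 1; k <= n; k++) {
            sum += 1.0 / (k * (double) (k + 1));
        }
        return sum;
    }

    public static void main(String[] args) {
        int n = 1000;
        System.out.println(partialSum(n));         // numerically ~ 0.999001
        System.out.println(n / (double) (n + 1));  // closed form n/(n+1)
    }
}
```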

Note 5: ETH::2. Semester::Analysis

Deck: ETH::2. Semester::Analysis
Note Type: Horvath Cloze
GUID: kHI,;p9S^}
modified

Before

Front

ETH::2._Semester::Analysis::3._Reihen::3._Konvergenzkriterien
Wurzelkriterium:
  • wenn \((c_k)^{1/k}\) nicht beschränkt ist setzen wir \(\rho = 0\) - also konvergiert die Reihe nur für \(z = 0\) 
  • wenn \((c_k)^{1/k}\) beschränkt ist und \(\limsup (c_k)^{1/k} = 0\) setzen wir \(\rho = \infty\) - die Reihe konvergiert für alle \(z\)

Back

ETH::2._Semester::Analysis::3._Reihen::3._Konvergenzkriterien
Wurzelkriterium:
  • wenn \((c_k)^{1/k}\) nicht beschränkt ist setzen wir \(\rho = 0\) - also konvergiert die Reihe nur für \(z = 0\) 
  • wenn \((c_k)^{1/k}\) beschränkt ist und \(\limsup (c_k)^{1/k} = 0\) setzen wir \(\rho = \infty\) - die Reihe konvergiert für alle \(z\)

After

Front

ETH::2._Semester::Analysis::3._Reihen::3._Konvergenzkriterien
Wurzelkriterium:
  • wenn \((c_k)^{1/k}\) nicht beschränkt ist, setzen wir \(\rho = 0\) - also konvergiert die Reihe nur für \(z = 0\) 
  • wenn \((c_k)^{1/k}\) beschränkt ist und \(\limsup (c_k)^{1/k} = 0\), setzen wir \(\rho = \infty\) - die Reihe konvergiert für alle \(z\)

Back

ETH::2._Semester::Analysis::3._Reihen::3._Konvergenzkriterien
Wurzelkriterium:
  • wenn \((c_k)^{1/k}\) nicht beschränkt ist, setzen wir \(\rho = 0\) - also konvergiert die Reihe nur für \(z = 0\) 
  • wenn \((c_k)^{1/k}\) beschränkt ist und \(\limsup (c_k)^{1/k} = 0\), setzen wir \(\rho = \infty\) - die Reihe konvergiert für alle \(z\)
Field-by-field Comparison
Field Before After
Text <b>Wurzelkriterium</b>:<br><ul><li>wenn \((c_k)^{1/k}\) nicht beschränkt ist setzen wir {{c1:: \(\rho = 0\) - also konvergiert die Reihe nur für \(z = 0\)}}&nbsp;</li><li>wenn \((c_k)^{1/k}\) beschränkt ist und \(\limsup (c_k)^{1/k} = 0\) setzen wir {{c2::\(\rho = \infty\) - die Reihe konvergiert für alle \(z\)}}</li></ul> <b>Wurzelkriterium</b>:<br><ul><li>wenn \((c_k)^{1/k}\) nicht beschränkt ist, setzen wir \(\rho = {{c1::0}}\){{c1::&nbsp;- also konvergiert die Reihe nur für \(z = 0\)}}&nbsp;</li><li>wenn \((c_k)^{1/k}\) beschränkt ist und \(\limsup (c_k)^{1/k} = 0\), setzen wir \(\rho ={{c1:: \infty}}\){{c1::&nbsp;- die Reihe konvergiert für alle \(z\)}}</li></ul>
Tags: ETH::2._Semester::Analysis::3._Reihen::3._Konvergenzkriterien
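The two bullet cases on this card are the degenerate ends of the general Cauchy–Hadamard formula for the radius of convergence of a power series \(\sum_k c_k z^k\), stated here for context:

```latex
% Cauchy–Hadamard formula for the radius of convergence of \sum_k c_k z^k:
\[\rho = \frac{1}{\limsup_{k \to \infty} |c_k|^{1/k}},
  \qquad \text{with the conventions } \tfrac{1}{\infty} := 0,\ \tfrac{1}{0} := \infty.\]
% Unbounded (|c_k|^{1/k})  =>  limsup = infinity  =>  rho = 0  (converges only at z = 0)
% limsup = 0               =>  rho = infinity     (converges for all z)
```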

Note 6: ETH::2. Semester::Analysis

Deck: ETH::2. Semester::Analysis
Note Type: Horvath Cloze
GUID: sRcr3UsY+a
modified

Before

Front

ETH::2._Semester::Analysis::2._Folgen::2._Konvergenz::2._Properties
Jede konvergente Folge ist beschränkt (Weierstrass).

Back

ETH::2._Semester::Analysis::2._Folgen::2._Konvergenz::2._Properties
Jede konvergente Folge ist beschränkt (Weierstrass).

Beachte: Die Umkehrung gilt nicht — eine beschränkte Folge muss nicht konvergieren (z.B. \((-1)^n\)).

After

Front

ETH::2._Semester::Analysis::2._Folgen::2._Konvergenz::2._Properties
Jede konvergente Folge ist beschränkt.

Back

ETH::2._Semester::Analysis::2._Folgen::2._Konvergenz::2._Properties
Jede konvergente Folge ist beschränkt.

(Weierstrass)

Beachte:
 
Die Umkehrung gilt nicht - eine beschränkte Folge muss nicht konvergieren (z.B. \((-1)^n\)).
Field-by-field Comparison
Field Before After
Text Jede <b>konvergente</b> Folge ist {{c1::beschränkt (Weierstrass)}}. Jede <b>konvergente</b> Folge ist {{c1::beschränkt}}.
Extra <b>Beachte:</b>&nbsp;Die Umkehrung gilt nicht eine beschränkte Folge muss nicht konvergieren (z.B.&nbsp;\((-1)^n\)). (Weierstrass)<b><br><br>Beachte:</b>&nbsp;<br>Die Umkehrung gilt nicht - eine beschränkte Folge muss nicht konvergieren (z.B.&nbsp;\((-1)^n\)).
Tags: ETH::2._Semester::Analysis::2._Folgen::2._Konvergenz::2._Properties

Note 7: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Classic
GUID: 3fV+]~d+w}
modified

Before

Front

ETH::2._Semester::PProg::10._Parallel_Algorithms::3._Parallel_algorithms
What are the work and span of parallel merge sort, and what limits its parallelism?

Back

ETH::2._Semester::PProg::10._Parallel_Algorithms::3._Parallel_algorithms
What are the work and span of parallel merge sort, and what limits its parallelism?

  • Work: \(O(n \log n)\) — same as sequential
  • Span: \(O(\log^2 n)\) — recursive calls parallelized (O(log n) depth) but merge step takes O(log n) span via parallel merge
  • Parallelism: \(O(n/\log n)\)
Bottleneck: the merge step. A sequential merge gives O(n) span, limiting parallelism to O(log n).

After

Front

ETH::2._Semester::PProg::10._Parallel_Algorithms::3._Parallel_algorithms
What are the work and span of parallel merge sort, and what limits its parallelism?

Back

ETH::2._Semester::PProg::10._Parallel_Algorithms::3._Parallel_algorithms
What are the work and span of parallel merge sort, and what limits its parallelism?

  • Work: \(O(n \log n)\) - same as sequential
  • Span: \(O(\log^2 n)\) - recursive calls parallelized (O(log n) depth) but merge step takes O(log n) span via parallel merge
  • Parallelism: \(O(n/\log n)\)
Bottleneck: The merge step.
A sequential merge gives \(O(n)\) span, limiting parallelism to \(O(\log n)\).
Field-by-field Comparison
Field Before After
Back <ul><li><b>Work</b>: \(O(n \log n)\) same as sequential</li><li><b>Span</b>: \(O(\log^2 n)\) recursive calls parallelized (O(log n) depth) but merge step takes O(log n) span via parallel merge</li><li><b>Parallelism</b>: \(O(n/\log n)\)</li></ul>Bottleneck: the merge step. A sequential merge gives O(n) span, limiting parallelism to O(log n). <ul><li><b>Work</b>: \(O(n \log n)\)&nbsp;- same as sequential</li><li><b>Span</b>: \(O(\log^2 n)\)&nbsp;- recursive calls parallelized (O(log n) depth) but merge step takes O(log n) span via parallel merge</li><li><b>Parallelism</b>: \(O(n/\log n)\)</li></ul>Bottleneck: The merge step. <br>A sequential merge gives&nbsp;\(O(n)\)&nbsp;span, limiting parallelism to&nbsp;\(O(\log n)\).
Tags: ETH::2._Semester::PProg::10._Parallel_Algorithms::3._Parallel_algorithms
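The bottleneck described on this card can be made concrete with a minimal Fork/Join merge sort sketch (class and constant names are ours). It deliberately uses a *sequential* merge: the recursive halves run in parallel, but each merge contributes \(O(n)\) span, which is exactly the limitation the card names; a parallel merge (binary-search split) would bring the span down to the quoted \(O(\log^2 n)\).

```java
import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

// Sketch: Fork/Join merge sort with a sequential merge step.
public class MergeSortTask extends RecursiveAction {
    static final int CUTOFF = 1024;   // below this size, sort sequentially
    final int[] a, tmp;
    final int lo, hi;                 // sorts the half-open range a[lo, hi)

    MergeSortTask(int[] a, int[] tmp, int lo, int hi) {
        this.a = a; this.tmp = tmp; this.lo = lo; this.hi = hi;
    }

    @Override protected void compute() {
        if (hi - lo <= CUTOFF) { Arrays.sort(a, lo, hi); return; }
        int mid = (lo + hi) >>> 1;
        invokeAll(new MergeSortTask(a, tmp, lo, mid),
                  new MergeSortTask(a, tmp, mid, hi));   // halves in parallel
        merge(mid);
    }

    // Sequential merge: O(n) work AND O(n) span -- the card's bottleneck.
    void merge(int mid) {
        int i = lo, j = mid, k = lo;
        while (i < mid && j < hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i < mid) tmp[k++] = a[i++];
        while (j < hi)  tmp[k++] = a[j++];
        System.arraycopy(tmp, lo, a, lo, hi - lo);
    }

    static void sort(int[] a) {
        ForkJoinPool.commonPool()
                    .invoke(new MergeSortTask(a, new int[a.length], 0, a.length));
    }
}
```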

Note 8: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Classic
GUID: @G4j5S8izo
modified

Before

Front

ETH::2._Semester::PProg::06._Parallel_Architecture::3._Pipelining
How can an unbalanced pipeline be fixed, and what is the caveat?

Back

ETH::2._Semester::PProg::06._Parallel_Architecture::3._Pipelining
How can an unbalanced pipeline be fixed, and what is the caveat?

Split the bottleneck (slowest) stage into two or more sub-stages of equal duration. Caveat: too much splitting is impractical — it introduces enormous switching/management overhead.

After

Front

ETH::2._Semester::PProg::06._Parallel_Architecture::3._Pipelining
How can an unbalanced pipeline be fixed, and what is the caveat?

Back

ETH::2._Semester::PProg::06._Parallel_Architecture::3._Pipelining
How can an unbalanced pipeline be fixed, and what is the caveat?

Split the bottleneck (slowest) stage into two or more sub-stages of equal duration.

However, too much splitting is impractical - it introduces enormous switching/management overhead.
Field-by-field Comparison
Field Before After
Back Split the bottleneck (slowest) stage into two or more sub-stages of equal duration. Caveat: too much splitting is impractical it introduces enormous switching/management overhead. Split the bottleneck (slowest) stage into two or more sub-stages of equal duration. <br><br>However,&nbsp;too much splitting is impractical - it introduces enormous switching/management overhead.
Tags: ETH::2._Semester::PProg::06._Parallel_Architecture::3._Pipelining

Note 9: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: @r1TbI1^$~
modified

Before

Front

ETH::2._Semester::PProg::10._Parallel_Algorithms::1._Task_graphs_and_performance_measures
The span \(T_\infty\) in a DAG corresponds to the serial fraction f in Amdahl's Law: the longest chain of sequential dependencies that no amount of additional parallelism can overcome.

Back

ETH::2._Semester::PProg::10._Parallel_Algorithms::1._Task_graphs_and_performance_measures
The span \(T_\infty\) in a DAG corresponds to the serial fraction f in Amdahl's Law: the longest chain of sequential dependencies that no amount of additional parallelism can overcome.

Designing parallel algorithms means decreasing span without increasing work too much — directly equivalent to reducing f in Amdahl's Law.

After

Front

ETH::2._Semester::PProg::10._Parallel_Algorithms::1._Task_graphs_and_performance_measures
The span \(T_\infty\) in a DAG corresponds to the serial fraction \(f\) in Amdahl's Law.

Back

ETH::2._Semester::PProg::10._Parallel_Algorithms::1._Task_graphs_and_performance_measures
The span \(T_\infty\) in a DAG corresponds to the serial fraction \(f\) in Amdahl's Law.

(The longest chain of sequential dependencies that no amount of additional parallelism can overcome.)

Designing parallel algorithms means decreasing span without increasing work too much - directly equivalent to reducing \(f\) in Amdahl's Law.
Field-by-field Comparison
Field Before After
Text The span {{c2::\(T_\infty\)}} in a DAG corresponds to {{c1::the serial fraction f in Amdahl's Law: the longest chain of sequential dependencies that no amount of additional parallelism can overcome}}. The span {{c2::\(T_\infty\)}} in a DAG corresponds to {{c1::the serial fraction&nbsp;\(f\)&nbsp;in Amdahl's Law}}.
Extra Designing parallel algorithms means <b>decreasing span</b> without increasing work too much directly equivalent to reducing f in Amdahl's Law. (The longest chain of sequential dependencies that no amount of additional parallelism can overcome.)<br><br>Designing parallel algorithms means <b>decreasing span</b> without increasing work too much - directly equivalent to reducing&nbsp;\(f\)&nbsp;in Amdahl's Law.
Tags: ETH::2._Semester::PProg::10._Parallel_Algorithms::1._Task_graphs_and_performance_measures
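The correspondence this card draws can be sketched numerically (class and method names are ours): Amdahl's speedup \(S(p) = \frac{1}{f + (1-f)/p}\) approaches \(1/f\) as \(p \to \infty\), just as the DAG speedup \(T_1/T_p\) is capped by the parallelism \(T_1/T_\infty\), since \(T_p \geq T_\infty\).

```java
// Sketch: the two speedup bounds the card connects.
public class SpanBounds {
    // Amdahl's Law: serial fraction f limits speedup to 1/f.
    static double amdahl(double f, int p) { return 1.0 / (f + (1.0 - f) / p); }

    // DAG view: T_p >= T_inf, so speedup T_1/T_p <= T_1/T_inf (parallelism).
    static double maxSpeedup(double t1, double tInf) { return t1 / tInf; }

    public static void main(String[] args) {
        System.out.println(amdahl(0.1, 1_000_000)); // approaches 1/f = 10
        System.out.println(maxSpeedup(1000, 100));  // parallelism cap: 10.0
    }
}
```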

Note 10: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: AYPiAlScH$
modified

Before

Front

ETH::2._Semester::PProg::06._Parallel_Architecture::3._Pipelining
Latency of the first element through a pipeline \(= {{c1::\sum_{i} \text{time}(\text{stage}_i)}}\)

Back

ETH::2._Semester::PProg::06._Parallel_Architecture::3._Pipelining
Latency of the first element through a pipeline \(= {{c1::\sum_{i} \text{time}(\text{stage}_i)}}\)

"Latency" by default means the first element. For a balanced pipeline this equals num_stages × max(stage_time).

After

Front

ETH::2._Semester::PProg::06._Parallel_Architecture::3._Pipelining
Latency of the first element through a pipeline \(= {{c1::\sum_{i} \text{time}(\text{stage}_i)}}\)

Back

ETH::2._Semester::PProg::06._Parallel_Architecture::3._Pipelining
Latency of the first element through a pipeline \(= {{c1::\sum_{i} \text{time}(\text{stage}_i)}}\)

"Latency" by default refers to the first element.

For a balanced pipeline this equals num_stages × max(stage_time).
Field-by-field Comparison
Field Before After
Extra "Latency" by default means the first element. For a balanced pipeline this equals num_stages × max(stage_time). "Latency" by default refers to the first element. <br><br>For a balanced pipeline this equals num_stages × max(stage_time).
Tags: ETH::2._Semester::PProg::06._Parallel_Architecture::3._Pipelining
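The two formulas on this card can be sketched in a few lines (class name and stage times are ours, for illustration):

```java
// Sketch: first-element latency = sum of stage times; for a balanced
// pipeline this coincides with num_stages * max(stage_time).
public class PipelineLatency {
    static int latency(int[] stageTimes) {
        int sum = 0;
        for (int t : stageTimes) sum += t;   // first element passes every stage
        return sum;
    }

    public static void main(String[] args) {
        int[] unbalanced = {5, 20, 5};
        System.out.println(latency(unbalanced));   // 30
        int[] balanced = {10, 10, 10};
        System.out.println(latency(balanced));     // 30 = 3 stages * 10 max
    }
}
```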

Note 11: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Classic
GUID: AeyQBG3vzh
modified

Before

Front

ETH::2._Semester::PProg::05._Java_Threads
What is the difference between a race condition and a data race?

Back

ETH::2._Semester::PProg::05._Java_Threads
What is the difference between a race condition and a data race?

A data race is a low-level memory event: a shared variable is accessed by two threads, at least one access is a write, and there is no synchronization. A race condition is a higher-level correctness issue: the program's observable behavior depends on execution order. A data race usually causes a race condition, but a race condition can exist even without a data race (e.g. check-then-act with synchronized operations). In Java, data races invoke undefined behavior per the JMM.

After

Front

ETH::2._Semester::PProg::05._Java_Threads
What is the difference between a race condition and a data race?

Back

ETH::2._Semester::PProg::05._Java_Threads
What is the difference between a race condition and a data race?

A race condition is a logic bug - the program's correctness depends on thread timing/ordering.
A data race is a memory access bug - two threads access the same location concurrently with no synchronization, and at least one writes.

They're independent: you can have either without the other.
Field-by-field Comparison
Field Before After
Back A <b>data race</b> is a low-level memory event: a shared variable is accessed by two threads, at least one access is a write, and there is no synchronization. A <b>race condition</b> is a higher-level correctness issue: the program's observable behavior depends on execution order. A data race usually causes a race condition, but a race condition can exist even without a data race (e.g. check-then-act with synchronized operations). In Java, data races invoke undefined behavior per the JMM. <div>A <strong>race condition</strong> is a logic bug - the program's correctness depends on thread timing/ordering.</div> <div>A <strong>data race</strong> is a memory access bug - two threads access the same location concurrently with no synchronization, and at least one writes.</div> <div><br></div><div>They're independent: you can have either without the other.</div>
Tags: ETH::2._Semester::PProg::05._Java_Threads
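The "either without the other" point on this card can be illustrated with a classic check-then-act sketch (class and method names are ours): every access below is atomic, so there is no data race, yet the check and the act are not atomic *together*, so two threads can both pass the check and overdraw.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a race condition WITHOUT a data race.
public class Account {
    final AtomicInteger balance = new AtomicInteger(100);

    void withdrawBuggy(int amount) {
        if (balance.get() >= amount) {       // check ...
            // another thread may withdraw here, between check and act
            balance.addAndGet(-amount);      // ... then act: not atomic as a pair
        }
    }

    void withdrawCorrect(int amount) {       // CAS loop makes check+act atomic
        int cur;
        do {
            cur = balance.get();
            if (cur < amount) return;        // insufficient funds: give up
        } while (!balance.compareAndSet(cur, cur - amount));
    }
}
```

The converse direction (a data race without a race condition) would be e.g. an unsynchronized read of a statistics counter whose exact value never affects correctness.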

Note 12: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: E-#>p+?}Ok
modified

Before

Front

ETH::2._Semester::PProg::08._Divide_and_conquer::2._Divide_and_conquer
In Fork/Join, each task should ideally perform between 1,000 and 100,000 sequential operations to ensure the task has enough work to offset scheduling overhead.

Back

ETH::2._Semester::PProg::08._Divide_and_conquer::2._Divide_and_conquer
In Fork/Join, each task should ideally perform between 1,000 and 100,000 sequential operations to ensure the task has enough work to offset scheduling overhead.

Too small → overhead dominates. Too large → poor load balancing and under-utilization. The exact threshold is empirical.

After

Front

ETH::2._Semester::PProg::08._Divide_and_conquer::2._Divide_and_conquer
In Fork/Join, each task should ideally perform between 1,000 and 100,000 sequential operations to ensure the task has enough work to offset scheduling overhead.

Back

ETH::2._Semester::PProg::08._Divide_and_conquer::2._Divide_and_conquer
In Fork/Join, each task should ideally perform between 1,000 and 100,000 sequential operations to ensure the task has enough work to offset scheduling overhead.

Too small → overhead dominates. Too large → poor load balancing and under-utilization. The exact threshold is empirical.
Field-by-field Comparison
Field Before After
Text In Fork/Join, each task should ideally perform between {{c1::1,000 and 100,000}} sequential operations to ensure {{c1::the task has enough work to offset scheduling overhead}}. In Fork/Join, each task should ideally perform between 1,000 and 100,000 sequential operations to ensure {{c1::the task has enough work to offset scheduling overhead}}.
Tags: ETH::2._Semester::PProg::08._Divide_and_conquer::2._Divide_and_conquer
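The sequential cutoff this card describes can be sketched with a Fork/Join sum (class name and the cutoff value `10_000` are ours, chosen inside the card's rough 1,000 to 100,000 range; the card's point is that the right value is found empirically):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sketch: below CUTOFF elements the task sums sequentially, so each leaf
// does enough work to amortize fork/join scheduling overhead.
public class SumTask extends RecursiveTask<Long> {
    static final int CUTOFF = 10_000;
    final long[] a; final int lo, hi;

    SumTask(long[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }

    @Override protected Long compute() {
        if (hi - lo <= CUTOFF) {              // small enough: run sequentially
            long s = 0;
            for (int i = lo; i < hi; i++) s += a[i];
            return s;
        }
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(a, lo, mid);
        left.fork();                          // left half runs asynchronously
        long right = new SumTask(a, mid, hi).compute();  // right half here
        return right + left.join();
    }

    static long sum(long[] a) {
        return ForkJoinPool.commonPool().invoke(new SumTask(a, 0, a.length));
    }
}
```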

Note 13: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: EdVyAbz-qv
modified

Before

Front

ETH::2._Semester::PProg::12._Shared_Memory_Concurrency
java.util.concurrent.atomic.AtomicInteger provides atomic compound operations such as getAndIncrement() and compareAndSet(expected, update) without requiring synchronized.

Back

ETH::2._Semester::PProg::12._Shared_Memory_Concurrency
java.util.concurrent.atomic.AtomicInteger provides atomic compound operations such as getAndIncrement() and compareAndSet(expected, update) without requiring synchronized.

Implemented via hardware CAS (compare-and-swap). Faster than synchronized for simple counters. Part of java.util.concurrent.atomic.

After

Front

ETH::2._Semester::PProg::12._Shared_Memory_Concurrency
AtomicInteger provides atomic compound operations such as getAndIncrement() and compareAndSet(expected, update) without requiring synchronized.

Back

ETH::2._Semester::PProg::12._Shared_Memory_Concurrency
AtomicInteger provides atomic compound operations such as getAndIncrement() and compareAndSet(expected, update) without requiring synchronized.

Implemented via hardware CAS (compare-and-swap). Faster than synchronized for simple counters. Part of java.util.concurrent.atomic.
Field-by-field Comparison
Field Before After
Text <code>java.util.concurrent.atomic.{{c1::AtomicInteger}}</code> provides {{c2::atomic compound operations}} such as <code>{{c2::getAndIncrement()}}</code> and <code>{{c2::compareAndSet(expected, update)}}</code> without requiring <code>synchronized</code>. <code>{{c1::AtomicInteger}}</code> provides {{c2::atomic compound operations}} such as <code>{{c2::getAndIncrement()}}</code> and <code>{{c2::compareAndSet(expected, update)}}</code> without requiring <code>synchronized</code>.
Tags: ETH::2._Semester::PProg::12._Shared_Memory_Concurrency
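The two operations named on this card can be demonstrated in a few lines (class name `AtomicDemo` is ours); both are single atomic steps backed by hardware CAS, so no `synchronized` is needed:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: AtomicInteger's compound operations as single atomic steps.
public class AtomicDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);

        int before = counter.getAndIncrement();  // atomically: read, then +1
        System.out.println(before);              // 0 (the value BEFORE ++)
        System.out.println(counter.get());       // 1

        boolean swapped = counter.compareAndSet(1, 42);  // expected 1 -> 42
        System.out.println(swapped);             // true
        boolean failed = counter.compareAndSet(1, 99);   // value is 42, not 1
        System.out.println(failed);              // false: no update happened
        System.out.println(counter.get());       // 42
    }
}
```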