Anki Deck Changes

Commit: 044898ad - some new cards in pprog and a&w

Author: lhorva <lhorva@student.ethz.ch>

Date: 2026-03-17T15:44:44+01:00

Changes: 26 note(s) changed (20 added, 6 modified, 0 deleted)

ℹ️ Cosmetic Changes Hidden: 1 note(s) had formatting-only changes and are not shown below

Note 1: ETH::2. Semester::A&W

Deck: ETH::2. Semester::A&W
Note Type: Horvath Cloze
GUID: r%m#I)x.lS
added

Previous

Note did not exist

New Note

Front

ETH::2._Semester::A&W::1._Graphentheorie::7._Färbungen
Einen 3-färbbaren Graphen kann man in Zeit \(O(|V| + |E|)\) mit \(O({{c2::\sqrt{|V|} }})\) Farben färben.

Back

ETH::2._Semester::A&W::1._Graphentheorie::7._Färbungen
Einen 3-färbbaren Graphen kann man in Zeit \(O(|V| + |E|)\) mit \(O({{c2::\sqrt{|V|} }})\) Farben färben.

Field-by-field Comparison
Field Before After
Text Einen 3-färbbaren Graphen kann man in Zeit \(O({{c1::|V| + |E|}})\) mit \(O({{c2::\sqrt{|V|} }})\) Farben färben.
Extra <img src="paste-45aa473f33c22168fe4a58fed3ec3d7b0c084bd0.jpg">
Tags: ETH::2._Semester::A&W::1._Graphentheorie::7._Färbungen
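
For context, a sketch of the standard argument behind this bound (added here, not part of the card; constants are only indicative): as long as some vertex \(v\) has degree at least \(\sqrt{|V|}\), its neighborhood is 2-colorable (the graph is 3-colorable and \(v\) already uses one color), so it can be 2-colored with two fresh colors and removed; each such step removes at least \(\sqrt{|V|}\) vertices, so it happens at most \(\sqrt{|V|}\) times. The remaining graph has maximum degree below \(\sqrt{|V|}\) and can be colored greedily with at most \(\sqrt{|V|} + 1\) further colors, giving \(O(\sqrt{|V|})\) colors overall with linear-time passes, i.e. \(O(|V| + |E|)\) total time.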

Note 2: ETH::2. Semester::A&W

Deck: ETH::2. Semester::A&W
Note Type: Horvath Cloze
GUID: z=Bcbr%mzt
added

Previous

Note did not exist

New Note

Front

ETH::2._Semester::A&W::2._Wahrscheinlichkeitstheorie_und_randomisierte_Algorithmen
Es ist kein polynomieller Algorithmus bekannt, um zu entscheiden, ob zwei Graphen isomorph sind.

Back

ETH::2._Semester::A&W::2._Wahrscheinlichkeitstheorie_und_randomisierte_Algorithmen
Es ist kein polynomieller Algorithmus bekannt, um zu entscheiden, ob zwei Graphen isomorph sind.
Field-by-field Comparison
Field Before After
Text Es ist kein polynomieller Algorithmus bekannt, um zu entscheiden, ob zwei Graphen {{c1::isomorph}} sind.
Tags: ETH::2._Semester::A&W::2._Wahrscheinlichkeitstheorie_und_randomisierte_Algorithmen

Note 3: ETH::2. Semester::Analysis

Deck: ETH::2. Semester::Analysis
Note Type: Horvath Cloze
GUID: fr%yq&f{dL
modified

Before

Front

ETH::2._Semester::Analysis::2._Folgen::2._Konvergenz::1._Definitionen
Falls für eine Folge gilt \[{{c1:: \forall M > 0 \ \exists N > 0 \text{ such that } \forall n > N \ : \ a_n > M }}\] sagen wir, dass die Folge gegen unendlich divergiert und schreiben \(\lim_{n \rightarrow \infty} a_n = \infty\).

Back

ETH::2._Semester::Analysis::2._Folgen::2._Konvergenz::1._Definitionen
Falls für eine Folge gilt \[{{c1:: \forall M > 0 \ \exists N > 0 \text{ such that } \forall n > N \ : \ a_n > M }}\] sagen wir, dass die Folge gegen unendlich divergiert und schreiben \(\lim_{n \rightarrow \infty} a_n = \infty\).

Genauso kann die Folge auch gegen *minus unendlich* divergieren

After

Front

ETH::2._Semester::Analysis::2._Folgen::2._Konvergenz::1._Definitionen
Falls für eine Folge gilt \[{{c1:: \forall M > 0 \ \exists N > 0 \text{ such that } \forall n > N \ : \ a_n > M }}\] sagen wir, dass die Folge gegen unendlich divergiert und schreiben \(\lim_{n \rightarrow \infty} a_n = \infty\).

Back

ETH::2._Semester::Analysis::2._Folgen::2._Konvergenz::1._Definitionen
Falls für eine Folge gilt \[{{c1:: \forall M > 0 \ \exists N > 0 \text{ such that } \forall n > N \ : \ a_n > M }}\] sagen wir, dass die Folge gegen unendlich divergiert und schreiben \(\lim_{n \rightarrow \infty} a_n = \infty\).

Genauso kann die Folge auch gegen \(-\infty\) divergieren.
Field-by-field Comparison
Field Before After
Extra Genauso kann die Folge auch gegen *minus unendlich* divergieren Genauso kann die Folge auch gegen&nbsp;\(-\infty\)&nbsp;divergieren.
Tags: ETH::2._Semester::Analysis::2._Folgen::2._Konvergenz::1._Definitionen
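
A quick instance of the definition (added for illustration, not part of the card): for \(a_n = n\), given any \(M > 0\) one can choose \(N = M\); then \(a_n = n > M\) for all \(n > N\), so \(\lim_{n \rightarrow \infty} a_n = \infty\).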

Note 4: ETH::2. Semester::Analysis

Deck: ETH::2. Semester::Analysis
Note Type: Horvath Cloze
GUID: f{:q{$_BIe
modified

Before

Front

ETH::2._Semester::Analysis::2._Folgen::3._Häufungspunkte
\(A \in \mathbb{R}\) ist ein Häufungspunkt einer Folge genau dann, wenn {{c1::eine konvergente Teilfolge \(({a_n}_k)_{k \in \mathbb{N_0}}\) existiert mit \[ \lim_{k \rightarrow \infty} {a_n}_k = A \]}}

Back

ETH::2._Semester::Analysis::2._Folgen::3._Häufungspunkte
\(A \in \mathbb{R}\) ist ein Häufungspunkt einer Folge genau dann, wenn {{c1::eine konvergente Teilfolge \(({a_n}_k)_{k \in \mathbb{N_0}}\) existiert mit \[ \lim_{k \rightarrow \infty} {a_n}_k = A \]}}

After

Front

ETH::2._Semester::Analysis::2._Folgen::3._Häufungspunkte
\(A \in \mathbb{R}\) ist ein Häufungspunkt einer Folge genau dann, wenn {{c1::eine konvergente Teilfolge \(({a_n}_k)_{k \in \mathbb{N_0} }\) existiert mit \[ \lim_{k \rightarrow \infty} {a_n}_k = A \]}}

Back

ETH::2._Semester::Analysis::2._Folgen::3._Häufungspunkte
\(A \in \mathbb{R}\) ist ein Häufungspunkt einer Folge genau dann, wenn {{c1::eine konvergente Teilfolge \(({a_n}_k)_{k \in \mathbb{N_0} }\) existiert mit \[ \lim_{k \rightarrow \infty} {a_n}_k = A \]}}
Field-by-field Comparison
Field Before After
Text \(A \in \mathbb{R}\) ist ein Häufungspunkt einer Folge genau dann, wenn {{c1::eine konvergente Teilfolge&nbsp;\(({a_n}_k)_{k \in \mathbb{N_0}}\) existiert mit \[ \lim_{k \rightarrow \infty} {a_n}_k = A \]}} \(A \in \mathbb{R}\) ist ein Häufungspunkt einer Folge genau dann, wenn {{c1::eine konvergente Teilfolge&nbsp;\(({a_n}_k)_{k \in \mathbb{N_0} }\) existiert mit \[ \lim_{k \rightarrow \infty} {a_n}_k = A \]}}
Tags: ETH::2._Semester::Analysis::2._Folgen::3._Häufungspunkte
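
A quick instance of the definition (added for illustration, not part of the card): \(a_n = (-1)^n\) has the accumulation points \(1\) and \(-1\), witnessed by the convergent subsequences \((a_{2k})_{k \in \mathbb{N_0}}\) and \((a_{2k+1})_{k \in \mathbb{N_0}}\).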

Note 5: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: CXJAZafgMw
added

Previous

Note did not exist

New Note

Front

ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
Gustafson's law:

\(W = {{c1::P(1-f)T_{wall} + fT_{wall} }}\)

Back

ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
Gustafson's law:

\(W = {{c1::P(1-f)T_{wall} + fT_{wall} }}\)

Field-by-field Comparison
Field Before After
Text Gustafson's law:<br><br>\(W = {{c1::P(1-f)T_{wall} + fT_{wall} }}\)
Extra <img src="paste-aa6938364b9d7ad1e183831ea200b9a4b751156d.jpg">
Tags: ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
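
A worked instance of the formula (numbers chosen for illustration, not from the card): with \(P = 8\), serial fraction \(f = 0.2\) and \(T_{wall} = 1\), the total work is \(W = 8 \cdot 0.8 \cdot 1 + 0.2 \cdot 1 = 6.6\) units of \(T_{wall}\), which matches the Gustafson speedup \(S_p = f + P(1-f) = 0.2 + 6.4 = 6.6\) on the card further below.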

Note 6: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Classic
GUID: E%^haXk=N
added

Previous

Note did not exist

New Note

Front

ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
Parallel program:
  • Sequential part: 20%
  • Parallel part: 80% (assume it scales linearly)
  • \(T_1 = 10\)

What is \(T_8\)? What is the speedup \(S_8\)?

Back

ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
Parallel program:
  • Sequential part: 20%
  • Parallel part: 80% (assume it scales linearly)
  • \(T_1 = 10\)

What is \(T_8\)? What is the speedup \(S_8\)?

\(T_8 = 0.2 \cdot 10 + \frac{0.8 \cdot 10}{8} = 2 + 1 = 3\)
\(S_8 = T_1/T_8 = 10/3 = 3.33\)
Field-by-field Comparison
Field Before After
Front Parallel program:<br><ul><li>Sequential part: 20%</li><li>Parallel part: 80% (assume it scales linearly)</li><li>\(T_1 = 10\)<br></li></ul><div><img src="paste-9bf43ab2ea793b111975a884c33e99efb31c285a.jpg"><br></div>What is \(T_8\)? What is the speedup \(S_8\)?<br>
Back \(T_8 = 0.2 \cdot 10 + \frac{0.8 \cdot 10}{8} = 2 + 1 = 3\)<br> \(S_8 = T_1/T_8 = 10/3 = 3.33\)
Tags: ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
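
A minimal Java sketch reproducing this card's arithmetic (class and method names are made up for illustration):

```java
// Amdahl-style runtime and speedup for a fixed problem size.
public class AmdahlExample {
    // f: serial fraction, t1: single-threaded time, p: number of workers
    static double parallelTime(double f, double t1, int p) {
        return f * t1 + (1 - f) * t1 / p;    // serial part + perfectly scaled parallel part
    }

    public static void main(String[] args) {
        double t1 = 10.0, f = 0.2;
        double t8 = parallelTime(f, t1, 8);  // 0.2*10 + 0.8*10/8 = 3.0
        double s8 = t1 / t8;                 // 10/3 ≈ 3.33
        System.out.printf("T_8 = %.2f, S_8 = %.2f%n", t8, s8);
    }
}
```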

Note 7: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: GzZXD@`eLY
added

Previous

Note did not exist

New Note

Front

ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
"...the effort expended on achieving high parallel processing rates is wasted unless it is accompanied by achievements in sequential processing rates of very nearly the same magnitude."

— Gene Amdahl

Back

ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
"...the effort expended on achieving high parallel processing rates is wasted unless it is accompanied by achievements in sequential processing rates of very nearly the same magnitude."

— Gene Amdahl
Field-by-field Comparison
Field Before After
Text <i>"...the effort expended on achieving high parallel processing rates is wasted unless {{c1::it is accompanied by achievements in sequential processing rates of very nearly the same magnitude}}."<br></i><br>—&nbsp;Gene Amdahl
Tags: ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance

Note 8: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: LGNN%IJg[[
added

Previous

Note did not exist

New Note

Front

ETH::2._Semester::PProg::07._Concepts::1._What_factors_limit_scalability?
What factors limit scalability?
  1. Sequential part of the program (Amdahl's law)
  2. Data structures and algorithms
  3. Work distribution strategy
  4. Work scheduling strategy
  5. Overhead and synchronization barriers
  6. Memory access and caches

Back

ETH::2._Semester::PProg::07._Concepts::1._What_factors_limit_scalability?
What factors limit scalability?
  1. Sequential part of the program (Amdahl's law)
  2. Data structures and algorithms
  3. Work distribution strategy
  4. Work scheduling strategy
  5. Overhead and synchronization barriers
  6. Memory access and caches

How much time is spent on synchronization, locking, context switching?

Frequent context switches introduce delays that degrade parallel performance.

High contention for shared resources or excessive synchronization barriers create bottlenecks that limit parallel efficiency.

Field-by-field Comparison
Field Before After
Text What factors limit scalability?<br><ol><li>Sequential part of the program (Amdahl's law)</li><li>Data structures and algorithms</li><li>Work distribution strategy</li><li>Work scheduling strategy</li><li>{{c1::Overhead and synchronization barriers}}</li><li>Memory access and caches</li></ol>
Extra How much time is spent on synchronization, locking, context switching?<br><br>Frequent context switches introduce delays that degrade parallel performance.<br><br>High contention for shared resources or excessive synchronization barriers create bottlenecks that limit parallel efficiency.<br><br><img src="paste-52659711f8baf2991c46e3e12c42daca04c04ea3.jpg">
Tags: ETH::2._Semester::PProg::07._Concepts::1._What_factors_limit_scalability?
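
A small Java sketch of the kind of synchronization overhead this card refers to (illustrative, not from the lecture): every increment of the shared counter is an atomic read-modify-write on one memory location that all threads contend for, whereas the per-thread partial sums synchronize only once, at the final combine step.

```java
import java.util.concurrent.atomic.AtomicLong;

public class ContentionSketch {
    static final int THREADS = 8;
    static final long PER_THREAD = 5_000_000L;

    public static void main(String[] args) throws InterruptedException {
        // Variant 1: all threads hammer one shared atomic counter (high contention).
        AtomicLong shared = new AtomicLong();
        runThreads(() -> {
            for (long i = 0; i < PER_THREAD; i++) shared.incrementAndGet();
        });
        System.out.println("shared counter: " + shared.get());

        // Variant 2: each thread accumulates locally, one sequential combine at the end.
        long[] partial = new long[THREADS];
        Thread[] ts = new Thread[THREADS];
        for (int t = 0; t < THREADS; t++) {
            final int id = t;
            ts[t] = new Thread(() -> {
                long local = 0;
                for (long i = 0; i < PER_THREAD; i++) local++;
                partial[id] = local;            // written once per thread
            });
            ts[t].start();
        }
        for (Thread th : ts) th.join();
        long total = 0;
        for (long p : partial) total += p;      // single sequential combine
        System.out.println("partial sums:   " + total);
    }

    static void runThreads(Runnable body) throws InterruptedException {
        Thread[] ts = new Thread[THREADS];
        for (int t = 0; t < THREADS; t++) { ts[t] = new Thread(body); ts[t].start(); }
        for (Thread th : ts) th.join();
    }
}
```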

Note 9: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: MWCC+]Z]qd
added

Previous

Note did not exist

New Note

Front

ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
Amdahl's law:

\(S_p \leq {{c1::\dfrac{W_{ser} + W_{par} }{W_{ser} + \dfrac{W_{par} }{P} } }}\)

Back

ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
Amdahl's law:

\(S_p \leq {{c1::\dfrac{W_{ser} + W_{par} }{W_{ser} + \dfrac{W_{par} }{P} } }}\)
Field-by-field Comparison
Field Before After
Text Amdahl's law:<br><br>\(S_p \leq {{c1::\dfrac{W_{ser} + W_{par} }{W_{ser} + \dfrac{W_{par} }{P} } }}\)
Tags: ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance

Note 10: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: P[r-`=cABI
added

Previous

Note did not exist

New Note

Front

ETH::2._Semester::PProg::09._Divide_and_conquer::3._Executor_Service
The executor service is not suitable for recursive problems where deep structures require waiting for partial results.

Back

ETH::2._Semester::PProg::09._Divide_and_conquer::3._Executor_Service
The executor service is not suitable for recursive problems where deep structures require waiting for partial results.
Field-by-field Comparison
Field Before After
Text The executor service is not suitable for {{c1::recursive problems where deep structures require waiting for partial results}}.
Tags: ETH::2._Semester::PProg::09._Divide_and_conquer::3._Executor_Service

Note 11: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: b&FE48/HWB
added

Previous

Note did not exist

New Note

Front

ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
If \(f\) is the non-parallelizable serial fraction of the total work, this gives:

\(S_p \leq {{c1::\dfrac{1}{f + \dfrac{1-f}{P} } }}\)

Back

ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
If \(f\) is the non-parallelizable serial fraction of the total work, this gives:

\(S_p \leq {{c1::\dfrac{1}{f + \dfrac{1-f}{P} } }}\)

Note that the following equalities hold:

\(W_{ser} = fT_1\)
\(W_{par} = (1-f)T_1\)
Field-by-field Comparison
Field Before After
Text If \(f\) is the non-parallelizable serial fraction of the total work, this&nbsp;gives:<br><br>\(S_p \leq {{c1::\dfrac{1}{f + \dfrac{1-f}{P} } }}\)
Extra Note that the following equalities hold:<br><br>\(W_{ser} = fT_1\)<br>\(W_{par} = (1-f)T_1\)
Tags: ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
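
To spell out the step from the \(W_{ser}/W_{par}\) form on the earlier card to this \(f\) form: substituting \(W_{ser} = fT_1\) and \(W_{par} = (1-f)T_1\) and cancelling \(T_1\) gives \[ S_p \leq \dfrac{fT_1 + (1-f)T_1}{fT_1 + \dfrac{(1-f)T_1}{P}} = \dfrac{1}{f + \dfrac{1-f}{P}}. \]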

Note 12: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: c*rU+
added

Previous

Note did not exist

New Note

Front

ETH::2._Semester::PProg::07._Concepts::1._What_factors_limit_scalability?
What factors limit scalability?
  1. Sequential part of the program (Amdahl's law)
  2. Data structures and algorithms
  3. Work distribution strategy
  4. Work scheduling strategy
  5. Overhead and synchronization barriers
  6. Memory access and caches

Back

ETH::2._Semester::PProg::07._Concepts::1._What_factors_limit_scalability?
What factors limit scalability?
  1. Sequential part of the program (Amdahl's law)
  2. Data structures and algorithms
  3. Work distribution strategy
  4. Work scheduling strategy
  5. Overhead and synchronization barriers
  6. Memory access and caches

Goal: full utilization (no processor is idle)

How should work units be assigned to threads?

Efficiency of user-level scheduling: How quickly can tasks be assigned and reassigned to threads?

Overhead from task creation and management can offset gains.
Field-by-field Comparison
Field Before After
Text What factors limit scalability?<br><ol><li>Sequential part of the program (Amdahl's law)</li><li>Data structures and algorithms</li><li>Work distribution strategy</li><li>{{c1::Work scheduling strategy}}</li><li>Overhead and synchronization barriers</li><li>Memory access and caches</li></ol>
Extra <img src="paste-a6a195f001fa90004c2fb713683c875872b67fbb.jpg" style="float: right;">Goal: <b>full utilization</b> (no processor is idle)<br><br>How should work units be assigned to threads?<br><br>Efficiency of user-level scheduling: How quickly can tasks be assigned and reassigned to threads?<br><br>Overhead from task creation and<br>management can offset gains.<br>
Tags: ETH::2._Semester::PProg::07._Concepts::1._What_factors_limit_scalability?

Note 13: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: d#q[!,Jl;E
added

Previous

Note did not exist

New Note

Front

ETH::2._Semester::PProg::07._Concepts::1._What_factors_limit_scalability?
What factors limit scalability?
  1. Sequential part of the program (Amdahl's law)
  2. Data structures and algorithms
  3. Work distribution strategy
  4. Work scheduling strategy
  5. Overhead and synchronization barriers
  6. Memory access and caches

Back

ETH::2._Semester::PProg::07._Concepts::1._What_factors_limit_scalability?
What factors limit scalability?
  1. Sequential part of the program (Amdahl's law)
  2. Data structures and algorithms
  3. Work distribution strategy
  4. Work scheduling strategy
  5. Overhead and synchronization barriers
  6. Memory access and caches

Can tasks be executed independently or do they rely on one another?
Even with ideal parallelization, some problems have inherent dependencies that limit speedup.

Even if 99% of the program is parallelized, the remaining 1% sequential part dominates at high core counts.
Field-by-field Comparison
Field Before After
Text What factors limit scalability?<br><ol><li>{{c1::Sequential part of the program (Amdahl's law)}}</li><li>Data structures and algorithms</li><li>Work distribution strategy</li><li>Work scheduling strategy</li><li>Overhead and synchronization barriers</li><li>Memory access and caches</li></ol>
Extra Can tasks be executed independently or do they rely on one another?<br>Even with ideal parallelization, some problems have inherent dependencies that limit speedup.<br><br><b>Even if 99% of the program is parallelized, the remaining 1% sequential part dominates at high core counts.</b>
Tags: ETH::2._Semester::PProg::07._Concepts::1._What_factors_limit_scalability?
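
To put numbers on the bolded claim (illustrative, not from the card): with \(f = 0.01\), Amdahl's law gives \(S_p \leq \dfrac{1}{0.01 + \frac{0.99}{P}}\), which is capped at \(1/f = 100\) even as \(P \rightarrow \infty\); at \(P = 1000\) it is already only about \(91\).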

Note 14: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: dCz!O8dc.S
added

Previous

Note did not exist

New Note

Front

ETH::2._Semester::PProg::07._Concepts::1._What_factors_limit_scalability?
What factors limit scalability?
  1. Sequential part of the program (Amdahl's law)
  2. Data structures and algorithms
  3. Work distribution strategy
  4. Work scheduling strategy
  5. Overhead and synchronization barriers
  6. Memory access and caches

Back

ETH::2._Semester::PProg::07._Concepts::1._What_factors_limit_scalability?
What factors limit scalability?
  1. Sequential part of the program (Amdahl's law)
  2. Data structures and algorithms
  3. Work distribution strategy
  4. Work scheduling strategy
  5. Overhead and synchronization barriers
  6. Memory access and caches

Some data structures require sequential access.

Some algorithms naturally expose more parallelism than others.
Algorithms can be restructured to improve parallelism.

Parallel efficiency starts with the right data structures and algorithms.
Field-by-field Comparison
Field Before After
Text What factors limit scalability?<br><ol><li>Sequential part of the program (Amdahl's law)</li><li>{{c1::Data structures and algorithms}}</li><li>Work distribution strategy</li><li>Work scheduling strategy</li><li>Overhead and synchronization barriers</li><li>Memory access and caches</li></ol>
Extra Some data structures require sequential access.<br><img src="paste-a5b966c62876b7312f9a71df81762f263917add0.jpg"><br>Some algorithms naturally expose more parallelism than others.<br>Algorithms can be restructured to improve parallelism.<br><br><b>Parallel efficiency starts with the right data structures and algorithms.</b>
Tags: ETH::2._Semester::PProg::07._Concepts::1._What_factors_limit_scalability?

Note 15: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: fb-e^m:3OM
added

Previous

Note did not exist

New Note

Front

ETH::2._Semester::PProg::09._Divide_and_conquer::4._Java's_ForkJoin_framework
The Fork/Join Framework is well-suited for recursive problems where deep structures require waiting for partial results.

Back

ETH::2._Semester::PProg::09._Divide_and_conquer::4._Java's_ForkJoin_framework
The Fork/Join Framework is well-suited for recursive problems where deep structures require waiting for partial results.
Field-by-field Comparison
Field Before After
Text The Fork/Join Framework is well-suited for {{c1::recursive problems where deep structures require waiting for partial results}}.
Tags: ETH::2._Semester::PProg::09._Divide_and_conquer::4._Java's_ForkJoin_framework
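
A minimal Fork/Join sketch of the recursive pattern this card describes (array sum; class name and threshold are illustrative):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Divide-and-conquer sum: each task forks a subtask and joins its partial result --
// exactly the "deep structure waiting for partial results" case.
class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10_000;
    private final int[] data;
    private final int lo, hi;

    SumTask(int[] data, int lo, int hi) { this.data = data; this.lo = lo; this.hi = hi; }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {               // small enough: solve sequentially
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) / 2;
        SumTask left = new SumTask(data, lo, mid);
        SumTask right = new SumTask(data, mid, hi);
        left.fork();                              // spawn the left half asynchronously
        long rightSum = right.compute();          // work on the right half in this thread
        return left.join() + rightSum;            // wait for the partial result
    }

    public static void main(String[] args) {
        int[] data = new int[1_000_000];
        java.util.Arrays.fill(data, 1);
        long sum = ForkJoinPool.commonPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(sum);                  // 1000000
    }
}
```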

Note 16: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: g
added

Previous

Note did not exist

New Note

Front

ETH::2._Semester::PProg::09._Divide_and_conquer::3._Executor_Service
The executor service is well-suited for flat structures or tasks that can run independently in parallel (e.g., transactions).

Back

ETH::2._Semester::PProg::09._Divide_and_conquer::3._Executor_Service
The executor service is well-suited for flat structures or tasks that can run independently in parallel (e.g., transactions).
Field-by-field Comparison
Field Before After
Text The executor service is well-suited for {{c1::flat structures or tasks that can run independently in parallel (e.g., transactions)}}.
Tags: ETH::2._Semester::PProg::09._Divide_and_conquer::3._Executor_Service
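
For contrast, a minimal ExecutorService sketch of the flat, independent-task pattern this card describes (the "transactions" are simulated; names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FlatTasksExample {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // A flat batch of independent tasks: no task waits on another task's result.
        List<Callable<Integer>> tasks = new ArrayList<>();
        for (int i = 1; i <= 20; i++) {
            final int id = i;
            tasks.add(() -> id * id);   // stand-in for an independent "transaction"
        }

        List<Future<Integer>> results = pool.invokeAll(tasks);
        for (Future<Integer> f : results) System.out.println(f.get());

        pool.shutdown();
    }
}
```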

Note 17: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: jxx{VYsKxH
modified

Before

Front

ETH::2._Semester::PProg::Terminology
Divide and conquer style parallelism (also called recursive splitting) means: solve a problem by recursively solving smaller sub-problems and combining their results. Solve the sub-problems in separate threads to gain a speedup.

Back

ETH::2._Semester::PProg::Terminology
Divide and conquer style parallelism (also called recursive splitting) means: solve a problem by recursively solving smaller sub-problems and combining their results. Solve the sub-problems in separate threads to gain a speedup.

After

Front

ETH::2._Semester::PProg::08._Divide_and_conquer::2._Divide_and_conquer ETH::2._Semester::PProg::Terminology
Divide and conquer style parallelism (also called recursive splitting) means: solve a problem by recursively solving smaller sub-problems and combining their results

Back

ETH::2._Semester::PProg::08._Divide_and_conquer::2._Divide_and_conquer ETH::2._Semester::PProg::Terminology
Divide and conquer style parallelism (also called recursive splitting) means: solve a problem by recursively solving smaller sub-problems and combining their results

Solve the sub-problems in separate threads to gain a speedup.
Field-by-field Comparison
Field Before After
Text {{c1::Divide and conquer style parallelism}} (also called {{c2::recursive splitting}}) means: solve a problem by {{c3::recursively solving smaller sub-problems and combining their results}}. Solve the sub-problems in {{c4::separate threads}} to gain a speedup. {{c1::Divide and conquer style parallelism (also called recursive splitting)}} means: solve a problem by {{c2::recursively solving smaller sub-problems and combining their results}}.&nbsp;
Extra Solve the sub-problems in separate threads to gain a speedup.
Tags: ETH::2._Semester::PProg::Terminology ETH::2._Semester::PProg::08._Divide_and_conquer::2._Divide_and_conquer

Note 18: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: qW*_B8<}hp
added

Previous

Note did not exist

New Note

Front

ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
Gustafson's law:

\(S_p = f + P(1-f) = P - f(P-1)\)

Back

ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
Gustafson's law:

\(S_p = f + P(1-f) = P - f(P-1)\)

Field-by-field Comparison
Field Before After
Text Gustafson's law:<br><br>\(S_p = {{c1::f + P(1-f) = P - f(P-1)}}\)
Extra <img src="paste-aa6938364b9d7ad1e183831ea200b9a4b751156d.jpg">
Tags: ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance

Note 19: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: quC;@cmWD7
added

Previous

Note did not exist

New Note

Front

ETH::2._Semester::PProg::07._Concepts::1._What_factors_limit_scalability?
What factors limit scalability?
  1. Sequential part of the program (Amdahl's law)
  2. Data structures and algorithms
  3. Work distribution strategy
  4. Work scheduling strategy
  5. Overhead and synchronization barriers
  6. Memory access and caches

Back

ETH::2._Semester::PProg::07._Concepts::1._What_factors_limit_scalability?
What factors limit scalability?
  1. Sequential part of the program (Amdahl's law)
  2. Data structures and algorithms
  3. Work distribution strategy
  4. Work scheduling strategy
  5. Overhead and synchronization barriers
  6. Memory access and caches

How evenly can work be distributed among cores?

Some workloads create imbalances, causing some cores to remain idle while others are overloaded.
Field-by-field Comparison
Field Before After
Text What factors limit scalability?<br><ol><li>Sequential part of the program (Amdahl's law)</li><li>Data structures and algorithms</li><li>{{c1::Work distribution strategy}}</li><li>Work scheduling strategy</li><li>Overhead and synchronization barriers</li><li>Memory access and caches</li></ol>
Extra How evenly can work be distributed among cores?<br><br>Some workloads create imbalances, causing some cores to remain idle while others are overloaded.<br><img src="paste-e2881da9332c1b196a69f6460b49e9de2d515c20.jpg">
Tags: ETH::2._Semester::PProg::07._Concepts::1._What_factors_limit_scalability?

Note 20: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: u54P4-YG?G
modified

Before

Front

ETH::2._Semester::PProg::Terminology
The ForkJoin framework automatically assigns tasks to Java threads and may execute multiple tasks in one thread to avoid thread context switching overhead.

Back

ETH::2._Semester::PProg::Terminology
The ForkJoin framework automatically assigns tasks to Java threads and may execute multiple tasks in one thread to avoid thread context switching overhead.

After

Front

ETH::2._Semester::PProg::09._Divide_and_conquer::4._Java's_ForkJoin_framework ETH::2._Semester::PProg::Terminology
The ForkJoin framework automatically assigns tasks to Java threads and may execute multiple tasks in one thread to avoid thread context switching overhead.

Back

ETH::2._Semester::PProg::09._Divide_and_conquer::4._Java's_ForkJoin_framework ETH::2._Semester::PProg::Terminology
The ForkJoin framework automatically assigns tasks to Java threads and may execute multiple tasks in one thread to avoid thread context switching overhead.
Field-by-field Comparison
Field Before After
Text The ForkJoin framework automatically assigns tasks to Java threads and may execute {{c1::multiple tasks in one thread}} to avoid {{c2::thread context switching overhead}}. The ForkJoin framework automatically {{c1::assigns tasks to Java threads and may execute multiple tasks in one thread}} to avoid {{c2::thread context switching overhead}}.
Tags: ETH::2._Semester::PProg::Terminology ETH::2._Semester::PProg::09._Divide_and_conquer::4._Java's_ForkJoin_framework

Note 21: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: ug9GHCY%(J
added

Previous

Note did not exist

New Note

Front

ETH::2._Semester::PProg::07._Concepts::1._What_factors_limit_scalability?
What factors limit scalability?
  1. Sequential part of the program (Amdahl's law)
  2. Data structures and algorithms
  3. Work distribution strategy
  4. Work scheduling strategy
  5. Overhead and synchronization barriers
  6. Memory access and caches

Back

ETH::2._Semester::PProg::07._Concepts::1._What_factors_limit_scalability?
What factors limit scalability?
  1. Sequential part of the program (Amdahl's law)
  2. Data structures and algorithms
  3. Work distribution strategy
  4. Work scheduling strategy
  5. Overhead and synchronization barriers
  6. Memory access and caches

Memory access contention: If all cores access shared memory, memory bandwidth becomes a bottleneck.

More cores may lead to increased cache misses if data is not well-partitioned.

Field-by-field Comparison
Field Before After
Text What factors limit scalability?<br><ol><li>Sequential part of the program (Amdahl's law)</li><li>Data structures and algorithms</li><li>Work distribution strategy</li><li>Work scheduling strategy</li><li>Overhead and synchronization barriers</li><li>{{c1::Memory access and caches}}</li></ol>
Extra Memory access contention: If all cores access shared memory, memory bandwidth becomes a bottleneck.<br><br>More cores may lead to increased cache misses if data is not well-partitioned.<br><br><img src="paste-286d4c806c5f57d89365c6d6c0f7e9ceff7485f6.jpg">
Tags: ETH::2._Semester::PProg::07._Concepts::1._What_factors_limit_scalability?

Note 22: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: unYLoX/LFH
modified

Before

Front

ETH::2._Semester::PProg::Terminology
The ForkJoin framework embraces divide and conquer parallelism. Tasks can be spawned (forked) and joined by the framework. 

Back

ETH::2._Semester::PProg::Terminology
The ForkJoin framework embraces divide and conquer parallelism. Tasks can be spawned (forked) and joined by the framework. 

After

Front

ETH::2._Semester::PProg::09._Divide_and_conquer::4._Java's_ForkJoin_framework ETH::2._Semester::PProg::Terminology
The ForkJoin framework embraces divide and conquer parallelism

Back

ETH::2._Semester::PProg::09._Divide_and_conquer::4._Java's_ForkJoin_framework ETH::2._Semester::PProg::Terminology
The ForkJoin framework embraces divide and conquer parallelism

Tasks can be spawned (forked) and joined by the framework. 
Field-by-field Comparison
Field Before After
Text The ForkJoin framework embraces {{c1::divide and conquer parallelism}}. Tasks can be {{c2::spawned (forked) and joined}} by the framework.&nbsp; The ForkJoin framework embraces {{c1::divide and conquer parallelism}}.&nbsp;
Extra Tasks can be spawned (forked) and joined by the framework.&nbsp;
Tags: ETH::2._Semester::PProg::Terminology ETH::2._Semester::PProg::09._Divide_and_conquer::4._Java's_ForkJoin_framework

Note 23: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: upj89j/}W/
added

Previous

Note did not exist

New Note

Front

ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
(Parallel) speedup:

\(S_p=T_1/T_p\)

Back

ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
(Parallel) speedup:

\(S_p=T_1/T_p\)

  • \(S_p = p\) → linear speedup (perfection)
  • \(S_p < p\) → sub-linear speedup (performance loss)
  • \(S_p > p\) → super-linear speedup (sorcery!)
Efficiency: \(S_p / p\)
Field-by-field Comparison
Field Before After
Text (Parallel) speedup:<br><br>\(S_p={{c1::T_1/T_p}}\)
Extra <div><ul><li>\(S_p = p\) → linear speedup (perfection)</li><li>\(S_p &lt; p\) → sub-linear speedup (performance loss)</li><li>\(S_p &gt; p\) → super-linear speedup (sorcery!)</li></ul><b>Efficiency: </b>\(S_p / p\)<br></div>
Tags: ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
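
A small numeric instance of these definitions (illustrative numbers): with \(T_1 = 10\) and \(T_4 = 4\) on \(p = 4\) workers, \(S_4 = 10/4 = 2.5 < 4\), i.e. sub-linear speedup, and the efficiency is \(S_4 / 4 = 0.625\).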

Note 24: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: w6B3a^|y`X
added

Previous

Note did not exist

New Note

Front

ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
Bound on parallel execution time behind Amdahl's law:
\(T_p \geq {{c1::W_{ser} + \dfrac{W_{par} }{P} }}\)

Back

ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
Bound on parallel execution time behind Amdahl's law:
\(T_p \geq {{c1::W_{ser} + \dfrac{W_{par} }{P} }}\)

\(W_{ser}\) and \(W_{par}\) denote the serial and parallel work, respectively.
\(P\) denotes the number of workers available.
Field-by-field Comparison
Field Before After
Text Bound on parallel execution time behind Amdahl's law:<br><pre><code>\(T_p \geq {{c1::W_{ser} + \dfrac{W_{par} }{P} }}\) </code></pre>
Extra \(W_{ser}\) and \(W_{par}\)&nbsp;denote the serial and parallel work, respectively<br>\(P\)&nbsp;denotes the number of workers available
Tags: ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
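
Plugging in the numbers from the worked example in Note 6 above: \(W_{ser} = 0.2 \cdot 10 = 2\), \(W_{par} = 8\) and \(P = 8\) give \(T_8 \geq 2 + \frac{8}{8} = 3\), matching the \(T_8 = 3\) computed there.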

Note 25: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: zRL[#VyH!Q
added

Previous

Note did not exist

New Note

Front

ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
Parallel execution time:

\(T_p=T_1/p\)

Back

ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance
Parallel execution time:

\(T_p=T_1/p\)

(Equality doesn't necessarily hold; usually we have \(>\) due to some performance loss.)

\(T_1\) denotes sequential execution time.
Field-by-field Comparison
Field Before After
Text Parallel execution time:<br><br>\(T_p={{c1::T_1/p}}\)
Extra (Equality doesn't necessarily hold; usually we have&nbsp;\(&gt;\)&nbsp;due to some performance loss.)<br><br>\(T_1\)&nbsp;denotes sequential execution time.
Tags: ETH::2._Semester::PProg::07._Concepts::2._Parallel_performance