Note 1: ETH::2. Semester::PProg
Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: DR#[(B?d4d
modified
Before
Front
A functional unit is a component of a CPU (or core) that performs a certain task, e.g. executing integer arithmetic operations. An execution unit is one such a functional unit.
Back
A functional unit is a component of a CPU (or core) that performs a certain task, e.g. executing integer arithmetic operations. An execution unit is one such a functional unit.
After
Front
A functional unit is a component of a CPU (or core) that performs a certain task, e.g. executing integer arithmetic operations. An execution unit is one such example.
Back
A functional unit is a component of a CPU (or core) that performs a certain task, e.g. executing integer arithmetic operations. An execution unit is one such example.
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | A {{c1::functional unit}} is a component of a CPU (or core) that {{c2::performs a certain task}}, e.g. executing {{c3::integer arithmetic operations}}. An {{c4::execution unit}} is one such a functional unit. | A {{c1::functional unit}} is a component of a CPU (or core) that {{c2::performs a certain task}}, e.g. executing {{c3::integer arithmetic operations}}. An {{c4::execution unit}} is one such example. |
Note 2: ETH::2. Semester::PProg
Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: DiKJq81cx0
modified
Before
Front
A program has a race condition if, during any possible execution with the same inputs, its observable behaviour (results, output, ...) may change if events happen in different order. Events here are typically scheduler interactions causing different interleavings, but could also be, e.g. changing network latency. Race condition is often used interchangeably with data race.
Back
A program has a race condition if, during any possible execution with the same inputs, its observable behaviour (results, output, ...) may change if events happen in different order. Events here are typically scheduler interactions causing different interleavings, but could also be, e.g. changing network latency. Race condition is often used interchangeably with data race.
After
Front
A program has a race condition if, during any possible execution with the same inputs, its observable behaviour (results, output, ...) may change if events happen in different order. Events here are typically scheduler interactions causing different interleavings, but could also be, e.g. changing network latency. Race condition is often used interchangeably with data race.
Back
A program has a race condition if, during any possible execution with the same inputs, its observable behaviour (results, output, ...) may change if events happen in different order. Events here are typically scheduler interactions causing different interleavings, but could also be, e.g. changing network latency. Race condition is often used interchangeably with data race.
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | A program has a {{c1::race condition}} if, during any possible execution with the same inputs, its {{c2::observable behaviour (results, output, ...)}} may change if {{c3::events happen in different order}}. Events here are typically {{c4::scheduler interactions causing different interleavings}}, but could also be, e.g. changing network latency. Race condition is often used interchangeably with {{c5::data race}}. | A program has a {{c1::race condition}} if, during any possible execution with the same inputs, its {{c2::observable behaviour (results, output, ...)}} may change if {{c3::events happen in different order}}. Events here are typically {{c4::scheduler interactions causing different interleavings}}, but could also be, e.g. changing network latency. {{c1::Race condition}} is often used interchangeably with {{c5::data race}}. |
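
To make the card's definition concrete, here is a minimal Java sketch (class name and loop counts are illustrative, not from the note): two threads increment a shared counter without synchronization, so the observable result depends on how the scheduler interleaves their read-modify-write steps.

```java
// Minimal race-condition sketch: counter++ is a read-modify-write, so
// increments from the two threads can interleave and get lost. The printed
// result is often less than 200000 and may differ from run to run.
public class RaceConditionDemo {
    static int counter = 0; // shared, unsynchronized state

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // not atomic: load, add, store
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter); // expected 200000; usually less
    }
}
```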
Note 3: ETH::2. Semester::PProg
Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: FKu}=}:F8|
modified
Before
Front
Given multiple threads, each executing a sequence of instructions, an interleaving is a sequence of instructions obtained from merging the individual sequences. A sequentially consistent interleaving is one where the relative order of statements from one thread is preserved.
Back
Given multiple threads, each executing a sequence of instructions, an interleaving is a sequence of instructions obtained from merging the individual sequences. A sequentially consistent interleaving is one where the relative order of statements from one thread is preserved.
After
Front
Given multiple threads, each executing a sequence of instructions, an interleaving is a sequence of instructions obtained from merging the individual sequences. A sequentially consistent interleaving is one where the relative order of statements from one thread is preserved.
Back
Given multiple threads, each executing a sequence of instructions, an interleaving is a sequence of instructions obtained from merging the individual sequences. A sequentially consistent interleaving is one where the relative order of statements from one thread is preserved.
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | Given multiple threads, each executing a sequence of instructions, an {{c1::interleaving}} is {{c2::a sequence of instructions obtained from merging the individual sequences}}. A {{c3::sequentially consistent}} interleaving is one where {{c4::the relative order of statements from one thread is preserved}}. | Given multiple threads, each executing a sequence of instructions, an {{c1::interleaving}} is {{c2::a sequence of instructions obtained from merging the individual sequences}}. A {{c3::sequentially consistent}} {{c1::interleaving}} is one where {{c4::the relative order of statements from one thread is preserved}}. |
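
As a small illustration (thread labels A and B are ours): each thread below executes its two prints in program order, so any run's output is one of the sequentially consistent interleavings of the two sequences.

```java
// Each run prints one sequentially consistent interleaving, e.g.
// A1 A2 B1 B2, A1 B1 A2 B2 or B1 A1 B2 A2 - but never A2 before A1,
// because the relative order within each thread is preserved.
public class InterleavingDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> { System.out.println("A1"); System.out.println("A2"); });
        Thread b = new Thread(() -> { System.out.println("B1"); System.out.println("B2"); });
        a.start();
        b.start();
        a.join();
        b.join();
    }
}
```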
Note 4: ETH::2. Semester::PProg
Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: bD]7})PU$-
modified
Before
Front
Parallelism means doing multiple things at the same time (as opposed to concurrency: dealing with multiple things at the same time). Performing computations simultaneously; either actually, if sufficient computation units are available, or virtually, via some form of alternation. Often used interchangeably with concurrency. Parallelism can be specified explicitly by manually assigning tasks to threads or implicitly by using a framework that distributes tasks automatically.
Back
Parallelism means doing multiple things at the same time (as opposed to concurrency: dealing with multiple things at the same time). Performing computations simultaneously; either actually, if sufficient computation units are available, or virtually, via some form of alternation. Often used interchangeably with concurrency. Parallelism can be specified explicitly by manually assigning tasks to threads or implicitly by using a framework that distributes tasks automatically.
After
Front
Parallelism means doing multiple things at the same time (as opposed to concurrency: dealing with multiple things at the same time). Performing computations simultaneously; either actually, if sufficient computation units are available, or virtually, via some form of alternation. Often used interchangeably with concurrency. Parallelism can be specified explicitly by manually assigning tasks to threads or implicitly by using a framework that distributes tasks automatically.
Back
Parallelism means doing multiple things at the same time (as opposed to concurrency: dealing with multiple things at the same time). Performing computations simultaneously; either actually, if sufficient computation units are available, or virtually, via some form of alternation. Often used interchangeably with concurrency. Parallelism can be specified explicitly by manually assigning tasks to threads or implicitly by using a framework that distributes tasks automatically.
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | {{c1::Parallelism}} means {{c2::doing multiple things at the same time}} (as opposed to concurrency: dealing with multiple things at the same time). Performing computations {{c3::simultaneously}}; either actually, if sufficient computation units are available, or virtually, via some form of alternation. Often used interchangeably with concurrency. Parallelism can be specified explicitly by {{c4::manually assigning tasks to threads}} or implicitly by {{c5::using a framework that distributes tasks automatically}}. | {{c1::Parallelism}} means {{c2::doing multiple things at the same time}} (as opposed to concurrency: dealing with multiple things at the same time). Performing computations {{c3::simultaneously}}; either actually, if sufficient computation units are available, or virtually, via some form of alternation. Often used interchangeably with concurrency. {{c1::Parallelism}} can be specified explicitly by {{c4::manually assigning tasks to threads}} or implicitly by {{c5::using a framework that distributes tasks automatically}}. |
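
A short sketch of the card's last sentence (the numbers and the choice of parallel streams as the framework are illustrative): explicit parallelism assigns the work to threads by hand, while implicit parallelism lets a framework distribute it.

```java
import java.util.stream.IntStream;

public class ParallelismStyles {
    public static void main(String[] args) throws InterruptedException {
        // Explicit: we manually decide which half of the range each thread gets.
        Thread lower = new Thread(() -> sumOfSquares(0, 500));
        Thread upper = new Thread(() -> sumOfSquares(500, 1000));
        lower.start();
        upper.start();
        lower.join();
        upper.join();

        // Implicit: the stream framework splits and schedules the work itself.
        long total = IntStream.range(0, 1000).parallel()
                              .mapToLong(i -> (long) i * i)
                              .sum();
        System.out.println("total = " + total);
    }

    static void sumOfSquares(int from, int to) {
        long s = 0;
        for (int i = from; i < to; i++) s += (long) i * i;
        System.out.println("sum of squares on [" + from + ", " + to + ") = " + s);
    }
}
```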
Note 5: ETH::2. Semester::PProg
Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: fcsPFI,=e0
modified
Before
Front
Locality has several meanings in parallel programming: 1. Locally reason about one thread at a time (thread modularity) - simplifies correctness arguments. 2. Data locality: related memory locations are accessed shortly after each other - improves cache usage. 3. Code locality: straight-line code increases opportunities for instruction level parallelism.
Back
Locality has several meanings in parallel programming: 1. Locally reason about one thread at a time (thread modularity) - simplifies correctness arguments. 2. Data locality: related memory locations are accessed shortly after each other - improves cache usage. 3. Code locality: straight-line code increases opportunities for instruction level parallelism.
After
Front
Locality has several meanings in parallel programming:
- Locally reason about one thread at a time (thread modularity) - simplifies correctness arguments.
- Data locality: related memory locations are accessed shortly after each other - improves cache usage
- Code locality: straight-line code increases opportunities for instruction level parallelism.
Back
Locality has several meanings in parallel programming:
- Locally reason about one thread at a time (thread modularity) - simplifies correctness arguments.
- Data locality: related memory locations are accessed shortly after each other - improves cache usage
- Code locality: straight-line code increases opportunities for instruction level parallelism.
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | {{c1::Locality}} has several meanings in parallel programming: 1. {{c2::Locally reason about one thread at a time}} (thread modularity) - simplifies correctness arguments. 2. {{c3::Data locality}}: related memory locations are accessed shortly after each other - improves cache usage 3. {{c4::Code locality}}: straight-line code increases opportunities for instruction level parallelism. | {{c1::Locality}} has several meanings in parallel programming: <br><br><ol><li>{{c2::Locally reason about one thread at a time}} (thread modularity) - simplifies correctness arguments.</li><li>{{c3::Data locality}}: related memory locations are accessed shortly after each other - improves cache usage</li><li>{{c4::Code locality}}: straight-line code increases opportunities for instruction level parallelism.</li></ol><br> |
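
The data-locality item can be observed directly; a rough Java sketch (array size and the naive timing are illustrative): the row-major loop visits neighbouring memory locations shortly after each other, the column-major loop over the same array does not, and on large arrays the first is typically much faster.

```java
public class DataLocalityDemo {
    public static void main(String[] args) {
        int n = 4096;
        int[][] a = new int[n][n];

        long t0 = System.nanoTime();
        long rowSum = 0;
        for (int i = 0; i < n; i++)      // row-major: consecutive addresses,
            for (int j = 0; j < n; j++)  // good cache usage
                rowSum += a[i][j];
        long t1 = System.nanoTime();

        long colSum = 0;
        for (int j = 0; j < n; j++)      // column-major: jumps between rows,
            for (int i = 0; i < n; i++)  // poor cache usage
                colSum += a[i][j];
        long t2 = System.nanoTime();

        System.out.println("row-major:    " + (t1 - t0) / 1_000_000 + " ms");
        System.out.println("column-major: " + (t2 - t1) / 1_000_000 + " ms");
    }
}
```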
Note 6: ETH::2. Semester::PProg
Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: fn|<.%5[Cr
modified
Before
Front
A liveness property is a property of a system: "something good eventually happens". Can only be violated in infinite time. Infinite loops and starvation are typical liveness properties.
Back
A liveness property is a property of a system: "something good eventually happens". Can only be violated in infinite time. Infinite loops and starvation are typical liveness properties.
After
Front
A liveness property is a property of a system: "something good eventually happens". Can only be violated in infinite time. Infinite loops and starvation are typical liveness properties.
Back
A liveness property is a property of a system: "something good eventually happens". Can only be violated in infinite time. Infinite loops and starvation are typical liveness properties.
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | A {{c1::liveness property}} is a property of a system: {{c2::"something good eventually happens"}}. Can only be violated in {{c3::infinite time}}. {{c4::Infinite loops and starvation}} are typical liveness properties. | A {{c1::liveness property}} is a property of a system: {{c2::"something good eventually happens"}}. Can only be violated in {{c3::infinite time}}. {{c4::Infinite loops and starvation}} are typical {{c1:: liveness properties}}. |
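
For illustration, a minimal Java sketch of a liveness violation (names are ours): the flag is not volatile, so the JVM may never make the write visible to the worker, whose loop can then spin forever; no finite prefix of the execution demonstrates the violation.

```java
public class LivenessDemo {
    static boolean done = false; // missing volatile: visibility not guaranteed

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!done) { /* spin */ }   // may never terminate
            System.out.println("finished");
        });
        worker.start();
        Thread.sleep(100);
        done = true;   // the worker is allowed to never observe this write
        worker.join(); // hangs forever if the loop never exits
    }
}
```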
Note 7: ETH::2. Semester::PProg
Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: r:&*y8!jY9
modified
Before
Front
T_1 (sequential execution time) is the time that is required to perform some work on a single processor.
Back
T_1 (sequential execution time) is the time that is required to perform some work on a single processor.
After
Front
\(T_1\) (sequential execution time) is the time that is required to perform some work on a single processor.
Back
\(T_1\) (sequential execution time) is the time that is required to perform some work on a single processor.
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | {{c1::T_1}} ({{c2::sequential execution time}}) is the time that is required to perform some work on a {{c3::single processor}}. | {{c1::\(T_1\)}} ({{c2::sequential execution time}}) is the time that is required to perform some work on a {{c3::single processor}}. |
Note 8: ETH::2. Semester::PProg
Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: r@,<:U1c3(
modified
Before
Front
Efficiency expresses how much of the available CPU performance can be used. Heavily limited by the sequential part of a program. Efficiency = S_p/p.
Back
Efficiency expresses how much of the available CPU performance can be used. Heavily limited by the sequential part of a program. Efficiency = S_p/p.
After
Front
Efficiency expresses how much of the available CPU performance can be used. Heavily limited by the sequential part of a program. Efficiency = \(\frac{S_p}{p}\).
Back
Efficiency expresses how much of the available CPU performance can be used. Heavily limited by the sequential part of a program. Efficiency = \(\frac{S_p}{p}\).
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | {{c1::Efficiency}} expresses {{c2::how much of the available CPU performance can be used}}. Heavily limited by {{c3::the sequential part of a program}}. Efficiency = S_p/p. | {{c1::Efficiency}} expresses {{c2::how much of the available CPU performance can be used}}. Heavily limited by {{c3::the sequential part of a program}}. {{c1::Efficiency}} = {{c4::\(\frac{S_p} p\)}}. |
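
A worked instance of the formula, with illustrative numbers and assuming the usual definition of speedup, \(S_p = T_1 / T_p\):

```latex
% Illustrative numbers: T_1 = 120 s on one processor, T_4 = 40 s on p = 4.
\[
  S_4 = \frac{T_1}{T_4} = \frac{120}{40} = 3,
  \qquad
  \text{Efficiency} = \frac{S_4}{p} = \frac{3}{4} = 0.75 .
\]
% Only 75% of the available CPU performance is actually used.
```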
Note 9: ETH::2. Semester::PProg
Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: s.@YR(&3S3
modified
Before
Front
The maximum possible speedup (parallelism) is T_1/T_∞. Here T_p is the time required to perform work on p processors, while T_∞ is the time required with infinite processors (only sequential part matters). T_1 is the sequential execution time.
Back
The maximum possible speedup (parallelism) is T_1/T_∞. Here T_p is the time required to perform work on p processors, while T_∞ is the time required with infinite processors (only sequential part matters). T_1 is the sequential execution time.
After
Front
The maximum possible speedup (parallelism) is \(\frac{T_1}{T_\infty}\). Here \(T_p\) is the time required to perform work on p processors, while \(T_\infty\) is the time required with infinite processors (only sequential part matters). \(T_1\) is the sequential execution time.
Back
The maximum possible speedup (parallelism) is \(\frac{T_1}{T_\infty}\). Here \(T_p\) is the time required to perform work on p processors, while \(T_\infty\) is the time required with infinite processors (only sequential part matters). \(T_1\) is the sequential execution time.
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | The maximum possible speedup ({{c1::parallelism}}) is {{c2::T_1/T_∞}}. Here {{c3::T_p}} is the time required to perform work on {{c4::p processors}}, while {{c5::T_∞}} is the time required with {{c6::infinite processors}} (only sequential part matters). {{c7::T_1}} is the {{c8::sequential execution time}}. | The maximum possible speedup ({{c1::parallelism}}) is {{c2::\(\frac{T_1}{T_\infty} \)}}. Here {{c3::\(T_p\)}} is the time required to perform work on {{c4::p processors}}, while {{c5::\(T_\infty\)}} is the time required with {{c6::infinite processors}} (only sequential part matters). {{c7::\(T_1\)}} is the {{c8::sequential execution time}}. |
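
A worked instance with illustrative numbers:

```latex
% Suppose T_1 = 100 s, and the inherently sequential part keeps
% T_infinity at 10 s no matter how many processors are added. Then
\[
  \text{parallelism} = \frac{T_1}{T_\infty} = \frac{100}{10} = 10,
\]
% so no processor count can push the speedup beyond a factor of 10.
```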
Note 10: ETH::2. Semester::PProg
Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: t,$m]=w>7t
modified
Before
Front
Process context is all state associated with a process, including CPU state (registers, program counter), program state (stack, heap, resource handles), and additional management information. A thread also has a context, but it is typically much smaller.
Back
Process context is all state associated with a process, including CPU state (registers, program counter), program state (stack, heap, resource handles), and additional management information. A thread also has a context, but it is typically much smaller.
After
Front
Process context is all state associated with a process, including CPU state (registers, program counter), program state (stack, heap, resource handles), and additional management information.
Back
Process context is all state associated with a process, including CPU state (registers, program counter), program state (stack, heap, resource handles), and additional management information.
A thread also has a context, but it is typically much smaller.
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | {{c1::Process context}} is all state associated with a process, including {{c2::CPU state (registers, program counter)}}, {{c3::program state (stack, heap, resource handles)}}, and {{c4::additional management information}}. A thread also has a context, but it is typically much smaller. | {{c1::Process context}} is all state associated with a process, including {{c2::CPU state (registers, program counter)}}, {{c3::program state (stack, heap, resource handles)}}, and {{c4::additional management information}}. |
| Extra |  | A thread also has a context, but it is typically much smaller. |
Note 11: ETH::2. Semester::PProg
Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: unYLoX/LFH
modified
Before
Front
The ForkJoin framework embraces divide and conquer parallelism. Tasks can be spawned (forked) and joined by the framework. The ForkJoin framework automatically assigns tasks to Java threads and may execute multiple tasks in one thread to avoid thread context switching overhead.
Back
The ForkJoin framework embraces divide and conquer parallelism. Tasks can be spawned (forked) and joined by the framework. The ForkJoin framework automatically assigns tasks to Java threads and may execute multiple tasks in one thread to avoid thread context switching overhead.
After
Front
The ForkJoin framework embraces divide and conquer parallelism. Tasks can be spawned (forked) and joined by the framework. The ForkJoin framework automatically assigns tasks to Java threads and may execute multiple tasks in one thread to avoid thread context switching overhead.
Back
The ForkJoin framework embraces divide and conquer parallelism. Tasks can be spawned (forked) and joined by the framework. The ForkJoin framework automatically assigns tasks to Java threads and may execute multiple tasks in one thread to avoid thread context switching overhead.
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | The {{c1::ForkJoin framework}} embraces {{c2::divide and conquer parallelism}}. Tasks can be {{c3::spawned (forked) and joined}} by the framework. The ForkJoin framework automatically assigns tasks to Java threads and may execute {{c4::multiple tasks in one thread}} to avoid {{c5::thread context switching overhead}}. | The {{c1::ForkJoin framework}} embraces {{c2::divide and conquer parallelism}}. Tasks can be {{c3::spawned (forked) and joined}} by the framework. The {{c1::ForkJoin framework}} automatically assigns tasks to Java threads and may execute {{c4::multiple tasks in one thread}} to avoid {{c5::thread context switching overhead}}. |
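
A compact Java sketch of the pattern this card describes (class name, cutoff, and the array-sum task are illustrative): the task forks one half, computes the other half itself, then joins; the framework maps the resulting tasks onto its pool of worker threads.

```java
import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Divide-and-conquer sum with the ForkJoin framework.
public class SumTask extends RecursiveTask<Long> {
    private static final int CUTOFF = 1_000; // below this, solve sequentially
    private final long[] xs;
    private final int lo, hi;

    SumTask(long[] xs, int lo, int hi) { this.xs = xs; this.lo = lo; this.hi = hi; }

    @Override
    protected Long compute() {
        if (hi - lo <= CUTOFF) {
            long s = 0;
            for (int i = lo; i < hi; i++) s += xs[i];
            return s;
        }
        int mid = lo + (hi - lo) / 2;
        SumTask left = new SumTask(xs, lo, mid);
        SumTask right = new SumTask(xs, mid, hi);
        left.fork();              // spawn the left half as a task
        long r = right.compute(); // compute the right half in this thread
        return r + left.join();   // wait for the forked task's result
    }

    public static void main(String[] args) {
        long[] xs = new long[1_000_000];
        Arrays.fill(xs, 1L);
        long sum = ForkJoinPool.commonPool().invoke(new SumTask(xs, 0, xs.length));
        System.out.println(sum); // 1000000
    }
}
```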
Note 12: ETH::2. Semester::PProg
Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: y$G_&;3^og
modified
Before
Front
A lock is reentrant if it can be acquired (and released) multiple times by the same thread. If a lock is non-reentrant, trying to acquire it again might cause an exception or other problems.
Back
A lock is reentrant if it can be acquired (and released) multiple times by the same thread. If a lock is non-reentrant, trying to acquire it again might cause an exception or other problems.
After
Front
A lock is reentrant if it can be acquired (and released) multiple times by the same thread. If a lock is non-reentrant, trying to acquire it again might cause an exception or other problems.
Back
A lock is reentrant if it can be acquired (and released) multiple times by the same thread. If a lock is non-reentrant, trying to acquire it again might cause an exception or other problems.
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | A lock is {{c1::reentrant}} if it can be {{c2::acquired (and released) multiple times by the same thread}}. If a lock is non-reentrant, trying to acquire it again might cause {{c3::an exception or other problems}}. | A lock is {{c1::reentrant}} if it can be {{c2::acquired (and released) multiple times by the same thread}}. If a lock is {{c1::non-reentrant}}, trying to acquire it again might cause {{c3::an exception or other problems}}. |
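
Java's intrinsic locks are reentrant, which the following minimal sketch relies on (class `Account` and its methods are illustrative): `transfer` already holds the lock on `this` when it calls `withdraw`, and with a non-reentrant lock this inner acquisition would block forever.

```java
public class Account {
    private long balance = 100;

    public synchronized void withdraw(long amount) { balance -= amount; }
    public synchronized void deposit(long amount)  { balance += amount; }

    public synchronized void transfer(Account to, long amount) {
        withdraw(amount);  // re-acquires this object's lock: fine, it is reentrant
        to.deposit(amount);
    }

    public static void main(String[] args) {
        Account a = new Account();
        Account b = new Account();
        a.transfer(b, 30);
        System.out.println(a.balance + " " + b.balance); // 70 130
    }
}
```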
Note 13: ETH::2. Semester::PProg
Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID:
modified
Note Type: Horvath Cloze
GUID:
zkq&n&o#}D
Before
Front
A lock is a token/resource that can be acquired by at most one thread at a time. Locks are typically used to enforce mutual exclusion by guarding/protecting a critical section. A lock can be acquired/locked by a thread, and is then held until it is released/unlocked. In Java, each object can be used as a lock (intrinsic/monitor lock).
Back
A lock is a token/resource that can be acquired by at most one thread at a time. Locks are typically used to enforce mutual exclusion by guarding/protecting a critical section. A lock can be acquired/locked by a thread, and is then held until it is released/unlocked. In Java, each object can be used as a lock (intrinsic/monitor lock).
After
Front
A lock is a token/resource that can be acquired by at most one thread at a time. Locks are typically used to enforce mutual exclusion by guarding/protecting a critical section. A lock can be acquired/locked by a thread, and is then held until it is released/unlocked. In Java, each object can be used as a lock (intrinsic/monitor lock).
Back
A lock is a token/resource that can be acquired by at most one thread at a time. Locks are typically used to enforce mutual exclusion by guarding/protecting a critical section. A lock can be acquired/locked by a thread, and is then held until it is released/unlocked. In Java, each object can be used as a lock (intrinsic/monitor lock).
Field-by-field Comparison
| Field | Before | After |
|---|---|---|
| Text | A {{c1::lock}} is a {{c2::token/resource that can be acquired by at most one thread at a time}}. Locks are typically used to {{c3::enforce mutual exclusion}} by guarding/protecting a critical section. A lock can be {{c4::acquired/locked}} by a thread, and is then held until it is {{c5::released/unlocked}}. In Java, each object can be used as a lock ({{c6::intrinsic/monitor lock}}). | A {{c1::lock}} is a {{c2::token/resource that can be acquired by at most one thread at a time}}. {{c1::Locks}} are typically used to {{c3::enforce mutual exclusion}} by guarding/protecting a critical section. A {{c1::lock}} can be {{c4::acquired/locked}} by a thread, and is then held until it is {{c5::released/unlocked}}. In Java, each object can be used as a {{c1::lock}} ({{c6::intrinsic/monitor lock}}). |
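
To close, a minimal Java sketch of the card's last point (class name is ours): any plain object can serve as the intrinsic/monitor lock that guards a critical section.

```java
public class GuardedCounter {
    private final Object lock = new Object(); // any object works as a lock
    private int count = 0;

    public void increment() {
        synchronized (lock) { // acquire; released automatically on block exit
            count++;          // critical section: at most one thread at a time
        }
    }

    public static void main(String[] args) throws InterruptedException {
        GuardedCounter c = new GuardedCounter();
        Runnable work = () -> { for (int i = 0; i < 100_000; i++) c.increment(); };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(c.count); // always 200000
    }
}
```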