Anki Deck Changes

Commit: ea18cde1 - fixed some pprog terminology

Author: tprazak <t.prazak@gmail.com>

Date: 2026-02-20T07:55:24+01:00

Changes: 13 note(s) changed (0 added, 13 modified, 0 deleted)

Note 1: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: DR#[(B?d4d
modified

Before

Front

ETH::2._Semester::PProg::Terminology
A functional unit is a component of a CPU (or core) that performs a certain task, e.g. executing integer arithmetic operations. An execution unit is one such a functional unit.

Back

ETH::2._Semester::PProg::Terminology
A functional unit is a component of a CPU (or core) that performs a certain task, e.g. executing integer arithmetic operations. An execution unit is one such a functional unit.

After

Front

ETH::2._Semester::PProg::Terminology
A functional unit is a component of a CPU (or core) that performs a certain task, e.g. executing integer arithmetic operations. An execution unit is one such example.

Back

ETH::2._Semester::PProg::Terminology
A functional unit is a component of a CPU (or core) that performs a certain task, e.g. executing integer arithmetic operations. An execution unit is one such example.
Field-by-field Comparison
Field: Text
Before: A {{c1::functional unit}} is a component of a CPU (or core) that {{c2::performs a certain task}}, e.g. executing {{c3::integer arithmetic operations}}. An {{c4::execution unit}} is one such a functional unit.
After: A {{c1::functional unit}} is a component of a CPU (or core) that {{c2::performs a certain task}}, e.g. executing {{c3::integer arithmetic operations}}. An {{c4::execution unit}} is one such example.
Tags: ETH::2._Semester::PProg::Terminology

Note 2: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: DiKJq81cx0
modified

Before

Front

ETH::2._Semester::PProg::Terminology
A program has a race condition if, during any possible execution with the same inputs, its observable behaviour (results, output, ...) may change if events happen in different order. Events here are typically scheduler interactions causing different interleavings, but could also be, e.g. changing network latency. Race condition is often used interchangeably with data race.

Back

ETH::2._Semester::PProg::Terminology
A program has a race condition if, during any possible execution with the same inputs, its observable behaviour (results, output, ...) may change if events happen in different order. Events here are typically scheduler interactions causing different interleavings, but could also be, e.g. changing network latency. Race condition is often used interchangeably with data race.

After

Front

ETH::2._Semester::PProg::Terminology
A program has a race condition if, during any possible execution with the same inputs, its observable behaviour (results, output, ...) may change if events happen in different order. Events here are typically scheduler interactions causing different interleavings, but could also be, e.g. changing network latency. Race condition is often used interchangeably with data race.

Back

ETH::2._Semester::PProg::Terminology
A program has a race condition if, during any possible execution with the same inputs, its observable behaviour (results, output, ...) may change if events happen in different order. Events here are typically scheduler interactions causing different interleavings, but could also be, e.g. changing network latency. Race condition is often used interchangeably with data race.
Field-by-field Comparison
Field: Text
Before: A program has a {{c1::race condition}} if, during any possible execution with the same inputs, its {{c2::observable behaviour (results, output, ...)}} may change if {{c3::events happen in different order}}. Events here are typically {{c4::scheduler interactions causing different interleavings}}, but could also be, e.g. changing network latency. Race condition is often used interchangeably with {{c5::data race}}.
After: A program has a {{c1::race condition}} if, during any possible execution with the same inputs, its {{c2::observable behaviour (results, output, ...)}} may change if {{c3::events happen in different order}}. Events here are typically {{c4::scheduler interactions causing different interleavings}}, but could also be, e.g. changing network latency. {{c1::Race condition}} is often used interchangeably with {{c5::data race}}.
Tags: ETH::2._Semester::PProg::Terminology
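
Editorial illustration (not part of the deck; all names are made up): a minimal runnable Java sketch of the data race / race condition this card describes. Two threads perform unsynchronized read-modify-write updates, so the observable output depends on the interleaving.

```java
public class DataRaceDemo {
    static int counter = 0; // shared and unsynchronized

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // unsynchronized read-modify-write: a data race
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Observable behaviour changes with event order: lost updates
        // typically make the result less than 200000, varying per run.
        System.out.println(counter);
    }
}
```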

Note 3: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: FKu}=}:F8|
modified

Before

Front

ETH::2._Semester::PProg::Terminology
Given multiple threads, each executing a sequence of instructions, an interleaving is a sequence of instructions obtained from merging the individual sequences. A sequentially consistent interleaving is one where the relative order of statements from one thread is preserved.

Back

ETH::2._Semester::PProg::Terminology
Given multiple threads, each executing a sequence of instructions, an interleaving is a sequence of instructions obtained from merging the individual sequences. A sequentially consistent interleaving is one where the relative order of statements from one thread is preserved.

After

Front

ETH::2._Semester::PProg::Terminology
Given multiple threads, each executing a sequence of instructions, an interleaving is a sequence of instructions obtained from merging the individual sequences. A sequentially consistent interleaving is one where the relative order of statements from one thread is preserved.

Back

ETH::2._Semester::PProg::Terminology
Given multiple threads, each executing a sequence of instructions, an interleaving is a sequence of instructions obtained from merging the individual sequences. A sequentially consistent interleaving is one where the relative order of statements from one thread is preserved.
Field-by-field Comparison
Field: Text
Before: Given multiple threads, each executing a sequence of instructions, an {{c1::interleaving}} is {{c2::a sequence of instructions obtained from merging the individual sequences}}. A {{c3::sequentially consistent}} interleaving is one where {{c4::the relative order of statements from one thread is preserved}}.
After: Given multiple threads, each executing a sequence of instructions, an {{c1::interleaving}} is {{c2::a sequence of instructions obtained from merging the individual sequences}}. A {{c3::sequentially consistent}} {{c1::interleaving}}&nbsp;is one where {{c4::the relative order of statements from one thread is preserved}}.
Tags: ETH::2._Semester::PProg::Terminology
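
Editorial illustration (not from the deck; names are illustrative): the classic two-thread litmus test for sequentially consistent interleavings. Merging the four statements while preserving each thread's order rules out one outcome.

```java
public class InterleavingDemo {
    static int x = 0, y = 0, r1 = 0, r2 = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> { x = 1; y = 1; });   // thread A's program order
        Thread b = new Thread(() -> { r1 = y; r2 = x; }); // thread B's program order
        a.start(); b.start();
        a.join(); b.join();
        // Every sequentially consistent interleaving merges the four statements
        // while keeping x=1 before y=1 and r1=y before r2=x. Under that
        // restriction, r1 == 1 && r2 == 0 is impossible. (On a real JVM this
        // program contains data races, so non-SC outcomes are not ruled out.)
        System.out.println("r1=" + r1 + ", r2=" + r2);
    }
}
```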

Note 4: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: bD]7})PU$-
modified

Before

Front

ETH::2._Semester::PProg::Terminology
Parallelism means doing multiple things at the same time (as opposed to concurrency: dealing with multiple things at the same time). Performing computations simultaneously; either actually, if sufficient computation units are available, or virtually, via some form of alternation. Often used interchangeably with concurrency. Parallelism can be specified explicitly by manually assigning tasks to threads or implicitly by using a framework that distributes tasks automatically.

Back

ETH::2._Semester::PProg::Terminology
Parallelism means doing multiple things at the same time (as opposed to concurrency: dealing with multiple things at the same time). Performing computations simultaneously; either actually, if sufficient computation units are available, or virtually, via some form of alternation. Often used interchangeably with concurrency. Parallelism can be specified explicitly by manually assigning tasks to threads or implicitly by using a framework that distributes tasks automatically.

After

Front

ETH::2._Semester::PProg::Terminology
Parallelism means doing multiple things at the same time (as opposed to concurrency: dealing with multiple things at the same time). Performing computations simultaneously; either actually, if sufficient computation units are available, or virtually, via some form of alternation. Often used interchangeably with concurrency. Parallelism can be specified explicitly by manually assigning tasks to threads or implicitly by using a framework that distributes tasks automatically.

Back

ETH::2._Semester::PProg::Terminology
Parallelism means doing multiple things at the same time (as opposed to concurrency: dealing with multiple things at the same time). Performing computations simultaneously; either actually, if sufficient computation units are available, or virtually, via some form of alternation. Often used interchangeably with concurrency. Parallelism can be specified explicitly by manually assigning tasks to threads or implicitly by using a framework that distributes tasks automatically.
Field-by-field Comparison
Field: Text
Before: {{c1::Parallelism}} means {{c2::doing multiple things at the same time}} (as opposed to concurrency: dealing with multiple things at the same time). Performing computations {{c3::simultaneously}}; either actually, if sufficient computation units are available, or virtually, via some form of alternation. Often used interchangeably with concurrency. Parallelism can be specified explicitly by {{c4::manually assigning tasks to threads}} or implicitly by {{c5::using a framework that distributes tasks automatically}}.
After: {{c1::Parallelism}} means {{c2::doing multiple things at the same time}} (as opposed to concurrency: dealing with multiple things at the same time). Performing computations {{c3::simultaneously}}; either actually, if sufficient computation units are available, or virtually, via some form of alternation. Often used interchangeably with concurrency. {{c1::Parallelism}}&nbsp;can be specified explicitly by {{c4::manually assigning tasks to threads}} or implicitly by {{c5::using a framework that distributes tasks automatically}}.
Tags: ETH::2._Semester::PProg::Terminology
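
Editorial sketch contrasting the two styles named in the card; using Java parallel streams as the implicit-framework example is my choice of illustration, not the deck's.

```java
import java.util.stream.LongStream;

public class ParallelismStyles {
    public static void main(String[] args) throws InterruptedException {
        // Explicit parallelism: manually assign a task to a thread.
        Thread worker = new Thread(() -> System.out.println("explicit: worker task"));
        worker.start();
        System.out.println("explicit: main task");
        worker.join();

        // Implicit parallelism: the streams framework distributes the work
        // across threads automatically; we never create a thread ourselves.
        long sum = LongStream.rangeClosed(1, 1_000_000).parallel().sum();
        System.out.println("implicit: sum = " + sum);
    }
}
```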

Note 5: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: fcsPFI,=e0
modified

Before

Front

ETH::2._Semester::PProg::Terminology
Locality has several meanings in parallel programming: 1. Locally reason about one thread at a time (thread modularity) - simplifies correctness arguments. 2. Data locality: related memory locations are accessed shortly after each other - improves cache usage. 3. Code locality: straight-line code increases opportunities for instruction level parallelism.

Back

ETH::2._Semester::PProg::Terminology
Locality has several meanings in parallel programming: 1. Locally reason about one thread at a time (thread modularity) - simplifies correctness arguments. 2. Data locality: related memory locations are accessed shortly after each other - improves cache usage. 3. Code locality: straight-line code increases opportunities for instruction level parallelism.

After

Front

ETH::2._Semester::PProg::Terminology
Locality has several meanings in parallel programming:

  1. Locally reason about one thread at a time (thread modularity) - simplifies correctness arguments.
  2. Data locality: related memory locations are accessed shortly after each other - improves cache usage
  3. Code locality: straight-line code increases opportunities for instruction level parallelism.

Back

ETH::2._Semester::PProg::Terminology
Locality has several meanings in parallel programming:

  1. Locally reason about one thread at a time (thread modularity) - simplifies correctness arguments.
  2. Data locality: related memory locations are accessed shortly after each other - improves cache usage
  3. Code locality: straight-line code increases opportunities for instruction level parallelism.

Field-by-field Comparison
Field: Text
Before: {{c1::Locality}} has several meanings in parallel programming: 1. {{c2::Locally reason about one thread at a time}} (thread modularity) - simplifies correctness arguments. 2. {{c3::Data locality}}: related memory locations are accessed shortly after each other - improves cache usage. 3. {{c4::Code locality}}: straight-line code increases opportunities for instruction level parallelism.
After: {{c1::Locality}} has several meanings in parallel programming: <br><br><ol><li>{{c2::Locally reason about one thread at a time}} (thread modularity) - simplifies correctness arguments.</li><li>{{c3::Data locality}}: related memory locations are accessed shortly after each other - improves cache usage</li><li>{{c4::Code locality}}: straight-line code increases opportunities for instruction level parallelism.</li></ol><br>
Tags: ETH::2._Semester::PProg::Terminology
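
Editorial sketch making the data-locality point concrete (dimensions and names are illustrative): both loops do identical work, but only the first accesses related memory locations shortly after each other.

```java
public class DataLocalityDemo {
    public static void main(String[] args) {
        int n = 2048;
        int[][] a = new int[n][n];
        long sum = 0;

        // Good data locality: the inner loop walks one row, so consecutive
        // accesses fall on the same cache lines.
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                sum += a[i][j];

        // Poor data locality: a[j][i] hops to a different row (a separate
        // array object in Java) on every step, defeating the cache.
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                sum += a[j][i];

        System.out.println(sum); // same result; the second loop is just slower
    }
}
```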

Note 6: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: fn|<.%5[Cr
modified

Before

Front

ETH::2._Semester::PProg::Terminology
A liveness property is a property of a system: "something good eventually happens". Can only be violated in infinite time. Infinite loops and starvation are typical liveness properties.

Back

ETH::2._Semester::PProg::Terminology
A liveness property is a property of a system: "something good eventually happens". Can only be violated in infinite time. Infinite loops and starvation are typical liveness properties.

After

Front

ETH::2._Semester::PProg::Terminology
A liveness property is a property of a system: "something good eventually happens". Can only be violated in infinite time. Infinite loops and starvation are typical liveness properties.

Back

ETH::2._Semester::PProg::Terminology
A liveness property is a property of a system: "something good eventually happens". Can only be violated in infinite time. Infinite loops and starvation are typical liveness properties.
Field-by-field Comparison
Field: Text
Before: A {{c1::liveness property}} is a property of a system: {{c2::"something good eventually happens"}}. Can only be violated in {{c3::infinite time}}. {{c4::Infinite loops and starvation}} are typical liveness properties.
After: A {{c1::liveness property}} is a property of a system: {{c2::"something good eventually happens"}}. Can only be violated in {{c3::infinite time}}. {{c4::Infinite loops and starvation}} are typical {{c1:: liveness properties}}.
Tags: ETH::2._Semester::PProg::Terminology
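
Editorial sketch of why liveness violations need infinite time; the flag and class names are made up.

```java
public class LivenessSketch {
    static volatile boolean done = false;

    public static void main(String[] args) {
        new Thread(() -> {
            while (!done) { }                 // liveness: "this loop eventually exits"
            System.out.println("progress!");
        }).start();
        // If the next line is removed, the waiter spins forever, yet no finite
        // observation proves the violation: after any finite prefix, done could
        // still be set later. Hence liveness can only be violated in infinite time.
        done = true;
    }
}
```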

Note 7: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: r:&*y8!jY9
modified

Before

Front

ETH::2._Semester::PProg::Terminology
T_1 (sequential execution time) is the time that is required to perform some work on a single processor.

Back

ETH::2._Semester::PProg::Terminology
T_1 (sequential execution time) is the time that is required to perform some work on a single processor.

After

Front

ETH::2._Semester::PProg::Terminology
\(T_1\) (sequential execution time) is the time that is required to perform some work on a single processor.

Back

ETH::2._Semester::PProg::Terminology
\(T_1\) (sequential execution time) is the time that is required to perform some work on a single processor.
Field-by-field Comparison
Field: Text
Before: {{c1::T_1}} ({{c2::sequential execution time}}) is the time that is required to perform some work on a {{c3::single processor}}.
After: {{c1::\(T_1\)}} ({{c2::sequential execution time}}) is the time that is required to perform some work on a {{c3::single processor}}.
Tags: ETH::2._Semester::PProg::Terminology

Note 8: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: r@,<:U1c3(
modified

Before

Front

ETH::2._Semester::PProg::Terminology
Efficiency expresses how much of the available CPU performance can be used. Heavily limited by the sequential part of a program. Efficiency = S_p/p.

Back

ETH::2._Semester::PProg::Terminology
Efficiency expresses how much of the available CPU performance can be used. Heavily limited by the sequential part of a program. Efficiency = S_p/p.

After

Front

ETH::2._Semester::PProg::Terminology
Efficiency expresses how much of the available CPU performance can be used. Heavily limited by the sequential part of a program. Efficiency = \(\frac{S_p} p\).

Back

ETH::2._Semester::PProg::Terminology
Efficiency expresses how much of the available CPU performance can be used. Heavily limited by the sequential part of a program. Efficiency = \(\frac{S_p} p\).
Field-by-field Comparison
Field: Text
Before: {{c1::Efficiency}} expresses {{c2::how much of the available CPU performance can be used}}. Heavily limited by {{c3::the sequential part of a program}}. Efficiency = {{c4::S_p/p}}.
After: {{c1::Efficiency}} expresses {{c2::how much of the available CPU performance can be used}}. Heavily limited by {{c3::the sequential part of a program}}. {{c1::Efficiency}}&nbsp;= {{c4::\(\frac{S_p} p\)}}.
Tags: ETH::2._Semester::PProg::Terminology
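
A worked instance of the definition, with illustrative numbers of my choosing; here \(S_p = T_1/T_p\) is the usual speedup on \(p\) processors (the next note's \(T_1/T_\infty\) is its limit).

```latex
% Illustrative numbers, not from the deck:
% T_1 = 80\,\text{s sequentially}, \quad T_8 = 20\,\text{s on } p = 8 \text{ processors}
S_p = \frac{T_1}{T_p} = \frac{80}{20} = 4,
\qquad
\text{Efficiency} = \frac{S_p}{p} = \frac{4}{8} = 0.5
```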

Note 9: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: s.@YR(&3S3
modified

Before

Front

ETH::2._Semester::PProg::Terminology
The maximum possible speedup (parallelism) is T_1/T_∞. Here T_p is the time required to perform work on p processors, while T_∞ is the time required with infinite processors (only sequential part matters). T_1 is the sequential execution time.

Back

ETH::2._Semester::PProg::Terminology
The maximum possible speedup (parallelism) is T_1/T_∞. Here T_p is the time required to perform work on p processors, while T_∞ is the time required with infinite processors (only sequential part matters). T_1 is the sequential execution time.

After

Front

ETH::2._Semester::PProg::Terminology
The maximum possible speedup (parallelism) is \(\frac{T_1}{T_\infty} \). Here \(T_p\) is the time required to perform work on p processors, while \(T_\infty\) is the time required with infinite processors (only sequential part matters). \(T_1\) is the sequential execution time.

Back

ETH::2._Semester::PProg::Terminology
The maximum possible speedup (parallelism) is \(\frac{T_1}{T_\infty} \). Here \(T_p\) is the time required to perform work on p processors, while \(T_\infty\) is the time required with infinite processors (only sequential part matters). \(T_1\) is the sequential execution time.
Field-by-field Comparison
Field: Text
Before: The maximum possible speedup ({{c1::parallelism}}) is {{c2::T_1/T_∞}}. Here {{c3::T_p}} is the time required to perform work on {{c4::p processors}}, while {{c5::T_∞}} is the time required with {{c6::infinite processors}} (only sequential part matters). {{c7::T_1}} is the {{c8::sequential execution time}}.
After: The maximum possible speedup ({{c1::parallelism}}) is {{c2::\(\frac{T_1}{T_\infty} \)}}. Here {{c3::\(T_p\)}} is the time required to perform work on {{c4::p processors}}, while {{c5::\(T_\infty\)}} is the time required with {{c6::infinite processors}} (only sequential part matters). {{c7::\(T_1\)}} is the {{c8::sequential execution time}}.
Tags: ETH::2._Semester::PProg::Terminology
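
A worked instance with illustrative numbers: suppose the total work takes \(T_1 = 100\,\text{s}\) and its inherently sequential part \(10\,\text{s}\), so \(T_\infty = 10\,\text{s}\).

```latex
\text{maximum speedup} = \frac{T_1}{T_\infty} = \frac{100}{10} = 10
% No finite p does better: T_p \ge T_\infty, so T_1 / T_p \le T_1 / T_\infty.
```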

Note 10: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: t,$m]=w>7t
modified

Before

Front

ETH::2._Semester::PProg::Terminology
Process context is all state associated with a process, including CPU state (registers, program counter), program state (stack, heap, resource handles), and additional management information. A thread also has a context, but it is typically much smaller.

Back

ETH::2._Semester::PProg::Terminology
Process context is all state associated with a process, including CPU state (registers, program counter), program state (stack, heap, resource handles), and additional management information. A thread also has a context, but it is typically much smaller.

After

Front

ETH::2._Semester::PProg::Terminology
Process context is all state associated with a process, including CPU state (registers, program counter), program state (stack, heap, resource handles), and additional management information

Back

ETH::2._Semester::PProg::Terminology
Process context is all state associated with a process, including CPU state (registers, program counter), program state (stack, heap, resource handles), and additional management information

A thread also has a context, but it is typically much smaller.
Field-by-field Comparison
Field: Text
Before: {{c1::Process context}} is all state associated with a process, including {{c2::CPU state (registers, program counter)}}, {{c3::program state (stack, heap, resource handles)}}, and {{c4::additional management information}}. A thread also has a context, but it is typically {{c5::much smaller}}.
After: {{c1::Process context}} is all state associated with a process, including {{c2::CPU state (registers, program counter)}}, {{c3::program state (stack, heap, resource handles)}}, and {{c4::additional management information}}.&nbsp;

Field: Extra
Before: (empty)
After: A thread also has a context, but it is typically much smaller.
Tags: ETH::2._Semester::PProg::Terminology

Note 11: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: unYLoX/LFH
modified

Before

Front

ETH::2._Semester::PProg::Terminology
The ForkJoin framework embraces divide and conquer parallelism. Tasks can be spawned (forked) and joined by the framework. The ForkJoin framework automatically assigns tasks to Java threads and may execute multiple tasks in one thread to avoid thread context switching overhead.

Back

ETH::2._Semester::PProg::Terminology
The ForkJoin framework embraces divide and conquer parallelism. Tasks can be spawned (forked) and joined by the framework. The ForkJoin framework automatically assigns tasks to Java threads and may execute multiple tasks in one thread to avoid thread context switching overhead.

After

Front

ETH::2._Semester::PProg::Terminology
The ForkJoin framework embraces divide and conquer parallelism. Tasks can be spawned (forked) and joined by the framework. The ForkJoin framework automatically assigns tasks to Java threads and may execute multiple tasks in one thread to avoid thread context switching overhead.

Back

ETH::2._Semester::PProg::Terminology
The ForkJoin framework embraces divide and conquer parallelism. Tasks can be spawned (forked) and joined by the framework. The ForkJoin framework automatically assigns tasks to Java threads and may execute multiple tasks in one thread to avoid thread context switching overhead.
Field-by-field Comparison
Field: Text
Before: The {{c1::ForkJoin framework}} embraces {{c2::divide and conquer parallelism}}. Tasks can be {{c3::spawned (forked) and joined}} by the framework. The ForkJoin framework automatically assigns tasks to Java threads and may execute {{c4::multiple tasks in one thread}} to avoid {{c5::thread context switching overhead}}.
After: The {{c1::ForkJoin framework}} embraces {{c2::divide and conquer parallelism}}. Tasks can be {{c3::spawned (forked) and joined}} by the framework. The {{c1::ForkJoin framework}}&nbsp;automatically assigns tasks to Java threads and may execute {{c4::multiple tasks in one thread}} to avoid {{c5::thread context switching overhead}}.
Tags: ETH::2._Semester::PProg::Terminology
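
Editorial sketch of the divide-and-conquer pattern using the real java.util.concurrent ForkJoin API; the summing task, threshold, and array contents are illustrative choices, not from the deck.

```java
import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ForkJoinDemo {
    static class SumTask extends RecursiveTask<Long> {
        static final int THRESHOLD = 1_000;   // illustrative cutoff
        final long[] a; final int lo, hi;
        SumTask(long[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }

        @Override protected Long compute() {
            if (hi - lo <= THRESHOLD) {       // small piece: solve sequentially
                long s = 0;
                for (int i = lo; i < hi; i++) s += a[i];
                return s;
            }
            int mid = (lo + hi) >>> 1;        // divide ...
            SumTask left = new SumTask(a, lo, mid);
            left.fork();                      // spawn left half as a subtask
            long right = new SumTask(a, mid, hi).compute(); // right half in this thread
            return right + left.join();       // ... and conquer: wait and combine
        }
    }

    public static void main(String[] args) {
        long[] a = new long[1 << 20];
        Arrays.fill(a, 1L);
        long sum = ForkJoinPool.commonPool().invoke(new SumTask(a, 0, a.length));
        System.out.println(sum); // 1048576
    }
}
```

The framework schedules these tasks onto its worker threads itself; a worker that runs compute() directly for one half (as above) executes multiple tasks in one thread, avoiding context-switching overhead.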

Note 12: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: y$G_&;3^og
modified

Before

Front

ETH::2._Semester::PProg::Terminology
A lock is reentrant if it can be acquired (and released) multiple times by the same thread. If a lock is non-reentrant, trying to acquire it again might cause an exception or other problems.

Back

ETH::2._Semester::PProg::Terminology
A lock is reentrant if it can be acquired (and released) multiple times by the same thread. If a lock is non-reentrant, trying to acquire it again might cause an exception or other problems.

After

Front

ETH::2._Semester::PProg::Terminology
A lock is reentrant if it can be acquired (and released) multiple times by the same thread. If a lock is non-reentrant, trying to acquire it again might cause an exception or other problems.

Back

ETH::2._Semester::PProg::Terminology
A lock is reentrant if it can be acquired (and released) multiple times by the same thread. If a lock is non-reentrant, trying to acquire it again might cause an exception or other problems.
Field-by-field Comparison
Field: Text
Before: A lock is {{c1::reentrant}} if it can be {{c2::acquired (and released) multiple times by the same thread}}. If a lock is non-reentrant, trying to acquire it again might cause {{c3::an exception or other problems}}.
After: A lock is {{c1::reentrant}} if it can be {{c2::acquired (and released) multiple times by the same thread}}. If a lock is {{c1::non-reentrant}}, trying to acquire it again might cause {{c3::an exception or other problems}}.
Tags: ETH::2._Semester::PProg::Terminology
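
Editorial Java sketch (method names are made up): Java's intrinsic locks are reentrant, so the nested acquisition below succeeds; with a non-reentrant lock it would self-deadlock or fail.

```java
public class ReentrancyDemo {
    // Both methods lock "this". Because Java intrinsic locks are reentrant,
    // the thread already holding the lock in outer() may re-acquire it in
    // inner(). A non-reentrant lock would block forever (self-deadlock) or,
    // depending on the implementation, throw an exception here.
    synchronized void outer() { inner(); }
    synchronized void inner() { System.out.println("acquired twice by one thread"); }

    public static void main(String[] args) {
        new ReentrancyDemo().outer();
    }
}
```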

Note 13: ETH::2. Semester::PProg

Deck: ETH::2. Semester::PProg
Note Type: Horvath Cloze
GUID: zkq&n&o#}D
modified

Before

Front

ETH::2._Semester::PProg::Terminology
A lock is a token/resource that can be acquired by at most one thread at a time. Locks are typically used to enforce mutual exclusion by guarding/protecting a critical section. A lock can be acquired/locked by a thread, and is then held until it is released/unlocked. In Java, each object can be used as a lock (intrinsic/monitor lock).

Back

ETH::2._Semester::PProg::Terminology
A lock is a token/resource that can be acquired by at most one thread at a time. Locks are typically used to enforce mutual exclusion by guarding/protecting a critical section. A lock can be acquired/locked by a thread, and is then held until it is released/unlocked. In Java, each object can be used as a lock (intrinsic/monitor lock).

After

Front

ETH::2._Semester::PProg::Terminology
A lock is a token/resource that can be acquired by at most one thread at a time. Locks are typically used to enforce mutual exclusion by guarding/protecting a critical section. A lock can be acquired/locked by a thread, and is then held until it is released/unlocked. In Java, each object can be used as a lock (intrinsic/monitor lock).

Back

ETH::2._Semester::PProg::Terminology
A lock is a token/resource that can be acquired by at most one thread at a time. Locks are typically used to enforce mutual exclusion by guarding/protecting a critical section. A lock can be acquired/locked by a thread, and is then held until it is released/unlocked. In Java, each object can be used as a lock (intrinsic/monitor lock).
Field-by-field Comparison
Field: Text
Before: A {{c1::lock}} is a {{c2::token/resource that can be acquired by at most one thread at a time}}. Locks are typically used to {{c3::enforce mutual exclusion}} by guarding/protecting a critical section. A lock can be {{c4::acquired/locked}} by a thread, and is then held until it is {{c5::released/unlocked}}. In Java, each object can be used as a lock ({{c6::intrinsic/monitor lock}}).
After: A {{c1::lock}} is a {{c2::token/resource that can be acquired by at most one thread at a time}}. {{c1::Locks}}&nbsp;are typically used to {{c3::enforce mutual exclusion}} by guarding/protecting a critical section. A {{c1::lock}}&nbsp;can be {{c4::acquired/locked}} by a thread, and is then held until it is {{c5::released/unlocked}}. In Java, each object can be used as a {{c1::lock}}&nbsp;({{c6::intrinsic/monitor lock}}).
Tags: ETH::2._Semester::PProg::Terminology
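
Editorial sketch of an intrinsic lock guarding a critical section: the same counter as in the data-race sketch above, now with mutual exclusion enforced. Names are illustrative.

```java
public class IntrinsicLockDemo {
    static int counter = 0;
    static final Object lock = new Object(); // in Java, any object can serve as the lock

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                synchronized (lock) { // acquire: at most one thread enters
                    counter++;        // the guarded critical section
                }                     // release on block exit, even via exception
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter); // always 200000: mutual exclusion held
    }
}
```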