Edited, memorised or added to reading queue on 01-Jan-2024 (Mon)

Situations of the result of Hamming Decoding

Using the resultant \(n − k = 3\)-bit syndrome \(\mathbf{s}\), the Hamming decoder decides whether it thinks there are any bit errors in \(\hat{\mathbf{y}}\):
- If the syndrome is \(\mathbf{s} = \left[\begin{array}{ccc}0 & 0 & 0\end{array}\right]^{T}\), the Hamming decoder thinks there are no bit errors in \(\hat{\mathbf{y}}\) (it may be wrong, though). In this case, it outputs \(\hat{\mathbf{x}} = \left[\begin{array}{cccc}\hat{y}_{3} & \hat{y}_{5} & \hat{y}_{6} & \hat{y}_{7}\end{array}\right]^{T}\), since \(y_{3} = x_{1}\), \(y_{5} = x_{2}\), \(y_{6} = x_{3}\) and \(y_{7} = x_{4}\) in \(\mathbf{G}\).
- If the syndrome \(\mathbf{s}\) is not equal to \(\left[\begin{array}{ccc}0 & 0 & 0\end{array}\right]^{T}\), it is interpreted as a 3-bit binary number and converted to a decimal number \(i \in [1, 7]\). In this case, the Hamming decoder thinks that the \(i\)th bit in \(\hat{\mathbf{y}}\) has been flipped by a bit error (it may be wrong, though). The Hamming decoder flips the \(i\)th bit in \(\hat{\mathbf{y}}\) before outputting \(\hat{\mathbf{x}}=\left[\begin{array}{cccc}\hat{y}_3 & \hat{y}_5 & \hat{y}_6 & \hat{y}_7\end{array}\right]^{T}\). If there are multiple bit errors in the received codeword \(\hat{\mathbf{y}}\), the syndrome \(\mathbf{s}\) identifies which single bit of \(\hat{\mathbf{y}}\) can be toggled to give the legitimate codeword that is most similar to \(\hat{\mathbf{y}}\).
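The decoding rule above can be sketched in a few lines of Python. This is a minimal sketch, assuming the standard (7,4) Hamming layout implied by the text: parity bits at positions 1, 2 and 4, data bits at positions 3, 5, 6 and 7 (1-indexed), so the syndrome, read as a binary number, directly names the suspected error position.

```python
def hamming_decode(y):
    """Syndrome-decode a received (7,4) Hamming codeword.

    y: list of 7 received bits (position p is y[p-1]).
    Returns the 4 decoded data bits [y3, y5, y6, y7].
    """
    # Each syndrome bit is the XOR (parity) over the positions whose
    # binary index has the corresponding bit set.
    s = [0, 0, 0]
    for pos in range(1, 8):
        if y[pos - 1]:
            s[0] ^= (pos >> 2) & 1   # syndrome bit of weight 4
            s[1] ^= (pos >> 1) & 1   # syndrome bit of weight 2
            s[2] ^= pos & 1          # syndrome bit of weight 1

    # Read the syndrome as a decimal number i in [0, 7].
    i = (s[0] << 2) | (s[1] << 1) | s[2]

    y = list(y)
    if i != 0:
        # Nonzero syndrome: flip the i-th bit before extracting the data.
        y[i - 1] ^= 1

    # Output x = [y3, y5, y6, y7], the systematic data positions.
    return [y[2], y[4], y[5], y[6]]
```

For example, with the valid codeword `[0, 1, 1, 0, 0, 1, 1]` (data bits `[1, 0, 1, 1]`), flipping any single bit still decodes back to `[1, 0, 1, 1]`, since the syndrome points at the flipped position.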

Flashcard 7608436919564

Question
Using the resultant \(n − k\) = 3-bit syndrome \(\mathbf{s}\), the Hamming decoder decides if it thinks there are any bit errors in \(\hat{\mathbf{y}}\): [Answer all the situations of Hamming Decoding]
Answer

- If the syndrome is \(\mathbf{s} = \left[\begin{array}{ccc}0 & 0 & 0\end{array}\right]^{T}\), the Hamming decoder thinks there are no bit errors in \(\hat{\mathbf{y}}\) (it may be wrong, though). In this case, it outputs \(\hat{\mathbf{x}} = \left[\begin{array}{cccc}\hat{y}_{3} & \hat{y}_{5} & \hat{y}_{6} & \hat{y}_{7}\end{array}\right]^{T}\), since \(y_{3} = x_{1}\), \(y_{5} = x_{2}\), \(y_{6} = x_{3}\) and \(y_{7} = x_{4}\) in \(\mathbf{G}\).
- If the syndrome \(\mathbf{s}\) is not equal to \(\left[\begin{array}{ccc}0 & 0 & 0\end{array}\right]^{T}\), it is interpreted as a 3-bit binary number and converted to a decimal number \(i \in [1, 7]\). In this case, the Hamming decoder thinks that the \(i\)th bit in \(\hat{\mathbf{y}}\) has been flipped by a bit error (it may be wrong, though). The Hamming decoder flips the \(i\)th bit in \(\hat{\mathbf{y}}\) before outputting \(\hat{\mathbf{x}}=\left[\begin{array}{cccc}\hat{y}_3 & \hat{y}_5 & \hat{y}_6 & \hat{y}_7\end{array}\right]^{T}\). If there are multiple bit errors in the received codeword \(\hat{\mathbf{y}}\), the syndrome \(\mathbf{s}\) identifies which single bit of \(\hat{\mathbf{y}}\) can be toggled to give the legitimate codeword that is most similar to \(\hat{\mathbf{y}}\).









Throughout this book, we use one of the oldest teaching methods: dialogue.





Law of Total Probability

Law of Total Probability: If \(M_{i}\), \(i = 1, 2, 3, \ldots\), is a partition of \(\Omega\), then \(P(A) = \sum\limits_{i} P(A \cap M_{i}) = \sum\limits_{i} P(A \mid M_{i})P(M_{i})\)
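The law can be checked numerically. Below is a minimal sketch with a hypothetical two-urn setup: urn \(M_1\) is chosen with probability 0.3 and yields a red ball with probability 2/5, urn \(M_2\) with probability 0.7 and probability 1/4, and \(A\) is the event "the drawn ball is red".

```python
# Partition of the sample space: which urn was chosen.
p_M = {"M1": 0.3, "M2": 0.7}

# Conditional probability of A (red ball) given each urn.
p_A_given_M = {"M1": 2 / 5, "M2": 1 / 4}

# Law of total probability: P(A) = sum_i P(A | M_i) P(M_i)
p_A = sum(p_A_given_M[m] * p_M[m] for m in p_M)
# = 0.4 * 0.3 + 0.25 * 0.7 = 0.295
```

The key requirement is that the \(M_i\) form a partition: their probabilities sum to 1 and they are mutually exclusive, so every way for \(A\) to occur is counted exactly once.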





Flashcard 7609023859980

Question

Law of Total Probability: [...]

Answer

If \(M_{i}\), \(i = 1, 2, 3, \ldots\), is a partition of \(\Omega\), then \(P(A) = \sum\limits_{i} P(A \cap M_{i}) = \sum\limits_{i} P(A \mid M_{i})P(M_{i})\)










Bayes’ Theorem

Bayes’ Theorem: For any two events \(A\) and \(B\) with \(P(B) > 0\), \(P(A|B) = P(B|A) \frac{P(A)}{P(B)}\)
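A short numeric sketch of the theorem, using a hypothetical diagnostic-test scenario (all numbers are illustrative): \(A\) = "has the condition" with prior \(P(A) = 0.01\), \(B\) = "test is positive", \(P(B \mid A) = 0.95\), \(P(B \mid \bar{A}) = 0.05\), with \(P(B)\) obtained via the law of total probability.

```python
p_A = 0.01             # prior P(A): prevalence of the condition
p_B_given_A = 0.95     # P(positive | condition)
p_B_given_notA = 0.05  # P(positive | no condition), false-positive rate

# P(B) via the law of total probability over the partition {A, not A}.
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)

# Bayes' theorem: P(A | B) = P(B | A) * P(A) / P(B)
p_A_given_B = p_B_given_A * p_A / p_B
# roughly 0.161: even a positive result leaves A fairly unlikely,
# because the prior P(A) is small.
```

This illustrates why the ratio \(P(A)/P(B)\) matters: a rare event \(A\) keeps its posterior small unless \(P(B \mid A)\) dominates \(P(B)\).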





Flashcard 7609029365004

Question

Bayes’ Theorem: [...]

Answer
For any two events \(A\) and \(B\) with \(P(B) > 0\), \(P(A|B) = P(B|A) \frac{P(A)}{P(B)}\)









Expected Value of a Random Variable
The expected value of \(X\) is \(\displaystyle\operatorname{E}[X]=\int_{\Omega}X(\omega)P(d\omega)=\int_\mathbb{R}x\mu(dx)\)
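By the law of large numbers, the abstract integral \(\int_{\Omega} X(\omega)\,P(d\omega)\) is approximated by the empirical mean of i.i.d. samples of \(X\). A minimal Monte Carlo sketch, using a fair six-sided die (where \(\operatorname{E}[X] = 3.5\) exactly):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Draw many i.i.d. samples of X, a fair six-sided die.
samples = [random.randint(1, 6) for _ in range(100_000)]

# The sample mean approximates E[X] = (1+2+3+4+5+6)/6 = 3.5.
mean = sum(samples) / len(samples)
```

For a discrete \(X\) the integral over \(\mathbb{R}\) with respect to the distribution \(\mu\) reduces to the familiar weighted sum \(\sum_x x\,P(X = x)\), which the sample average converges to.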




Flashcard 7609038540044

Question
The expected value of \(X\) is [...]
Answer
\(\displaystyle\operatorname{E}[X]=\int_{\Omega}X(\omega)P(d\omega)=\int_\mathbb{R}x\mu(dx)\)









Variance of a Random Variable

The variance of \(X\) is:

\(\begin{aligned}\operatorname{Var}[X] & =\mathbf{E}\left[(X-\mathbf{E}[X])^2\right] \\& =\int_{\Omega}(X(\omega)-\mathbf{E}[X])^2 P(d \omega)=\int_{\mathbb{R}}(x-\mathbf{E}[X])^2 \mu(d x)\end{aligned}\)
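As with the expectation, the variance integral can be approximated empirically: average the squared deviations of i.i.d. samples from their sample mean. A sketch with a fair six-sided die, for which \(\operatorname{Var}[X] = \operatorname{E}[(X - 3.5)^2] = 35/12 \approx 2.917\):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Draw many i.i.d. samples of X, a fair six-sided die.
samples = [random.randint(1, 6) for _ in range(100_000)]

# E[X] is approximated by the sample mean.
mean = sum(samples) / len(samples)

# Var[X] = E[(X - E[X])^2], approximated by the mean squared deviation.
var = sum((x - mean) ** 2 for x in samples) / len(samples)
# exact value: 35/12 ≈ 2.9167
```

Note this is the population-style estimator (dividing by \(n\)), matching the definition \(\operatorname{E}[(X - \operatorname{E}[X])^2]\) directly; the unbiased sample variance would divide by \(n - 1\) instead.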





Flashcard 7609042734348

Question
The variance of \(X\) is: [...]
Answer

\(\begin{aligned}\operatorname{Var}[X] & =\mathbf{E}\left[(X-\mathbf{E}[X])^2\right] \\& =\int_{\Omega}(X(\omega)-\mathbf{E}[X])^2 P(d \omega)=\int_{\mathbb{R}}(x-\mathbf{E}[X])^2 \mu(d x)\end{aligned}\)


