
Post by Admin on May 4, 2019 23:19:10 GMT
Please use this thread to report errata in Part 3 of Algorithms Illuminated (Greedy Algorithms and Dynamic Programming). Please include the page and line number. Thanks! TR



Post by giorgio on Sept 24, 2019 2:02:13 GMT
I am reviewing the answer to Quiz 15.4 (page 80). Wouldn't answers b, c, and d be true (and not just c), based on the argument spelled out in footnote 28? If log(n) can be used interchangeably with log(m) for this kind of connected graph, wouldn't the running time of O(mn) be the same as O(m^2) or O(n^2) in big-O notation?



Post by Admin on Sept 27, 2019 13:39:22 GMT
I am reviewing the answer to Quiz 15.4 (page 80). Wouldn't answers b, c, and d be true (and not just c), based on the argument spelled out in footnote 28? If log(n) can be used interchangeably with log(m) for this kind of connected graph, wouldn't the running time of O(mn) be the same as O(m^2) or O(n^2) in big-O notation? giorgio: Remember that big-O notation suppresses leading constants and lower-order terms. It does not suppress constant factors in the exponent. Suppose m = n^2. Then it is not the case that m = O(n). (See Section 2.3.2 of Part 1.) But log m = 2 log n, so log m = O(log n). TR
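TR's distinction can be sanity-checked numerically. A quick sketch (mine, not from the book), assuming the dense case m = n^2 that the quiz answer worries about:

```python
import math

# If m = n^2, then m/n = n grows without bound, so m is NOT O(n),
# and O(mn) is genuinely different from O(n^2).
# But log m = log(n^2) = 2 log n, only a constant factor, so log m = O(log n).
for n in [10, 100, 1000]:
    m = n * n
    print(n, m / n, math.log(m) / math.log(n))
```

The middle column keeps growing with n; the last column is pinned at 2.0. That is exactly why the substitution is safe inside a logarithm but not outside one.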



Post by spamegg on Oct 4, 2019 6:08:32 GMT
In Problem 13.1 (page 21), it's given that jobs have length l_j and deadline d_j.
Then parts (b) and (c) mention something called "processing time" p_j.
p_j was never defined in the question.
I suppose p_j is supposed to be the same as l_j?
It's very confusing because at first I thought p_j was the same as completion time.
"Processing time" sounds a lot like "completion time".



Post by Admin on Nov 17, 2019 0:28:41 GMT
In Problem 13.1 (page 21), it's given that jobs have length l_j and deadline d_j.
Then parts (b) and (c) mention something called "processing time" p_j.
p_j was never defined in the question.
I suppose p_j is supposed to be the same as l_j?
It's very confusing because at first I thought p_j was the same as completion time.
"Processing time" sounds a lot like "completion time". spamegg: Good catch! Indeed, the problem should say "length" instead of "processing time" to be consistent with Section 13.2 I'll fix this in the next (second) printing. Thanks! TR



Post by dougrm on Dec 20, 2019 3:09:14 GMT
This is a suggestion, not pointing out any error. The proof(s) of correctness of the greedy scheduling algorithm (pages 12-19 or so) can be simplified a lot. Short proofs go like this:
Sections 13.4.1-13.4.3 and maybe some of the intro of Section 13.4 can be handled with:
Derive the change in cost due to an exchange.
Suppose for an optimal schedule there is some j such that w_j/l_j < w_{j+1}/l_{j+1}. Exchanging j and j+1 results in a lower cost: contradiction. Therefore every optimal schedule has non-increasing w_i/l_i. If there is a unique non-increasing schedule, it has to be optimal, since an optimum always clearly exists.
[No assumptions need to be made/explained, no lemmas need to be proven, it's basically just a one-liner. Also, this proves something for the general case without first assuming no ties. The no-ties result is a corollary.]
Now Section 13.4.4:
Suppose there are multiple non-increasing schedules. Then there is a run (or there are runs) of w_i/l_i with identical values. No amount of exchanging within a run will change the total cost, and any exchanging outside a run will result in a non-optimal, not-non-increasing schedule. So every schedule of non-increasing w_i/l_i has the same cost and is optimal (one of them, at least, had to be optimal to begin with, since again an optimum clearly exists).
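The exchange argument can be checked by brute force on a tiny instance. This is my own sketch (the weights and lengths are made up, not from the book): a schedule of these jobs is optimal exactly when its weight-length ratios are non-increasing.

```python
from itertools import permutations

def cost(schedule, w, l):
    """Weighted sum of completion times for the given job order."""
    total, finish = 0, 0
    for j in schedule:
        finish += l[j]            # completion time of job j
        total += w[j] * finish
    return total

w = [3, 1, 4, 2]   # hypothetical job weights
l = [2, 5, 1, 3]   # hypothetical job lengths
jobs = range(len(w))
best = min(cost(p, w, l) for p in permutations(jobs))
for p in permutations(jobs):
    ratios = [w[j] / l[j] for j in p]
    noninc = all(ratios[i] >= ratios[i + 1] for i in range(len(ratios) - 1))
    # optimal <=> non-increasing w/l ratios (here the ratios are distinct,
    # so exactly one permutation qualifies)
    assert noninc == (cost(p, w, l) == best)
```

With distinct ratios, any adjacent inversion strictly lowers cost when exchanged, so the sorted schedule is the unique optimum; the loop confirms this for every permutation.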



Post by dougrm on Dec 20, 2019 20:54:53 GMT
Page 14 section 13.4.2: "By assumption (1), the greedy schedule sigma schedules the jobs in order of index..."
That should be by assumptions (1) and (2), since under assumption (1) alone you can have multiple greedy schedules, not just the one with ascending indices that happened to show up in assumption (1).
Or you have to change assumption (1) to say that the jobs are indexed in the order selected by the GreedyRatio algorithm, and so are indexed in (a very specific, not just some arbitrary) non-increasing order of weight-length ratio.
Page 18: "As before, let sigma = 1,2,...,n denote the schedule computed by the GreedyRatio algorithm." Who knows what this is supposed to mean. Before, assumption (2) had to be included to make this true, or assumption (1) had to be changed to be exactly what is being "let"-ed now. In any case, a bunch of text is messed up and there are multiple choices for fixing it up, including just going with a simpler proof.



Post by dougrm on Dec 22, 2019 1:47:26 GMT
Page 36, Huffman algorithm: "Output: the Sigma-tree with minimum average leaf depth, representing the prefix-free binary code ..." Should read "Output: a Sigma-tree with minimum average leaf depth, representing a prefix-free binary code ...". The argmin operations need not have unique answers, potentially resulting in different trees with the same optimality properties. Theorem 14.2 and its proof get the wording right.
Edit: Note: my sentence about argmin should just be dropped. There are more ways than non-unique argmins to get multiple trees with the same average leaf depth: you could swap left and right, and you could shuffle the labeling among labels at the same depth.
The proof of Theorem 14.2 doesn't always get the wording right and also has an error:
page 42, before and after the Main Idea #1 box: "The first main idea ... outputs the best one. This step boils down ... a best Sigma-tree in which ... computing the best Sigma'-tree, ..." Should be "The first main idea ... outputs a best one. This step boils down ... a best Sigma-tree in which ... computing a best Sigma'-tree, ..."
Page 44, just past the middle: "The Huffman algorithm outputs...that it is the best such tree:" should be "The Huffman algorithm outputs...that it is a best such tree:"
Page 46, last sentence: "it associates the optimal Sigma'-tree with the optimal ..." should be "it associates each optimal Sigma'-tree with an optimal ..."
Page 48, last paragraph: "outputs the best-possible tree" should be "outputs a best-possible tree".
The error is on page 48, in the equation/inequality in the middle and the text following it. You said in footnote 13 (pg 47) that {x,y} and {a,b} could overlap and the proof would still hold. That isn't true. Let a != b and y == a, x == b. Then p_y - p_b < 0, not >= 0. What is true is that (p_x + p_y) - (p_a + p_b) >= 0, but not with the terms split apart as in the text.
Finally, just wondering: why use alpha and beta instead of just alpha and alpha^{-1}? Also, if you just use alpha, then each of the correspondences at the bottom of page 46 looks better with a single double-arrow line topped with an alpha, instead of the strange two lines with opposing half arrowheads and with alpha and beta above and below.
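A concrete instance of the "a best tree" point, with made-up tied frequencies (my own sketch, not from the book): two merge orders the Huffman algorithm could legally take produce structurally different trees with identical total cost.

```python
# Symbol frequencies with ties: a=1, b=1, c=2, d=2.
p = [1, 1, 2, 2]
# Legal merge order 1: a+b, then (ab)+c, then +d  -> leaf depths 3, 3, 2, 1.
# Legal merge order 2: a+b and c+d, then combine  -> leaf depths 2, 2, 2, 2.
cost1 = sum(pi * di for pi, di in zip(p, [3, 3, 2, 1]))
cost2 = sum(pi * di for pi, di in zip(p, [2, 2, 2, 2]))
assert cost1 == cost2 == 12   # different trees, same (optimal) cost
```

Both runs are valid executions of the algorithm (at the second merge, the pool contains three subtrees of frequency 2, so either pair is a legal argmin), so both trees are optimal, and neither is "the" best tree.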



Post by Admin on Dec 27, 2019 15:49:56 GMT
dougrm: Thanks for all the comments! This is extremely useful feedback, and I'll take it into consideration for the next (2nd) printing (along with any other comments you post in the future). One general comment: in this book series, many proofs are deliberately presented in a way that (I hope) is accessible even to readers who are quite uncomfortable with math. This sometimes results in proofs longer than what is strictly necessary. TR



Post by dougrm on Dec 28, 2019 2:23:36 GMT
Ok, a typo and trivialities:
Typo: page 66, line 1 of the pseudocode: "T = nullset" should be "T := nullset"
Trivialities: line 12 of the pseudocode: "y elementof V - X" really ought to be "y not elementof X"; otherwise, the reader's first thought might be how to produce V - X, which they shouldn't ever be thinking of.
Also, I suppose an earlier book talked about implementing sets with O(1) insertions and accesses. It might be nice to remind readers that these exist (it looks like such implementations are assumed for the pseudocode). Consider a C++ user who might think of using std::set to implement the various sets of the algorithm. Its operations aren't O(1), and so it messes a little with the literal text of the runtime analysis. And are the O(1) operations guaranteed, or just highly likely? If such issues really are issues (I certainly don't know), then they might affect the exact statement of the pseudocode's runtime analysis.
Also, rather than having the adjacency list comment in footnote 17 page 67, it might be nice to define an O(1) mapping from vertex to incident edge set (like sticking an adjacency list on every vertex), introduce it in the Input (or Initialization), and use it in line 12 of the pseudocode. Then the real runtime cost is apparent right in the code instead of only through the supporting text.
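To make both suggestions concrete, here is a hedged Python sketch (mine, not the book's pseudocode, and it is the lazy-deletion heap variant rather than the decrease-key version of Section 15.3.4): X is a hash set with expected-O(1) membership tests, and the vertex-to-incident-edges mapping is an explicit adjacency list built up front, so its cost is visible in the code itself rather than only in a footnote.

```python
import heapq

def prim_mst_cost(n, edges):
    """Total MST cost of a connected graph on vertices 0..n-1.

    edges: list of (u, v, cost) triples.
    """
    adj = [[] for _ in range(n)]          # O(1) vertex -> incident edges
    for u, v, c in edges:
        adj[u].append((c, v))
        adj[v].append((c, u))
    X = {0}                               # spanned vertices; hash set gives
                                          # expected-O(1) "w not in X" tests
    total = 0                             # running cost of T (T := empty set)
    heap = list(adj[0])
    heapq.heapify(heap)
    while len(X) < n:
        c, w = heapq.heappop(heap)
        if w in X:                        # stale entry: w already spanned
            continue
        X.add(w)                          # (v, w) crosses X; add it to T
        total += c
        for e in adj[w]:                  # push w's incident edges
            heapq.heappush(heap, e)
    return total
```

On the small graph with edges (0,1,1), (1,2,2), (0,2,4), (2,3,3), this returns 6 (the edges of cost 1, 2, and 3). Note Python's set gives expected, not worst-case, O(1) operations, which is exactly the guaranteed-vs-highly-likely caveat raised above.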



Post by dougrm on Dec 30, 2019 19:14:05 GMT
Page 55, last sentence: "We can assume that the input graph has at most one edge between each pair of vertices;..." Indeed we can, the setbased definition of graph guarantees it (parallel edges are impossible to represent). So this sentence should probably be dropped. Alternatively, you could mention that other definitions of graph could allow multiple edges between a pair of vertices and then you could choose the cheapest.
EDIT: Never mind; here and on page 168, footnote 1, you are obviously (well, now it's obvious) saying we are justified in using this representation that does not allow parallel edges because ... . I was confused about the starting situation of the "We can assume", thinking that the definition was already a given when really the discussion was about selecting the definition.



Post by dougrm on Dec 30, 2019 19:36:25 GMT
Pages 70-71, Lemma 15.5, and pages 66-67 and any others that mention w^*: Lemma 15.5 mentions v^* and w^* as though those were used in the Section 15.3.4 pseudocode for the heap-based Prim algorithm. However, the pseudocode uses v and w^*, not v^* and w^*. That's a problem, but a question is why use w^* at all? Just use w, and then the reader doesn't need to keep track of *'s; or you might use v^* and hammer home the connection between *'s and vertices of the algorithm's selected path. Probably you should go one way (no *'s) or the other (all *'s), but not the mixture in 15.3.4.
Things get more mixed up on page 75, in the proof of Theorem 15.6: now we're back to plain v and w for the algorithm's vertices, and the ^* has shifted to T^*. I think this is further motivation to go with the plain version of v and w everywhere.



Post by dougrm on Dec 31, 2019 2:27:46 GMT
Not sure where I'm going with this, but the proof of Lemma 15.5 (page 70) clearly yields a result (nearly a paraphrase of the proof) that is sharper and (I think) more useful than the MBP. It proves that at each step of Prim's algorithm where (v,w) is added to T with (v,w) crossing X, the cost c_{v,w} is not greater than the cost of every X-crossing edge of any path between v and w.

With the sharper result, you can quickly prove the validity of Prim's algorithm in the general case. With the MBP result you might also be able to prove validity, but not as follows. Given any spanning tree T^* (this also works if you use any MST), run Prim's algorithm, and at each step replace all X-crossing edges on the v-w path in T^* with (v,w). By the sharper lemma, the now-modified T^* does not have a higher cost than the unmodified T^*. Also, since the (v,w) of the modified T^* is inside the X of all subsequent steps, it will never be changed by any later swaps (the swapped edges have to cross X). Keep running the algorithm and modifying T^*. After k steps, T and T^* have the first k edges of the algorithm in common. At the end, the final T^* is identical to Prim's T and does not have a higher cost than the original T^*. So T's cost is not greater than any spanning tree's cost, and T is therefore an MST.

You couldn't push this proof through with the weaker MBP version of Lemma 15.5, since the T^* edge to be swapped might be inside the X and damage the growing identity between T and the modified T^*. Sure, you could prove it couldn't be in X, but that might be equivalent to reproving the sharper version of Lemma 15.5 (or maybe not). So maybe I'm saying MBP is too much abstraction, or maybe that focusing on the distinct-cost case can lead to more work and some odd complexity.



Post by dougrm on Jan 4, 2020 19:36:40 GMT
Page 146, first line: "the induction is on the value of i+j..." I think you meant to say "the induction is on the value of (i-1)*n + j - 1...", possibly with some juggling around that last -1 depending on how you base the induction. Also, the line goes on with "...(the subproblem size)...", but is "the subproblem size" a correct phrase (or maybe the best one)? It seems like "(the number of subproblems solved so far)" would be correct (better?), again with some possible off-by-one juggling.

