Posted by Hari Shanmugadhasan on 13 April 2014 at 13:59
Page 42 of the April 11 draft says, "In DB2 10, all of these
getpages were classified as random." This directly contradicts page 135 of SG24-8182, which says:
"Prior to DB2 9, if a page was read in by list prefetch or dynamic prefetch and later touched by a random getpage
request, then it was re-classified as random.
In DB2 9 and 10, these buffers remained classified as sequential, because that is how they were brought in."
The real V11 behavior should be explained.
Posted by Hari Shanmugadhasan on 13 April 2014 at 14:07
Page 136 of SG24-8182 says:
"DB2 11 basically goes back to the old re-classification strategy, where if a page brought in by dynamic or list prefetch is requested by a random getpage at a later point, we will pull that buffer off of the sequential LRU chain, and classify it as random."
Which effectively means V11 goes back to V8 behavior. Why? How is it an improvement over V9 and V10, which were described as improvements over V8? What is different from the V8 behavior given "basically"?
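To make the two strategies quoted above concrete, here is a minimal Python sketch of the contrast. All names (`BufferPool`, `prefetch`, and so on) are hypothetical illustrations, not DB2 internals; the real buffer manager is far more elaborate. The point is only the difference between reclassifying on a later random getpage (V8/V11) and keeping the classification the page arrived with (V9/V10):

```python
# Hypothetical sketch of getpage classification; not actual DB2 code.
from collections import OrderedDict

class BufferPool:
    def __init__(self, reclassify_on_random_getpage):
        # Separate LRU chains for sequentially and randomly read pages.
        self.seq_lru = OrderedDict()   # pages brought in by prefetch
        self.rand_lru = OrderedDict()  # pages brought in by random I/O
        # V8/V11 behavior: True  -> a random getpage pulls the buffer
        #                           off the sequential LRU chain.
        # V9/V10 behavior: False -> classification stays as the page
        #                           was brought in.
        self.reclassify = reclassify_on_random_getpage

    def prefetch(self, page):
        self.seq_lru[page] = True      # arrived via list/dynamic prefetch

    def random_getpage(self, page):
        if page in self.seq_lru and self.reclassify:
            del self.seq_lru[page]     # pull off the sequential chain
            self.rand_lru[page] = True # reclassify as random
        elif page not in self.seq_lru and page not in self.rand_lru:
            self.rand_lru[page] = True # synchronous random read

    def classification(self, page):
        return "random" if page in self.rand_lru else "sequential"

v8_v11 = BufferPool(reclassify_on_random_getpage=True)
v9_v10 = BufferPool(reclassify_on_random_getpage=False)
for bp in (v8_v11, v9_v10):
    bp.prefetch("P1")        # page arrives via prefetch
    bp.random_getpage("P1")  # later touched by a random getpage

print(v8_v11.classification("P1"))  # random
print(v9_v10.classification("P1"))  # sequential
```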
Posted by Hari Shanmugadhasan on 13 April 2014 at 14:21
Page 42 of the April 11 draft says:
"By aligning the getpage classification with prefetch activity, you can count on the sequential synchronous I/O being a problem indicator."
Why wasn't it before? The classification of the stolen page is unchanged (sequential) since it was stolen BEFORE the getpage. So the counter seems just as useful as before.
Posted by Hari Shanmugadhasan on 13 April 2014 at 14:25
Page 42 of the April 11 draft says:
"Fast log apply and incremental copy are examples where the getpages are still classified as random."
This seems highly unlikely. If anything, the pages stay classified as sequential and remain so after a random getpage; that is, they retain the V10 behavior.
Posted by Hari Shanmugadhasan on 13 April 2014 at 14:38
Given my earlier comments about page 42, it looks like section "2.6.1 Getpage classification" should be reexamined carefully regarding its claims about V10 and V11 behavior and their implications, as well as its wording.
Posted by Hari Shanmugadhasan on 14 April 2014 at 11:03
The April 13 17:32 draft appeared after my five April 13 comments about the April 11 draft, so, not surprisingly, none of my comments have been addressed.
To explain my fourth comment's "seems highly unlikely": I thought it highly unlikely that FLA and Incremental COPY would be designed so that their pages flush the random pages from the BP, since their own pages seem relatively unimportant once they have been used. Similarly, having any utility's list prefetch flush random pages seems bad.
Posted by Hari Shanmugadhasan on 17 April 2014 at 13:32
My post #3 is addressed by the April 16 draft adding on page 42:
"However the sequential synchronous I/O counter was only incremented for sequential prefetch. The statistic was always zero in the case of dynamic and list prefetch. herefore [sic], the statistic was of limited use."
Note the missing "T" typo.
Does it now also count the sequential synchronous I/Os issued before dynamic prefetch starts, in addition to stolen pages?
Posted by Hari Shanmugadhasan on 17 April 2014 at 13:57
The April 16 draft attempts to address my #4 and #6 posts with a new paragraph at the end of page 42, which seems to recognize that the claimed design is destructive; but since my posts #1 and #2 have not been addressed, it seems suspect. It also includes:
"avoid the list prefetch I/O as much as possible if the same page is being updated repetitively."
Except FLA already sorts the log data to group repeated page updates together.
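To illustrate why the sort matters: because fast log apply sorts log records by page (preserving log order within each page) before applying them, all repeated updates to one page are applied from a single read, so avoiding list prefetch for "repetitively updated" pages buys little. A minimal sketch with a hypothetical record format, not the actual DB2 log layout:

```python
# Hypothetical log records: (lsn, page_id, change); not the real DB2 format.
log = [(1, "P7", "a"), (2, "P3", "b"), (3, "P7", "c"),
       (4, "P3", "d"), (5, "P7", "e")]

# FLA-style sort: group by page, keep log (LSN) order within a page.
sorted_log = sorted(log, key=lambda rec: (rec[1], rec[0]))

reads = 0
last_page = None
for lsn, page, change in sorted_log:
    if page != last_page:
        reads += 1          # each distinct page is read only once
        last_page = page
    # apply 'change' to the in-memory copy of 'page' here

print(reads)  # 2 page reads for 5 log records
```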
See also the "iOg" typo.
Posted by Hari Shanmugadhasan on 17 April 2014 at 14:27
Further to my posts #1 and #2, page 42:
"list prefetch that were not changed in DB2 11 to classify the getpages as sequential. . . . Only disorganized index scans and RID list scans were changed"
directly contradicts SG24-8180 page 389:
"Also, when DB2 is using list prefetch to read a
disorganized index or to read pages in a RID list, the Getpages will not be classified as sequential. "
Posted by Hari Shanmugadhasan on 17 April 2014 at 14:34
The April 16 page 42 addition:
"Incremental Copy . . . or even use MRU like Image Copy"
seems to contradict SG24-8180 page 389:
"to Most Recently Used (MRU) for the COPY utility.
DB2 11 further improves performance by expanding the MRU buffering to the UNLOAD utility and to the RUNSTATS utility for table spaces and indexes. In addition, the MRU processing will also be used for the UNLOAD phase of the following utilities"
Incremental is just a COPY option. No list prefetch exceptions are listed.
Posted by Hari Shanmugadhasan on 22 April 2014 at 12:10
The April 22 draft doesn't address the outstanding issues with page 42.
Posted by Hari Shanmugadhasan on 30 April 2014 at 15:07
The April 28 draft doesn't address the outstanding issues with page 42. Not even the typos.
I just noticed this typo on page 42:
"Fat Log Apply is active by default"
It should be "Fast" not "Fat."
Posted by Hari Shanmugadhasan on 7 May 2014 at 16:03
The May 7 draft doesn't address the outstanding issues with page 42. Not even the typos.