PRT.SchedulersDataTracking History


November 21, 2012, at 05:17 PM by 24.130.186.152 -
Added lines 1-3:

[[VMS.SchedulersDataTrackingMeeting1 | Meeting Nov 20, 2012]]
August 27, 2012, at 06:09 PM by 24.130.186.152 -
Added lines 10-12:

Here's a VMS project containing a blocked version of matrix multiply, written in the SSR language. A visualization tool is available for it that shows the cache behavior of each scheduled unit of work:
[[http://hg.opensourceresearchinstitute.org/cgi-bin/hgwebdir.cgi/VMS/VMS_Projects/VMS_Projects__MC_shared/SSR/SSR__Blocked_Matrix_Mult__MC_shared__Proj/| SSR Matrix Mult with instrumentation]]
August 03, 2012, at 05:41 AM by 24.130.186.152 -
Deleted line 0:
August 03, 2012, at 05:40 AM by 24.130.186.152 -
Added lines 1-9:

A conversation between Sean and Carole-Jean Wu of Princeton:

What about something that tracks data somehow, to detect affinity of work units?

Can you give more context for this?

Would be nice if the scheduler (the assigner of work to cores) could identify blocks of data and know which blocks a given unit of work accesses. It could then track which cache each block currently resides in, by recording that a block moves into the cache of the core whose work-unit last touched it. Something along those lines.
Just think what it would do for reducing cache misses, or, on a super-computer, for reducing the amount of time spent waiting for data to transfer. If some way could be worked out to calculate the probability that the data accessed by a candidate work-unit already resides in a target core's cache, then the optimal placement of work onto cores could be quickly searched for or calculated inside the scheduler. For memory-limited applications with reliable access patterns, this could have a major impact on performance.
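The idea above can be sketched in a few lines. This is a hypothetical illustration only, not VMS's actual API: all names (`AffinityScheduler`, `affinity`, `assign`) are made up, and "probability" is approximated crudely as the fraction of a work-unit's blocks last assigned to a given core's cache.

```python
class AffinityScheduler:
    """Toy sketch of affinity-aware work placement, assuming the scheduler
    can see which data blocks each work-unit accesses."""

    def __init__(self, num_cores):
        self.num_cores = num_cores
        # block_location[block_id] = core whose cache is assumed to hold the block
        self.block_location = {}

    def affinity(self, work_unit_blocks, core):
        """Estimated fraction of the work-unit's blocks already in core's cache."""
        if not work_unit_blocks:
            return 0.0
        hits = sum(1 for b in work_unit_blocks
                   if self.block_location.get(b) == core)
        return hits / len(work_unit_blocks)

    def assign(self, work_unit_blocks):
        """Place the work-unit on the core with the highest estimated affinity,
        then record that that core's cache now holds the accessed blocks."""
        best = max(range(self.num_cores),
                   key=lambda c: self.affinity(work_unit_blocks, c))
        for b in work_unit_blocks:
            self.block_location[b] = best
        return best
```

A work-unit that shares blocks with an earlier one is then steered to the same core, e.g. `assign(['A', 'C'])` lands on whichever core previously received `assign(['A', 'B'])`. A real scheduler would also need to model eviction, shared caches, and block sizes, none of which this sketch attempts.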