Add support for pcm-latency command when hyper threading is off #702

Open
matte21 opened this issue Mar 11, 2024 · 9 comments

matte21 commented Mar 11, 2024

Hello, I need to run a program and measure the information collected by pcm-latency.

But I need to run the program on a server with hyper-threading off. pcm-latency doesn't support that at the moment: https://github.com/intel/pcm/blob/master/src/pcm-latency.cpp#L385-L389

Can you please add support for that, if it's easy to do so? If not, what are the reasons that led to the choice of not supporting that?

rdementi (Contributor) commented

Thanks for asking. I will check with the developer.

matte21 (Author) commented Mar 21, 2024

Any update on this?

rdementi (Contributor) commented

Thank you for your patience, and I apologize for the delay. We will provide an update soon.

sravisun commented

Hi @matte21. I am looking into the issue. Can you please provide some information on what server you are running on and what you are trying to monitor?
Thank you

matte21 (Author) commented Mar 28, 2024

Hi @sravisun.

I'm using a two-socket system where each socket is a Xeon Silver 4114 CPU.

The OS is Ubuntu 22.04.

pcm is at release 202201.

I have a bunch of apps, and for each one I'm trying to measure its sensitivity to memory latency. In practice, this means that for each app I generate the same load multiple times, each time with a different uncore frequency (to simulate different RAM access latencies), and observe how much the app's latency and throughput degrade as memory latency increases.
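
For context, a minimal sketch of pinning the uncore frequency via the intel_uncore_frequency sysfs driver (this is just one way to do it; it assumes that driver is loaded, and the 1.8 GHz value and the package_00_die_00 path are purely illustrative):

# Illustrative sketch: pin package 0's uncore frequency to 1.8 GHz (values are in kHz)
echo 1800000 > /sys/devices/system/cpu/intel_uncore_frequency/package_00_die_00/min_freq_khz
echo 1800000 > /sys/devices/system/cpu/intel_uncore_frequency/package_00_die_00/max_freq_khz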

The problem is that I want to make sure the cache hit rate (at every cache level) is low. Otherwise, an app might spuriously appear insensitive to memory latency when in fact it's just hitting the caches very frequently.

So what I'd like to monitor is the cache hit rate at every cache level. The vanilla pcm command only reports that for the L2 and L3 caches, so I wanted to use pcm-latency (according to the README it can do what I want, although I haven't verified that). The problem is that my server has hyper-threading off (and I need it to be off), so I hit this case: https://github.com/intel/pcm/blob/master/src/pcm-latency.cpp#L385-L389
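
For reference, a plain invocation of both tools looks roughly like this (binary names as built from the pcm repo; pcm-latency is the one that exits at the check linked above when cores are offline):

# pcm reports the L2/L3 hit ratios; pcm-latency is blocked by the offline-core check
sudo ./pcm
sudo ./pcm-latency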

sravisun commented Apr 1, 2024

Hi @matte21

Yes, at the time it was written we put in a check that blocks runs with offline cores, since they were not supported.
How are you disabling hyper-threading? Are you turning it off directly in the BIOS?
We will need to verify whether we can remove the offline-core check in case we cannot work around this another way. I will get back to you on that very shortly.

matte21 (Author) commented Apr 2, 2024

Hi @sravisun.

To disable hyper-threading, I update the GRUB config by appending nosmt to the GRUB_CMDLINE_LINUX_DEFAULT variable, run update-grub, and then reboot the machine.
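
Concretely, the steps look roughly like this (the existing quiet splash entries are just an illustrative Ubuntu default):

# In /etc/default/grub, append nosmt to the kernel command line, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nosmt"
sudo update-grub
sudo reboot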

Besides the no-HT use case we're discussing, an additional use case has emerged since I opened the issue: I need pcm-latency on a server where the number of online cores differs from the number of logical cores. I have the same two-socket server, but I manually offline all the cores in just one of the two sockets by running

echo 0 > /sys/devices/system/cpu/cpu<i>/online

for all cores in said socket. I might need to do this both with HT on and off.
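
For completeness, a sketch of how the whole socket can be offlined (using lscpu to enumerate the cores of socket 1 is just one way to do it, and the socket id is illustrative):

# Illustrative sketch: offline every core that lscpu reports on socket 1
for c in $(lscpu -p=CPU,SOCKET | grep -v '^#' | awk -F, '$2 == 1 {print $1}'); do
  echo 0 > /sys/devices/system/cpu/cpu${c}/online
done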

sravisun commented Apr 4, 2024

Sure, we will update the code to remove the offline-cores check. I will let you know once it is ready.
Thank you

matte21 (Author) commented Apr 8, 2024

Thank you!
