SLDRT CPU utilization when running a model in Kernel mode
I am running SLDRT simulations in Kernel mode on a 3rd-generation Core i5 CPU (4 cores). While the full model is running, CPU utilization shows as 100% on all cores, and if another background task runs or I try to monitor something, the PC often crashes.
Can you please let me know how the system resources are used when a model is run in Kernel mode?
I would also like to know: if I upgrade the PC that I use, which system parameters should I focus on (CPU, hard drive, etc.)?
Jan Houska on 3 May 2023
The fact that CPU utilization shows as 100% while a model runs in Kernel mode is due to the CPU power state being set to C0 (as defined by ACPI) for as long as the model is running. This is necessary to achieve minimum CPU latency in response to asynchronous events (such as the timer interrupt) and thus the best possible real-time performance.
Although the OS reports this as 100% CPU utilization, the CPUs are not really 100% utilized. They are merely kept in the highest power state to avoid any delay caused by transitioning out of lower-power states when an asynchronous event occurs. The CPUs still serve all operating system tasks as usual; they just do not go idle when a task finishes.
This behavior will not change if you upgrade to a more powerful CPU. That is, the OS will still report 100% CPU utilization no matter how powerful your CPUs are or how much they are actually utilized.
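The latency difference between letting a core go idle and keeping it busy can be illustrated with a small user-space sketch. This is not SLDRT code (the SLDRT kernel-mode driver works at a much lower level); it is a hypothetical Python comparison of waking from a sleep versus spinning in a tight loop until a deadline. The spinning variant is what the OS reports as 100% CPU utilization, even though no useful work is being done:

```python
import time

def sleep_wait(deadline):
    # Let the OS idle the core; waking up incurs scheduler and
    # power-state transition latency.
    now = time.perf_counter()
    if deadline > now:
        time.sleep(deadline - now)
    return time.perf_counter() - deadline  # overshoot past the deadline

def busy_wait(deadline):
    # Spin in a tight loop: the core stays in its highest power state (C0),
    # so the deadline is observed with minimal latency. The OS reports this
    # spinning as 100% CPU utilization even though no real work is done.
    while time.perf_counter() < deadline:
        pass
    return time.perf_counter() - deadline

period = 0.005  # a hypothetical 5 ms real-time timer period
sleep_over = max(sleep_wait(time.perf_counter() + period) for _ in range(50))
busy_over = max(busy_wait(time.perf_counter() + period) for _ in range(50))
print(f"worst overshoot, sleep-wait: {sleep_over * 1e6:.1f} us")
print(f"worst overshoot, busy-wait:  {busy_over * 1e6:.1f} us")
```

On most systems the busy-wait overshoot is a few microseconds at most, while the sleep-wait overshoot depends on the OS timer resolution and power-state exit latency, which is exactly the delay the C0 policy avoids.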
If you experience crashes that you believe may be related to SLDRT, you should contact technical support and provide a more detailed description of the crash: what it looks like, when it typically happens, a model that reproduces it, etc. Without that kind of information, an "often crashes" situation is very difficult to diagnose.