I have built a GHC plugin to symbolically execute Haskell Core, and I want to optimize it. Profiling the plugin seems like a good starting point, but I'm running into issues because the plugin runs inside GHC: I cannot simply supply the flags +RTS -p -RTS to my plugin as I would for an executable. My plugin is built using stack, and ideally I would profile it while it compiles another package via stack build.
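For reference, the direct route one would like is forwarding the RTS flags to each ghc invocation stack makes. A sketch (this only yields a profile if the ghc binary itself was built with profiling support, which a stock bindist is not):

```shell
# Sketch: forward RTS profiling flags to every ghc invocation stack makes.
# Only produces a ghc.prof if ghc was built with the profiling runtime.
stack build --ghc-options '+RTS -p -RTS'
```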
What I’ve tried
I’ve manually compiled a version of GHC that supports profiling, so that I can pass the flags +RTS -p -RTS to GHC when it compiles using my plugin. AFAIK, there is no registry from which one can download such a build. I’ve compiled GHC with the following command:
Note that the build system would complain to me that the dynamic “flavour” is not possible when compiling a profiling build, hence I added no_dynamic_ghc.
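In Hadrian terms, that amounts to something like the following (a sketch; profiled_ghc and no_dynamic_ghc are Hadrian flavour transformers, and the base flavour and parallelism are a matter of taste):

```shell
# Sketch: build a profiling-capable GHC with Hadrian.
# profiled_ghc links the ghc executable with the profiling runtime;
# no_dynamic_ghc avoids the dynamic-flavour clash mentioned above.
./boot && ./configure
hadrian/build -j --flavour=default+profiled_ghc+no_dynamic_ghc binary-dist-dir
```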
When I let stack build a program with this GHC build (version 9.10.2-profile), for whose compilation I would like profiling results, it seems to work until it reaches the package th-orphans, at which point I get the following error:
Interpreter failed to load profiled static library HSth-compat-0.1.6-DEIPkwUWmB23kAHF3cPtBW.
Trying dynamic library instead. If this fails try to rebuild libraries with profiling support.
libHSth-compat-0.1.6-DEIPkwUWmB23kAHF3cPtBW.so: cannot open shared object file: No such file or directory
CallStack (from -prof):
GHC.Utils.Panic.Plain.cmdLineErrorIO1 (<no location info>)
GHC.Linker.Loader.$wload_dyn (<no location info>)
GHC.Linker.Loader.$wloadPackages' (<no location info>)
GHC.Linker.Loader.initLoaderState1 (<no location info>)
GHC.Linker.Loader.loadDecls1 (<no location info>)
GHC.Driver.Main.hscCompileCoreExpr1 (<no location info>)
...
How to proceed?
Is this really the best way to profile a GHC plugin? Perhaps I missed something; this seems awfully complex…
If this is the best way to proceed, how would I resolve the above compilation issue? It suggests compiling the libraries with profiling support, but the GHC build only has an option to disable profiling in libs (i.e. no_profiled_libs), suggesting the libraries are profiled by default (a profiled_libs transformer doesn’t exist).
You can probably get it to dynamically profile your plugin execution only, and not GHC itself.
It will produce a profile whenever your plugin runs (so yes, when running stack build of another package using that plugin).
The drawback is that this profiling mode is still in its early days, and it may be harder to get good results, since the stack sampling is less precise than the cost-centre approach.
In any case, if you keep trying let us know how it goes. BTW, I can probably help more during the week.
I’ve compiled ghc-9.12.2 and this one indeed supports prof+dyn! Sadly, I still get a similar error when compiling th-orphans:
[1 of 2] Compiling Language.Haskell.TH.Instances.Internal
[2 of 2] Compiling Language.Haskell.TH.Instances
<no location info>: warning: [-Wmissed-extra-shared-lib]
libHSth-compat-0.1.6-9cZvlw5vbU6HKeHkyKFeUx.so: cannot open shared object file: No such file or directory
It's OK if you don't want to use symbols from it directly.
(the package DLL is loaded by the system linker
which manages dependencies by itself).
<no location info>: warning: [-Wmissed-extra-shared-lib]
libHSth-abstraction-0.7.1.0-3uvgXgYkiPnLsSPlQnaceH.so: cannot open shared object file: No such file or directory
It's OK if you don't want to use symbols from it directly.
(the package DLL is loaded by the system linker
which manages dependencies by itself).
<no location info>: warning: [-Wmissed-extra-shared-lib]
libHSth-lift-0.8.6-ADU6QIzUiyf8Ug6Dgk0dIh.so: cannot open shared object file: No such file or directory
It's OK if you don't want to use symbols from it directly.
(the package DLL is loaded by the system linker
which manages dependencies by itself).
<no location info>: warning: [-Wmissed-extra-shared-lib]
libHSsafe-0.3.21-LDJgBWINeaq8cH0MVNaY4Q.so: cannot open shared object file: No such file or directory
It's OK if you don't want to use symbols from it directly.
(the package DLL is loaded by the system linker
which manages dependencies by itself).
<no location info>: warning: [-Wmissed-extra-shared-lib]
libHSsyb-0.7.3-BLUIubbRyT5Df5QBZXo7UY.so: cannot open shared object file: No such file or directory
It's OK if you don't want to use symbols from it directly.
(the package DLL is loaded by the system linker
which manages dependencies by itself).
<no location info>: warning: [-Wmissed-extra-shared-lib]
libHSth-expand-syns-0.4.12.0-DEEoniP898sEFF5iy4fZgt.so: cannot open shared object file: No such file or directory
It's OK if you don't want to use symbols from it directly.
(the package DLL is loaded by the system linker
which manages dependencies by itself).
<no location info>: warning: [-Wmissed-extra-shared-lib]
libHSth-reify-many-0.1.10-7PN7hl8onMWGmB5CjTyOcP.so: cannot open shared object file: No such file or directory
It's OK if you don't want to use symbols from it directly.
(the package DLL is loaded by the system linker
which manages dependencies by itself).
<no location info>: error:
/tmp/stack-44276fc4ff31a77c/th-orphans-0.13.16/src/Language/Haskell/TH/Instances.hs:1:1: fatal:
cannot find object file ‘.stack-work/dist/x86_64-linux/ghc-9.12.2/build/Language/Haskell/TH/Instances/Internal.p_dyn_o’
while linking an interpreted expression
This one seems a bit more descriptive, as it lists all the packages for which it cannot find a shared library; those are only warnings, though. The actual error at the end complains that the previous module, Language.Haskell.TH.Instances.Internal (whose compilation did succeed without warnings), did not produce a profiled dynamic object file (.p_dyn_o).
I’m not sure where to continue down this avenue, but maybe you have another suggestion based on these results? I’ll try to explore ghc-stack-profiler in the meantime to see what results it gives.
Sadly, ghc-stack-profiler is also giving me errors. I can run the program with profiling enabled just fine, which generates the expected ghc-9.12.2.eventlog file. When I try to convert this file with ghc-stack-profiler-speedscope, I get the following error:
$ ghc-stack-profiler-speedscope ghc-9.12.2.eventlog
ghc-stack-profiler-speedscope: Uncaught exception ghc-internal:GHC.Internal.Exception.ErrorCall:
IntMap.!: key 130096792749784 is not an element of the map
IPE backtrace:
HasCallStack backtrace:
error, called at libraries/containers/containers/src/Data/IntMap/Internal.hs:601:17 in containers-0.7-e3bc:Data.IntMap.Internal
Inspecting the file with ghc-events doesn’t really reveal much to me either. It seems the events are also generated by ghc if I do not call withStackProfilerForMyThread within the plugin…
I’m not sure how to continue in this direction either
If you share the .eventlog somehow, I can take a look.
At a first glance, a key is missing, but I can’t tell right away which one it is.
Inspecting the file with ghc-events doesn’t really reveal much to me either. It seems the events are also generated by ghc if I do not call withStackProfilerForMyThread within the plugin…
The .eventlog contains all events, withStackProfilerForMyThread just makes sure the profiled events are for this thread only.
So, this is expected and totally fine.
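For reference, a minimal sketch of how the wrapper can be called from a Core plugin pass. The module name and the signature withStackProfilerForMyThread :: IO a -> IO a are assumptions here, not confirmed API; check the ghc-stack-profiler package for the real ones:

```haskell
module MyPlugin (plugin) where

import GHC.Plugins
-- Assumed module name; check the ghc-stack-profiler package.
import GHC.Stack.Profiler (withStackProfilerForMyThread)

plugin :: Plugin
plugin = defaultPlugin { installCoreToDos = install }

install :: [CommandLineOption] -> [CoreToDo] -> CoreM [CoreToDo]
install _ todos = pure (CoreDoPluginPass "my-pass" pass : todos)

pass :: ModGuts -> CoreM ModGuts
pass guts =
  -- Assumed signature: withStackProfilerForMyThread :: IO a -> IO a.
  -- Restricts the sampled stacks to the thread running this pass.
  liftIO (withStackProfilerForMyThread (runThePass guts))

-- Placeholder for the actual symbolic-execution work.
runThePass :: ModGuts -> IO ModGuts
runThePass = pure
```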
That would be great! I’ve pushed the .eventlog to an empty repository here: GitHub - RobinWebbers/plugin-profile-eventlog. Not sure if this is a good way to share files, but I guess there is no option built into Discourse.
Yes, I’ve provided those flags to the build of the plugin.
When looking through the .eventlog, is there any signal that withStackProfilerForMyThread adds events to the log at all? I’m not sure what to look for, but I could not actually find anything in it.
I am still not 100% sure what is happening, but it looks like the following:
If you compile the plugin with -finfo-table-map but do not compile GHC with IPE info, then ghc-stack-profiler-speedscope fails to translate the .eventlog, because the info prov entries of the plugin are not written to the eventlog. However, we can find the IPE info, so we include it in our profile, leading to the crash we observe in your example eventlog. Fun fun fun.
Even if GHC is compiled with IPE, the info prov entries added by the plugin are probably still not written to the eventlog. Perhaps a GHC bug, or intended behaviour.
It looked like lookupIPE got stuck for some keys, but I can’t reproduce this any more. Something to keep an eye out for in the future.
To summarise, profiling the plugin with ghc-stack-profiler is quite tricky.
The way forward may be to compile GHC with the ipe transformer, but not to compile the plugin with -finfo-table-map.
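Assuming Hadrian’s ipe flavour transformer, that build would look roughly like this (a sketch, combining it with the transformers already used above):

```shell
# Sketch: build GHC itself with info-table provenance (IPE) info,
# while leaving -finfo-table-map off for the plugin build.
hadrian/build -j --flavour=default+ipe+profiled_ghc+no_dynamic_ghc binary-dist-dir
```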
To still get a reasonable profile for your plugin, you could use ghc-stack-annotations to insert cost-centre stacks manually. Using this approach, I produced the following speedscope image:
Not great, but we can see something and with enough StackAnnotations, the profile might be helpful.
Note, this only works with GHC 9.14.1+.
So, I am getting something, but it is hard to argue this is particularly useful as is.
It is also quite unfortunate that it is difficult to use StackAnnotations reliably in non-IO code.
Here is my example project:
Wild ideas to get IPE information for the plugin as well:
We need the IPE info from the plugin. We could try to index them somehow via ipedb and then instruct ghc-stack-profiler-speedscope to use ipedb to find IPE info instead of relying on the .eventlog.
But how do we get the IPE info from the plugin? I am not sure.
Hello! In Liquid Haskell, a GHC plugin, the procedure documented here [link] worked for me with GHC 9.14.1.
This is inspired by this oldish post about profiling GHC [link]. The idea is to build a Haskell program that loads the source files and the plugin using the GHC API. The program is linked with the profiling runtime, and can then load plugins compiled with profiling enabled.
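A minimal sketch of such a driver, assuming GHC >= 9.4 API shapes. The plugin module name, the target file, and the ghc-paths dependency are placeholders, not part of the procedure linked above:

```haskell
module Main (main) where

import GHC
import GHC.Driver.Session (DynFlags (..))
import GHC.Paths (libdir)  -- from the ghc-paths package

-- Compile Example.hs with MyPlugin enabled, inside this process.
-- Link this program with -prof to get a profile of the plugin's work.
main :: IO ()
main = runGhc (Just libdir) $ do
  dflags <- getSessionDynFlags
  setSessionDynFlags dflags
    { pluginModNames = mkModuleName "MyPlugin" : pluginModNames dflags }
  -- guessTarget takes the extra Maybe arguments from GHC 9.4 onwards.
  target <- guessTarget "Example.hs" Nothing Nothing
  setTargets [target]
  _ <- load LoadAllTargets
  pure ()
```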
The procedure sometimes breaks and needs an update. It was broken for a while until I investigated it today once more.
That said, I haven’t used this tooling myself. When I had to profile execution time, I used timestats a couple of times.
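For completeness, a sketch of the timestats approach; measureM and printTimeStats come from the package’s Debug.TimeStats module, and the labels below are made up:

```haskell
import qualified Debug.TimeStats as TimeStats  -- from the timestats package

-- Wrap interesting phases; wall-clock time is aggregated per label.
analyze :: IO ()
analyze = do
  _ <- TimeStats.measureM "symbolic-exec" (pure ())  -- your real work here
  _ <- TimeStats.measureM "solver-calls" (pure ())
  -- Print the accumulated per-label timings. Note that collection may need
  -- to be enabled explicitly; check the timestats documentation.
  TimeStats.printTimeStats
```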