GHC Profiling a Cabal Project with an Interactive Application

Hi, all.

I’m trying to learn about GHC profiling. Every doc or book I have come across only shows how to use GHC profiling on a small program, compiled directly with GHC, that has one or two functions, executes them, and then terminates.

Are there any resources that discuss using GHC profiling on a large Cabal project with an interactive application that does not terminate? Or even something like a Yesod application?

I have been able to set up GHC profiling in my Cabal project, so that I can get a .prof file after running a small program that terminates. But I don’t see a way to get this to work with my actual application, which won’t terminate.

EDIT: I think what I am looking for is tying the start and stop of profiling to the start and stop of a specific function within a larger application. Does anyone know how that can be achieved? So the report would be generated on the termination of a specified function, not of the program as a whole.

5 Likes

Hi @morphismz.

I feel like profiling Haskell programs is both easy and really cool!
It would be great if this feeling were more widespread. If you come up with a good place to write down the following information more prominently, we should try to put it there.

To profile a Haskell executable from a large Cabal project, you need to, first and foremost, compile your executable with profiling. To do so, pass --enable-profiling to Cabal, and use late cost centre profiling (--profiling-detail=late) so that profiling does not interfere with optimisation.

cabal build --enable-profiling --profiling-detail=late exe:my-executable-name

Second, you must run the program with the RTS flags to produce a profile. I suggest using -pj to produce a flame graph profile in JSON that can be loaded into https://speedscope.app. -pj must be passed after +RTS, or in between +RTS and -RTS, because it is a runtime system option.

cabal list-bin exe:my-executable-name
> /path/to/exe
/path/to/exe +RTS -pj -RTS
> <program is now running>

At this point, you can simply load the my-executable-name.prof file into https://speedscope.app to get an interactive flame-graph view of the profile.


Profiling your dependencies too

Even though this is already plenty good, it’s often useful or necessary to profile your dependencies too. The reason this doesn’t happen by default is that Cabal command-line flags apply to local packages only, and all dependencies fetched from Hackage count as non-local.

To apply profiling and profiling-detail: late to all packages/dependencies in your project, add to the project’s cabal.project:

package *
    profiling: true
    profiling-detail: late

Now recompile your executable, re-run with +RTS -pj -RTS, and reload speedscope to get a more detailed profile covering all relevant packages. You should see all packages being built with profiling, for example:

$ cabal build exe:cabal
Resolving dependencies...
Build profile: -w ghc-9.10.1 -O1
In order, the following will be built (use -v for more details):
 - base16-bytestring-1.0.2.0 (lib)  --enable-profiling (requires build)
 - base64-bytestring-1.2.1.0 (lib)  --enable-profiling (requires build)
 - echo-0.1.4 (lib)  --enable-profiling (requires build)
 - cryptohash-sha256-0.11.102.1 (lib)  --enable-profiling (requires build)
 - ed25519-0.0.5.0 (lib)  --enable-profiling (requires build)
 - hsc2hs-0.68.10 (exe:hsc2hs)  --enable-profiling (requires build)
 - hashable-1.4.7.0 (lib)  --enable-profiling (requires build)
 - alex-3.5.1.0 (exe:alex)  --enable-profiling (requires build)
 - open-browser-0.2.1.0 (lib)  --enable-profiling (requires build)
 - regex-base-0.94.0.2 (lib)  --enable-profiling (requires build)
 - safe-exceptions-0.1.7.4 (lib)  --enable-profiling (requires build)
 - splitmix-0.1.0.5 (lib)  --enable-profiling (requires build)
 - th-compat-0.1.5 (lib)  --enable-profiling (requires build)
 - tar-0.6.3.0 (lib:tar-internal)  --enable-profiling (requires build)
 - resolv-0.2.0.2 (lib:resolv)  --enable-profiling (requires build)
 - zlib-0.7.1.0 (lib)  --enable-profiling (requires build)
 - network-3.2.3.0 (lib:network)  --enable-profiling (requires build)
 - lukko-0.1.2 (lib)  --enable-profiling (requires build)
 - async-2.2.5 (lib)  --enable-profiling (requires build)
 - Cabal-syntax-3.13.0.0 (lib)  --enable-profiling (cannot read state cache)
 - regex-posix-0.96.0.1 (lib)  --enable-profiling (requires build)
 - random-1.2.1.2 (lib)  --enable-profiling (requires build)
 - network-uri-2.6.4.2 (lib)  --enable-profiling (requires build)
 - tar-0.6.3.0 (lib)  --enable-profiling (requires build)
 - Cabal-3.13.0.0 (lib)  --enable-profiling (cannot read state cache)
 - edit-distance-0.2.2.1 (lib)  --enable-profiling (requires build)
 - HTTP-4000.4.1 (lib)  --enable-profiling (requires build)
 - hackage-security-0.6.2.6 (lib)  --enable-profiling (configuration changed)
 - cabal-install-solver-3.13.0.0 (lib)  --enable-profiling (configuration changed)
 - cabal-install-3.13.0.0 (lib)  --enable-profiling (configuration changed)
 - cabal-install-3.13.0.0 (exe:cabal)  --enable-profiling (configuration changed)
20 Likes

Great info (one piece of the big puzzle at least).

You didn’t mention stopping the app. I assume speedscope.app operates on a point-in-time snapshot of the .prof file, though I do remember hearing about a viewer that updates as the profile data updates.

About profiling your dependencies: is it ever realistic or useful, in practice, to profile one’s app but not the dependencies?

2 Likes

Hi @romes

Thank you, this is a great response. I have already done much of what you said, but it took quite a while to find the pieces scattered across different documentation. So it’s nice to have it all in one place, well laid out :slight_smile: And it’s great to know about speedscope.

But I think this misses the main question I’m looking to have answered. I have already set up basic profiling in my Cabal project, following essentially the same steps you describe (only I have more of it directly in the cabal files rather than on the command line), and I can generate a .prof file after running a small terminating program.

The problem I’m running into is trying to profile a large interactive application. This application does not naturally stop running; in fact, if it did stop running, that would be a bug. The .prof file only seems to be generated after the program completes, so in my use case I never actually end up with a (non-empty) .prof file.

Is there some option to generate the .prof file at regular intervals, or after a specific event that has been marked in the code? How does one usually approach profiling large and constantly running applications?

tldr; I have successfully been able to use profiling on extremely simple programs, but I’m looking for a resource that introduces more advanced profiling techniques able to analyze large, constantly running applications. All the resources I have come across only go as far as profiling a trivial, terminating program.

EDIT: I should add, I do not mean that I am trying to analyze an already-running application. I’m thinking more of something like a web application: I can build it with profiling and start running it, but once it is running, there is no natural turn-off point. There are a few natural or key events/functions which I want included in the cost centers. The profiling does not need to be instantaneous or “live”, but it should not rely on the application/program terminating.

1 Like

I have done live heap profiles on similar applications using GHC’s eventlog combined with eventlog2html. The important piece is the GHC RTS option --eventlog-flush-interval (see §5.7, “Runtime system (RTS) options”, in the Glasgow Haskell Compiler 9.10.1 User’s Guide).
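For example, the run might look something like this (path and flags are illustrative: -l turns on the eventlog, -hT requests a heap profile by closure type, and the flush interval is in seconds):

/path/to/exe +RTS -hT -l --eventlog-flush-interval=1 -RTS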

A crude live render can then be accomplished with a file-watcher like entr, e.g.

ls *.eventlog | entr -s 'eventlog2html $0'

Granted, this was only for heap profiling, but hopefully it helps you. Further efforts in live profiling have led to well-typed/ghc-eventlog-socket (which pipes the GHC eventlog stream to a UNIX domain socket) and ghc-debug (hosted on the GHC GitLab), but these are a little more involved.

4 Likes

@velveteer Beautiful, this looks quite like what I want. It seems like I should give the whole RTS section a good read through as well.

2 Likes

I think one more thing I am looking for is to tie profiling to a specific function within the application. So profiling could start when this function starts to be evaluated, and the .prof file would be generated when this function finishes evaluating. I realize this may not be a fully sensible request, since the function could be called multiple times, or call itself. But I’m wondering if there is some utility along these lines, one that allows analyzing a specific function.

It’s important that the .prof file, or whatever other logs, be generated on the termination of the function, not of the program as a whole.

Another possibility is that the function is used to define a (theoretically) infinite structured value, as allowed by Haskell’s non-strict semantics.
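Presumably something along the lines of the well-known definition from the haskell.org front page:

primes = filterPrime [2..]
  where filterPrime (p:xs) =
          p : filterPrime [x | x <- xs, x `mod` p /= 0]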

So if filterPrime is the (local) function being profiled, there’s no way to know with certainty when it will “terminate”, i.e. stop being used by primes - that would require knowing exactly when primes stops being used by its callers, and so forth. Hence the suggestion by velveteer regarding “live rendering”: since your program is intended to run indefinitely, the logs will need to be streamed in some way to avoid running out of storage.

1 Like

You can use the --no-automatic-time-samples RTS flag along with the start and stop actions from this module: GHC.Profiling.
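A minimal sketch of that approach (handleOneRequest is a hypothetical stand-in for whatever you actually want to measure):

-- Build with profiling and run with:
--   ./my-exe +RTS -p --no-automatic-time-samples -RTS
import GHC.Profiling (startProfTimer, stopProfTimer)

-- Hypothetical stand-in for the interesting part of the application.
handleOneRequest :: IO ()
handleOneRequest = print (sum [1 .. 1000000 :: Int])

main :: IO ()
main = do
  startProfTimer      -- begin collecting time samples here
  handleOneRequest
  stopProfTimer       -- stop collecting; work outside this window is not sampled
  -- Note: the .prof file itself is still only written when the program exits normally.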

Alternatively you can disable automatic cost centre insertion and do so manually with SCC pragmas.
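A sketch of the manual route (expensivePart is hypothetical); with a low profiling-detail setting (e.g. profiling-detail: none, I believe), the report is limited to the cost centres you annotate yourself:

-- An SCC pragma attaches a named cost centre to an expression.
expensivePart :: [Int] -> Int
expensivePart xs = {-# SCC "expensivePart" #-} sum (map (* 2) xs)

main :: IO ()
main = print (expensivePart [1 .. 1000000])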

4 Likes

This is a great guide. Would you consider donating/adding it to the Cabal documentation? The documentation was recently restructured to support typical developer tasks, and a profiling guide is missing under the Cabal guide section. :slight_smile:

@TeofilC awesome, thanks for pointing this module out to me; I couldn’t seem to find it in any of my searches. I tried using stopProfTimer, but the .prof file remains empty even after this function should have been executed by the code. What mechanism actually causes the .prof file to be written? Is there a way to trigger it with a function like stopProfTimer?

Could you explain a bit about how manual SCC insertion is an alternative to the functions in GHC.Profiling? My understanding is that the .prof file still would not be generated until the termination of the program.

1 Like

These two techniques are helpful for collecting profiling information only for parts of your program’s run and/or only for specific functions.

I don’t think it’s possible to emit .prof files as your program is running (at the moment). In general, when profiling a server-style application, what I would do is start the program, hit the endpoints you want information about, stop the application, and then analyse the .prof file.

Why do you want to avoid the program terminating? If this is a strong requirement for you, then your best bet is what’s mentioned here: GHC Profiling a Cabal Project with an Interactive Application - #5 by velveteer (this uses the newer eventlog format, which actively emits information as your program runs rather than only at the end), combined with the start/stop functions I mention. You should note that profiling always has a bit of a performance cost, so it’s often not desirable to do it in production.

You’re right. It’s a helpful way to limit the output from profiling to just the functions you care about, but doesn’t help with your problem of wanting live profiling information without stopping the program.

1 Like

Yes. By all means, do add it to where you see fit.

1 Like

How come ‘late’ isn’t specified as an option for the ‘profiling-detail’ project option?

Because you’re looking at the old docs for 3.4; the docs for 3.12 (and “stable”) do list it: https://cabal.readthedocs.io/en/stable/setup-commands.html#cmdoption-runhaskell-Setup.hs-configure-profiling-detail

1 Like

@TeofilC I don’t need to avoid the program terminating. But when I terminate the program by killing the process, I end up with an empty .prof file. I would need to significantly alter the program for it to terminate on its own, and I was hoping to avoid having to build some termination condition into the program.

What do you have in mind when you say “stop the application”? When I run my application at the command line via cabal, Ctrl + C doesn’t stop the program, so I have to kill the process.

1 Like

Pressing Ctrl+C once is what I had in mind (pressing it multiple times will kill the program immediately and not write a file). It sounds like your application has maybe overridden the signal handler in some way? You mentioned Yesod in your original post. I know Warp, the web server underlying Yesod, might do something in this area. You might want to look at setGracefulShutdownTimeout in Network.Wai.Handler.Warp.

But it sounds like the issue you are facing is that your application doesn’t honor Ctrl+C. Can you share your code?
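For reference, a rough sketch of how that might be wired up with Warp settings (the port, timeout, and runWithGracefulShutdown name are just illustrative, and this assumes a POSIX system via the unix package):

import Network.Wai (Application)
import Network.Wai.Handler.Warp
  ( defaultSettings
  , runSettings
  , setGracefulShutdownTimeout
  , setInstallShutdownHandler
  , setPort
  )
import System.Posix.Signals (Handler (Catch), installHandler, sigINT, sigTERM)

runWithGracefulShutdown :: Application -> IO ()
runWithGracefulShutdown app = runSettings settings app
  where
    settings =
      setPort 3000
        . setGracefulShutdownTimeout (Just 10)       -- allow 10s for in-flight requests
        . setInstallShutdownHandler installHandlers
        $ defaultSettings
    -- On Ctrl+C or SIGTERM, close the listening socket so runSettings returns,
    -- the program exits normally, and the RTS writes the .prof file.
    installHandlers closeSocket = do
      _ <- installHandler sigINT  (Catch closeSocket) Nothing
      _ <- installHandler sigTERM (Catch closeSocket) Nothing
      pure ()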

1 Like

@TeofilC Got it, so it sounds like killing the process is the reason that no .prof file is being generated. Thank you for the pointer to setGracefulShutdownTimeout. I’ll see if I can set up something like this in my code.

Unfortunately, I cannot share the code. Sorry, I know that is rude and annoying since I’m the one asking for help, but your comments have been very helpful and I think I can get something working. I’ll spend some more time trying to figure out why Ctrl+C is not being honored. I guess I just didn’t realize that my “ungraceful” termination of the program was the reason no .prof file was being written; I thought it would be written on execution of stopProfTimer. It makes sense in hindsight, knowing that .prof files cannot be generated while the program is running.

Thanks again for the help!

2 Likes

No worries at all! That’s not rude at all.

2 Likes

(All of the following may be outdated.) I had a very similar question two years ago. It was about a Yesod application. Two things helped:

  1. have a control thread that catches the Ctrl+C signal and gracefully terminates the program (a sketch follows this list). Beware that a standard Yesod application’s main = warp will catch all sorts of exceptions and try to keep the server running. See this thread for remedies.
  2. build in an option that gracefully shuts down the program after a specified time
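A minimal sketch of option 1, assuming a POSIX system (the unix package) and a hypothetical runTheApp standing in for the real application:

import Control.Concurrent (myThreadId, threadDelay)
import Control.Exception (throwTo)
import System.Exit (ExitCode (ExitSuccess))
import System.Posix.Signals (Handler (Catch), installHandler, sigINT)

-- Hypothetical stand-in for the real (Yesod/Warp) application loop.
runTheApp :: IO ()
runTheApp = threadDelay 1000000 >> runTheApp

main :: IO ()
main = do
  mainTid <- myThreadId
  -- On Ctrl+C, throw ExitSuccess to the main thread so the RTS shuts down
  -- normally and the .prof file is written.
  _ <- installHandler sigINT (Catch (throwTo mainTid ExitSuccess)) Nothing
  runTheApp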

Either approach ensures that the .prof file is written and not empty. However, for the reasons listed in the question above, the .prof file was not very helpful for me. In any case, I liked the way profiteur can visualize the .prof file content.

If possible, separate the functions doing all the work from the web server stuff, so that a mock application with finite lifetime can be used for profiling. It may be helpful for test suites, too.
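For instance, a hypothetical profiling harness along those lines, where coreWork stands in for the real work functions factored out of the web layer:

import Control.Monad (forM_)

-- Placeholder for the application's real logic; in a real project this would
-- be imported from the library component.
coreWork :: Int -> Int
coreWork n = sum [1 .. n]

main :: IO ()
main =
  -- Exercise the work functions with representative inputs, then terminate,
  -- so the .prof file is written as usual.
  forM_ [1 .. 100 :: Int] $ \i -> print (coreWork (i * 10000))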

3 Likes