#GNUParallel

2025-10-29

Today I introduced a much-needed feature to #GPUSPH.

Our code supports multi-GPU and even multi-node, so in general if you have a large simulation you'll want to distribute it over all your GPUs using our internal support for it.

However, in some cases, you need to run a battery of simulations and your problem size isn't large enough to justify the use of more than a couple of GPUs for each simulation.

In this case, rather than running the simulations in your set serially (one after the other), each using all GPUs, you'll want to run them in parallel, potentially each on a single GPU.

The idea is to find the next available (set of) GPU(s) and launch a simulation on it while sets are still available, then wait for a “slot” to free up and start the next simulation(s) as slots become free.

Until now, we've been doing this manually, by partitioning the set of simulations and starting each partition in a different shell.

There is actually a very powerful tool to achieve this on the command line: GNU Parallel. As with all powerful tools, however, it is somewhat cumbersome to configure to get the intended result. And after Doing It Right™ one must remember the invocation magic …

So today I found some time to write a wrapper around GNU Parallel that basically (1) enumerates the available GPUs and (2) appends the appropriate --device command-line option to the invocation of GPUSPH, based on the slot number.

#GPGPU #ParallelComputing #DistributedComputing #GNUParallel

grayrattus
2025-06-30

m.youtube.com/watch?v=MeWRG1YF

My talk about GNU Parallel:

If you would like to run any executable in parallel this talk might be for you!

Free Software Foundation (fsf@hostux.social)
2025-02-03

The January GNU Spotlight with Amin Bandali features 17 new releases, including #Artanis, #Coreutils, #Ed, #GNUMTools, #GNUParallel, #GNUShepherd, and more! Read it here: u.fsf.org/45o Big thanks to @bandali, all the devs, and other contributors!

2024-04-01

Does anyone know how to use GNU Parallel to execute multiple different commands at once? Something like the bash `foo & bar & baz &`? Trying to speed up my system upgrade script.

Todd A. Jacobs | Rubyist (todd_a_jacobs@ruby.social)
2023-09-30

@ervan Without changing your actual #RSpec or #CI setup, you can create multiple #Rake tasks to run specific tags or spec files. Then use #gnuparallel (or `parallel` from #moreutils) to run the tasks on multiple CPUs or cores. For GNU, see the `-j`, `--use-cores-instead-of-threads`, and `--use-sockets-instead-of-threads` flags.

You can also use subshells or background tasks for OS allocation in #bash, #zsh, or #fish. Also, consider threading without a GIL on #TruffleRuby.

2022-11-09

4/5 #gnuparallel is the simplest way to do some simple #parallel computing. Several tasks in life are embarrassingly parallel, and #gnuparallel is a great way to take advantage of all the cores of my machines. I probably use 0.1% of its features, but that's enough for me. If something takes #commandline inputs, it can be trivially parallelized with #gnuparallel. Interestingly, #gnuparallel is a single, nearly 15,000-line #perl script.
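A typical instance of the “anything that takes command-line inputs” pattern (the `.log` files here are illustrative):

```shell
# Compress every .log file in the current directory; GNU Parallel runs
# one gzip per CPU core by default, feeding each file name as an argument.
parallel gzip ::: *.log
```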

Free Software Foundation (fsf@hostux.social)
2022-01-07

Happy 20th birthday to GNU Parallel! Read Parallel author Ole Tange's reflections on the occasion here: u.fsf.org/3hi #gnuparallel

Tygeryder (BasriU)
2022-01-06

GNU Parallel's 20th birthday - Free Software Foundation - gnu.org/software/parallel/20th
Can't wait to try what it can do to speed up my Thunderbird daily backup script, which lately takes 30 minutes gzipping all the folders :)
Happy birthday!

2018-04-23
Speak of the GNU and it makes a release:

http://savannah.gnu.org/forum/forum.php?forum_id=9146

> The GNU Parallel 2018 book is now available: http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

Cool! And Lulu is still a thing, apparently.

> Klaatu covers the documentation of GNU parallel in episode 12x15 http://gnuworldorder.info/

Oh hey shoutout to @klaatu! :-)

#gnuparallel

/via http://www.tuxmachines.org/node/111044
/via https://mastodon.technology/@tuxmachines/99907810922363411 @tuxmachines
