specifying number of cores to use during multiprocessing?
hi, i'm wondering if there is a way to specify the number of cores you want a program to use when using the multiprocessing package?
thanks
~sarah

Hi Sarah,

I don't recall such a feature, and after a quick search I haven't found one, but I wouldn't be very surprised if it existed. What comes to mind is a worker thread pool, which can be limited in size. Libraries like pcall or eager-future2 provide and manage such a pool; it may require a small hack to limit the pool size, and then you just need to be sure you use that library for all your multiprocessing.

Cheers,
Andy

On 4 November 2015 at 05:55, Sarah Kenny <skenny.uofc@gmail.com> wrote:
> hi, i'm wondering if there is a way to specify the number of cores you want
> a program to use when using the multiprocessing package?
>
> thanks
> ~sarah
_______________________________________________
Lisp Hug - the mailing list for LispWorks users
lisp-hug@lispworks.com
http://www.lispworks.com/support/lisp-hug.html
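As a concrete illustration of the limited-worker-pool idea Andy describes, here is a minimal sketch using lparallel, a popular Common Lisp parallelism library that manages exactly such a pool (assumes lparallel is loaded, e.g. via `(ql:quickload :lparallel)`; the worker count 4 is an arbitrary example):

```lisp
;; lparallel runs parallel operations on a "kernel" of worker threads.
;; The worker count passed to MAKE-KERNEL caps how many cores the
;; library's parallel operations will occupy.
(setf lparallel:*kernel* (lparallel:make-kernel 4))  ; at most 4 workers

;; PMAP then distributes the work across those 4 workers only.
(lparallel:pmap 'vector (lambda (x) (* x x)) #(1 2 3 4 5))
```

This does not limit threads you create yourself outside the library, so, as Andy notes, you would need to route all your parallel work through the pool for the limit to hold.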
Since we've devolved to anecdotes:

1) When I build e.g. C++ systems, I do 'make -j <num-cores * 2>'. It generally compiles noticeably faster than 'make -j <num-cores>' for the systems I use. (Hyperthreading? Compiler IO?)

2) Some ancient Windows version (XP? 7?) benefited more from processor affinity than the contemporaneous Linux versions. Linux probably was doing a better job of keeping each thread on a given core for a longer time, by default. I remember using a monitor tool and watching individual threads on Windows switch between CPUs every refresh cycle. Setting CPU affinity made a big difference on those Windows-es and relatively little difference on those Linux-es.

Jeff

On Wed, Nov 4, 2015 at 10:37 AM, Tim Bradshaw <tfb@tfeb.org> wrote:
> On 4 Nov 2015, at 07:30, Pascal Costanza <pc@p-cos.net> wrote:
>
> If you try to write parallel programs, your goal should be to not
> oversubscribe the cores and create only as many threads as there are cores
> (or fewer, because LispWorks has background threads, and there are also
> other threads from other programs running on the same system).
>
> This is slightly dangerous advice, as it depends on the program a lot. What
> you really want is to keep the cores busy, but how you do that can depend a
> lot on how work is spread over threads. For instance, if you have some web
> server with a thread per request, then you probably want enormously more
> threads than you have cores, since they will spend almost all of their time
> waiting for I/O. I don't know if people still do write web applications
> like that rather than by using select & callbacks, but they certainly used
> to (it was how you were meant to do things in Java at one point, anyway).
>
> Even for compute-bound HPC-type workloads, which I now tend to deal with, the
> case is not simple, as 'compute bound' often seems to mean 'waiting for some
> other node to finish its bit'.
> Also, it turns out that the more you spend on a computing resource, the
> less time you spend on actually evaluating performance, which is very
> weird.
>
> --tim (not trying to start a fight: sorry if this reads as aggressive)
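Tim's point about I/O-bound threads can be demonstrated with a small sketch using bordeaux-threads, the portable Common Lisp threading layer (an illustration I am adding, not code from the thread; assumes bordeaux-threads is loaded):

```lisp
;; Oversubscribing threads is harmless when they mostly wait.
;; Each worker here stands in for a request handler blocked on I/O:
;; it sleeps, so it occupies no core while "waiting".
(defun simulate-io-workers (n)
  (let ((threads
          (loop repeat n
                collect (bt:make-thread
                         (lambda ()
                           (sleep 1))))))   ; stand-in for a blocking read
    (mapc #'bt:join-thread threads)))

;; 100 such threads on a 4-core box finish in roughly 1 second,
;; not 25, because the cores are idle while the threads wait:
;; (time (simulate-io-workers 100))
```

The thread count that keeps the cores busy therefore depends on how much of each thread's life is spent waiting, which is Tim's objection to a blanket "one thread per core" rule.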
I must be the only one having problems with network security programs then, which routinely seem to take over all the processors to 'scan' whatever the hell it is they're scanning...

On 11/4/15, 7:16 PM, "David McClain" <owner-lisp-hug@lispworks.com on behalf of dbm@refined-audiometrics.com> wrote:

> I have never before seen a software system so completely occupy the
> resources of a machine for its own compute bound chores.
On Wed, Nov 4, 2015 at 2:30 AM, Pascal Costanza <pc@p-cos.net> wrote:
> There are ways to abstract from the details. A popular choice for Lisp is
> the lparallel package which makes parallel programming easier in Common Lisp
> (but last time I heard had some issue with LispWorks in terms of performance
> - don't know if that's still the case).

The issue is only about achieving something that is rarely achieved in parallel abstractions: speeding up the classic Fibonacci function as is, without any user hints or intervention such as creating a single-threaded version of the function that executes after some cutoff point (this is what lparallel is sort of doing through macros, except the cutoff is decided by the runtime algorithm and it moves around spontaneously). My guess is that LispWorks just doesn't switch fast enough, possibly having to do with its model of going through safe points, in contradistinction to SBCL, which speeds up Fibonacci quite well.

Fibonacci represents the absolute worst-case scenario of a cheap parallelizable function, and it's not at all a problem that a particular system cannot achieve speedup in this one case. For the vast majority of real-world uses it won't matter.

Cheers,
lmj
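The manual-cutoff approach lmj describes can be written by hand with lparallel's fine-grained parallelism operators. A sketch (my illustration, not from the thread; assumes a kernel exists, e.g. `(setf lparallel:*kernel* (lparallel:make-kernel 4))`, and the cutoff of 20 is an arbitrary example, not a tuned value):

```lisp
(defun fib (n)
  "Plain single-threaded Fibonacci, used below the cutoff."
  (if (< n 2)
      n
      (+ (fib (- n 1)) (fib (- n 2)))))

;; DEFPUN defines a function suitable for fine-grained parallelism;
;; PLET evaluates its bindings potentially in parallel on the kernel.
(lparallel:defpun pfib (n)
  (if (< n 20)
      (fib n)                               ; below the cutoff, stay serial
      (lparallel:plet ((a (pfib (- n 1)))   ; the two branches may run
                       (b (pfib (- n 2))))  ; on separate workers
        (+ a b))))
```

The point of the explicit cutoff is to stop spawning parallel tasks once the remaining work is cheaper than the overhead of scheduling it, which is exactly the tuning lparallel tries to do automatically through its macros.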