IRC logs of #boinc for Sunday, 2012-09-30

00:24 *** OneMiner has joined #boinc

00:27 <OneMiner> Hi. o/     Kinda new to BOINC, I've been folding for a long time and I'm using my GPU to mine bitcoins now. Looking for a good task for my CPU. I'm interested in LHC@home and Test4Theory@home. Test4Theory sounds like an obvious win but I'm not sure about LHC@home. Anybody know if anything has come from running these simulations?

00:28 <PovAddict> I don't know if LHC@Home has work again

00:28 <OneMiner> It does, I'm crunching those numbers now.

00:29 <PovAddict> many years ago, they ran lots of simulations of the particle beam travelling around the particle accelerator, while the real thing was still in construction... and I assume those did help them

00:29 <OneMiner> Well, I guess what I'm asking is more like: Which will do more good per unit of time?

00:30 <OneMiner> Trying to make an impact on something. Wish I could donate CPU time to fusion research.

00:30 <PovAddict> I'm glad you're asking 'more good' instead of 'more credits'

00:31 <OneMiner> I couldn't care about credits. What I want is a better world. One core at a time. :3

00:31 <OneMiner> I see it as a donation. I want effect for my watts.

00:31 <PovAddict> I'm very out of date, I don't know what specific work is being done by LHC@Home now, and I don't know what Test4Theory is about at all

00:32 <PovAddict> but it's a personal choice... and if their websites aren't clear enough on their goals, please complain

00:32 <OneMiner> Test4Theory is crunching data on actual LHC tests. So I would receive work units that would be discarded otherwise, I believe.

00:33 <MTughan> OneMiner: Check out WorldCommunityGrid.

00:33 <PovAddict> ohhh

00:34 <PovAddict> my multi-year-old knowledge was that there was little volunteer computing could do with the data of actual LHC experiments

00:34 <OneMiner> There's a shortage of processing power for the LHC. So they only crunch the most juicy data, discarding other bits that could potentially contain useful info.

00:35 <PovAddict> because they smashed two protons together and collected terabytes of data from different sensors after that collision alone

00:35 <OneMiner> From the horse's mouth: This project uses CERN-developed virtual machine technology for full-fledged LHC event physics simulation on volunteer computers.  Requires that you install VirtualBox on your computer

00:36 <PovAddict> and they couldn't really analyze individual fragments of data in isolation

00:36 <PovAddict> so they had to use their own supercomputers

00:36 <PovAddict> oh, VMs

00:36 <OneMiner> Oh snap! I think I got it wrong. It's a simulation. :(

00:36 <PovAddict> bleh

00:37 <OneMiner> Checking that MTughan.

00:38 <OneMiner> Darn.

00:38 <OneMiner> VMs don't bother me. 4GB of RAM so I'm ok.

00:40 <OneMiner> I loves me some LHC but these projects don't sound very useful.

00:43 <OneMiner> Ok, I'll just look around and idle here for a bit. Maybe something will pop up.

00:44 <MTughan> I only crunch for two projects right now: WCG and PrimeGrid. I don't think you'll be all that interested in PrimeGrid, but it's an option.

00:45 <OneMiner> Not unless primes can help people in some way.

00:46 <MTughan> They help with some unproven-as-yet math theorems, but that's about it.

00:46 <PovAddict> mathturbation

00:46 <OneMiner> I coulda sworn there was a boinc project that had to do with fusion research. That would be my top pick considering climate change and everything.

00:47 <OneMiner> But alas.....

00:48 <MTughan> Well, here's a big list of projects. http://boincstats.com/en/page/projectPopularity

00:48 <Romulus> Title: BOINCstats/BAM! | Project popularity (at boincstats.com)

00:48 <OneMiner> I'll take a look.

00:49 <MTughan> You could always crunch for BURP. :P

00:49 <MTughan> (j/k)

00:49 <OneMiner> Over my head.

00:50 <OneMiner> Nah, just looked it up.

00:50 <OneMiner> If we had fusion I'd go for it.

00:50 <OneMiner> Ummm, that's confusing. If fusion power plants were real I'd do BURP.

00:51 <MTughan> BURP stands for the Big Ugly Rendering Project. You render scenes using a program called Blender.

00:51 <MTughan> Nothing to do with Fusion, I was just making a joke.

00:51 <PovAddict> BURP is 3D image rendering

00:51 <OneMiner> Got it. I'm on the project page now.

00:51 <PovAddict> you won't help humanity :P

00:52 <OneMiner> Given free power I wouldn't see why not. But my clocks are tied to CO2 so I want something that does much good.

00:52 <OneMiner> (to offset power used)

00:53 <PovAddict> buy solar panels too

00:53 <OneMiner> haha not in a position to. I'd love it though. All I can give is cycles ATM.

00:54 * PovAddict recently spent more than 24 CPU hours in http://stuff.povaddict.com.ar/flame/test.html

00:55 <OneMiner> Nice. Fractal?

00:55 <PovAddict> yep

00:55 <PovAddict> zoom in!

00:55 <OneMiner> Me likey.

00:56 <OneMiner> Fractals amaze and inspire me to a degree. Shocking and sense making that nature uses fractal designs.

00:56 <PovAddict> the entire image at the deepest zoom level is around 100 megapixels

01:00 <PovAddict> I had to render it in 32 sections, then I joined them together, and split the resulting big image into tiles for the image zooming software

01:00 <PovAddict> I had to use 32 sections for the render because of RAM usage

01:00 <OneMiner> Wow, how much did it consume?

01:01 <OneMiner> Also, how much would it have consumed to render it all at once?

01:01 <PovAddict> at that resolution you would need like 50GB of RAM to render it in one go

01:01 <OneMiner> Dang.

01:01 <PovAddict> it's directly proportional to the image size

01:02 <PovAddict> the processing ends up being quite wasteful, but it's unavoidable

01:03 <OneMiner> Sure.

01:03 <PovAddict> it takes random samples, and if after all the transformations by the math functions, the sample ends up off screen, it's just dropped

01:04 <OneMiner> Give it 20 years and that processing will be trivial and efficient.

01:04 <OneMiner> Don't follow so much. But that's ok. I'll need to get some sleep soonish. Brain no workie.

01:04 <PovAddict> if I render the whole image, I need X RAM, and all of the samples end up in the image

01:05 <PovAddict> if I render only the top half of the image, I only need X/2 RAM, but all samples that end up in the bottom half are thrown away

01:06 <PovAddict> and when I render the bottom half, it will calculate *exactly the same samples*, it will just keep the ones that fall on the bottom and drop what falls on the top

01:06 <OneMiner> Oh wow I get it. All the corners are cropped. There has to be a better way....

01:06 *** Edgeman2 has joined #boinc

01:06 <OneMiner> That is a lot of wasted cycles. Especially because you had to chop it so many times.

01:07 <PovAddict> what's annoying is that it does the same calculations, it just keeps a different subset of the results
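[Editor's note: PovAddict's point about recomputed samples can be sketched in a few lines. This is a minimal chaos-game toy, not flam3 itself; the two affine maps are invented stand-ins for a real flame's transforms.]

```python
import random

# Two made-up affine maps standing in for a real IFS (hypothetical example).
TRANSFORMS = [
    lambda x, y: (0.5 * x + 0.1, 0.5 * y),
    lambda x, y: (0.5 * x - 0.1, 0.5 * y + 0.3),
]

def render_region(n_samples, y_min, y_max, seed=42):
    """Run the chaos game; count samples whose y lands in [y_min, y_max).

    The full sample stream is always computed -- restricting the region
    only changes which samples are *kept*, which is exactly the waste
    being discussed above.
    """
    rng = random.Random(seed)
    x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
    kept = 0
    for _ in range(n_samples):
        x, y = rng.choice(TRANSFORMS)(x, y)
        if y_min <= y < y_max:
            kept += 1
    return kept

# Rendering the two halves separately repeats every iteration: total work
# is 2 * n_samples, but the kept samples only sum to one full render.
full = render_region(10_000, 0.0, 1.0)
top = render_region(10_000, 0.5, 1.0)
bottom = render_region(10_000, 0.0, 0.5)
```

With the same seed, all three calls walk exactly the same point sequence, so the halves add up to the full render while costing twice the iterations.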

01:08 <efc> Clearly, we need to get you a 256 gb ram motherboard.

01:08 *** Edgeman has quit IRC

01:08 <OneMiner> Perhaps there's a "messy sides" setting?

01:08 <OneMiner> I dunno, that probably wouldn't even work because you'd have to join them after.

01:09 <PovAddict> efc: one plan I had was to use directly-networked computers

01:09 <OneMiner> I'm going to build a video editing box for a friend of mine soonish. 32GB of RAM, so sweet.

01:10 <PovAddict> efc: have two computers calculate random samples using different seeds, if the result falls on 'its half', then it's added to the local buffer, if it falls on the other half then it's sent over the network to the other computer

01:12 *** yoyo[RKN] has joined #boinc

01:12 <PovAddict> both computers keep a buffer with half the image, accumulating the samples they render that fall on that half and the samples they receive from the other pc

01:13 <PovAddict> efc: however, I fear that exchanging raw samples like that will need way too much bandwidth, and I don't feel like setting up fibre channel at home
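[Editor's note: a toy, in-memory version of the two-machine scheme PovAddict describes. Lists stand in for the network link and for each machine's accumulation buffer; every name here is invented for illustration.]

```python
import random
from collections import deque

def step(node_id, sample_y, buffers, links):
    """One sample on one node: keep it locally if it falls on this
    node's half of the image, otherwise 'send' it to the other node."""
    owner = 0 if sample_y < 0.5 else 1
    if owner == node_id:
        buffers[node_id].append(sample_y)
    else:
        links[owner].append(sample_y)  # queued for the owning machine

buffers = [[], []]                       # each node accumulates its half
links = [deque(), deque()]               # stand-in for the network link
rngs = [random.Random(0), random.Random(1)]  # different seeds per node

for _ in range(1000):
    for node in (0, 1):
        step(node, rngs[node].random(), buffers, links)

# Each node drains its inbound link into its own buffer.
for node in (0, 1):
    buffers[node].extend(links[node])
```

Unlike the split-render approach, no sample is thrown away: every one of the 2000 samples ends up accumulated on exactly one node. The open question from the chat remains the link bandwidth needed to ship the crossing samples.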

01:13 <efc> Is this working by starting in screen space and casting backwards, or light source casting forwards?

01:14 <PovAddict> or whatever bus it is that HPC clusters use nowadays

01:14 <OneMiner> There's eSATA, could you use that?

01:14 <MTughan> Probably too low-level.

01:14 <OneMiner> Right on. Over my head again.

01:14 <PovAddict> efc: it's an IFS, there's not much of a concept of 'light' :P

01:15 <OneMiner> Booting into VM for Test4Theory now.

01:16 <efc> Maybe Box A could render square A, keep track/accumulate scraps 50% outside of the box, forward those when done

01:17 <OneMiner> Hmm.... Box B doesn't need a fast connection now that I think of it. It can queue it up right? As long as box A can store the data it needs to send it doesn't care how fast the transfer is. Right?


01:18 <PovAddict> accumulating the samples outside the box won't work, since that's what the algorithm always does, and as I said the accumulation buffer eats RAM

01:19 <PovAddict> storing the actual sample data instead of the accumulated buffer might work, since I can do it on disk

01:19 <PovAddict> since it would be a big sequential file, not random access

01:19 <PovAddict> but I don't know how big

01:19 <PovAddict> isn't gigabit ethernet faster than hard disks even in practice?

01:19 <MTughan> No.

01:20 <MTughan> SATA 6Gb/s SSDs can push 550MB/s+. Gigabit theoretically tops out at 125MB/s.
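[Editor's note: MTughan's figures check out arithmetically; gigabit's theoretical ceiling is well under a modern SATA SSD's sequential rate.]

```python
# Gigabit Ethernet line rate vs SATA SSD throughput (figures from the chat).
gigabit_bits_per_s = 1_000_000_000
gigabit_MB_per_s = gigabit_bits_per_s / 8 / 1_000_000  # bits -> megabytes
ssd_MB_per_s = 550  # typical SATA 6 Gb/s SSD sequential read, as quoted
```

125 MB/s is a theoretical top end before framing and protocol overhead, so the practical gap is even wider.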

01:20 <efc> probably better latency

01:20 <MTughan> I'm not so sure about that, but they're likely more comparable there.

01:21 <PovAddict> latency doesn't matter too much

01:22 <PovAddict> if latency affects throughput, I can use larger buffers :P

01:23 <PovAddict> I bet TCP could cope with earth-mars communication if you could set the TCP sliding window to 1GB or so :P
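[Editor's note: the half-joking Earth-Mars figure can be checked with the bandwidth-delay product, the window size needed to keep a link busy. The link speed and light-time below are assumptions for illustration only.]

```python
# Bandwidth-delay product: window = bandwidth * round-trip time.
# Earth-Mars one-way light time ranges from roughly 3 to 22 minutes;
# take ~12.5 minutes as a mid value (assumption for illustration).
bandwidth_bytes_per_s = 1_000_000_000 / 8  # assume a 1 Gb/s link
rtt_s = 2 * 12.5 * 60                      # round trip of ~25 minutes
window_bytes = bandwidth_bytes_per_s * rtt_s
```

That works out to ~187 GB, so even the joked-about 1 GB window (roughly the maximum TCP window scaling allows) would keep a 1 Gb/s link only fractionally utilized; it would suffice for a link of a few Mb/s.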

01:25 <PovAddict> it would be great if I could just mmap a 100GB disk file to use as image accumulation buffer

01:25 <PovAddict> unfortunately, the access is literally random

01:25 <PovAddict> and uniform enough that disk caches can't help much

01:26 <PovAddict> disk caches work under the assumption that the cached data is significantly more likely to be accessed soon than anything else

01:30 <PovAddict> hm, perhaps if the function isn't *too* chaotic, the random inputs could be adjusted to make the outputs more likely to have cache locality

01:32 <PovAddict> eg. generate a million random points, *sort them*, and run them through the IFS, maybe that will make the transformed points go in a reasonable order such that a LRU disk cache can help
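[Editor's note: a rough way to test the sorting idea above, using a single well-behaved affine map and total seek distance between consecutive buffer offsets as a crude locality metric. All numbers and the map are invented; a real flame alternates between several maps, which weakens the effect.]

```python
import random

def offsets(points, f, width=1000):
    """Buffer offsets hit after applying map f to each (x, y) point."""
    out = []
    for x, y in points:
        tx, ty = f(x, y)
        out.append(int(ty * width) * width + int(tx * width))
    return out

def total_seek(offs):
    """Sum of absolute jumps between consecutive accesses -- a crude
    stand-in for how disk-cache-unfriendly the access pattern is."""
    return sum(abs(b - a) for a, b in zip(offs, offs[1:]))

rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(5000)]
affine = lambda x, y: (0.5 * x + 0.2, 0.5 * y + 0.1)  # one contractive map

unsorted_cost = total_seek(offsets(pts, affine))
sorted_cost = total_seek(offsets(sorted(pts, key=lambda p: (p[1], p[0])), affine))
```

Because the map is monotone in each coordinate, sorted inputs produce nearly sequential writes; the open question from the chat is whether a genuinely chaotic IFS preserves enough of that ordering.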

01:51 <efc> 505 gigs, for a 100megapixel image? You'd think 100mp->400megs, maybe 800 if higher precision

01:52 <PovAddict> I didn't say 500 gigs :P

01:52 <PovAddict> but... it's a bunch of floats per pixel

01:52 <efc> oops, 50 gigs, sorry

01:56 <efc> 4x64bit floats, 3.2 gigs

01:57 <PovAddict> maybe there's some supersampling going on

01:57 <efc> would need to understand the rendering process better to say much

02:00 <efc> sounds like good material to write research materials forever

02:03 <PovAddict> indeed supersampling

02:03 <PovAddict> the input file says supersample=4

02:03 <PovAddict> so multiply your estimates by 16 :P
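[Editor's note: redoing efc's estimate with the supersample factor lands right on the figure PovAddict quoted earlier. The 4-floats-per-bucket assumption follows efc's "4x64bit floats" guess above.]

```python
# Accumulation buffer size: pixels * supersample^2 * floats * 8 bytes.
pixels = 100_000_000          # ~100 megapixel final image
supersample = 4               # from the input file; buffer is 4^2 = 16x larger
floats_per_bucket = 4         # e.g. color channels + density, 64-bit each
bytes_total = pixels * supersample**2 * floats_per_bucket * 8
```

That gives 51.2 GB, matching the "like 50GB of RAM to render it in one go" estimate from earlier in the conversation.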

02:04 <PovAddict> I should try rendering with supersample=1 to understand its visual effects better

02:07 <PovAddict> efc: there is a nice summary of the algorithm on http://en.wikipedia.org/wiki/Fractal_flame

02:07 <Romulus> Title: Fractal flame - Wikipedia, the free encyclopedia (at en.wikipedia.org)

02:08 <PovAddict> and the full details in http://flam3.com/flame.pdf

02:10 <efc> hmm i see what you mean, probably almost no locality of reference

02:16 <efc> or maybe not, i dunno, i'd probably turn down the res and declare victory.

04:11 *** efc has quit IRC

04:16 *** pppingme has quit IRC

04:17 <dddh> hm

04:17 <dddh> current desktop PCs have 64 gb of ram

04:18 <synapt> I think you mean 'can'?

04:18 <synapt> Definitely don't see many desktop PC's with 64 gigs by default

04:18 <synapt> :P

04:21 <dddh> my current desktop has 64 gb (8x8) + i7 3930K + nvidia gtx 580

04:22 <dddh> synapt: thought about using it for einstein @ home

04:24 *** pppingme has joined #boinc

04:28 <synapt> dddh: higher end i7's will utilize it, but below that not too much

04:28 <synapt> lower-to-mid end i7's and the i5 series max out at 32 generally

04:28 <synapt> getting mobos that support it aren't generally too hard, just the CPU w/ Intels now

04:33 <dddh> synapt: I've seen "extreme" motherboards with 8 slots for ram with 2011 socket, it means they are _higher end_ ones

04:35 <synapt> Yeah but those I wouldn't call "Desktop PC's", lol

04:35 <synapt> those are generally people looking for a borderline home server system of some sort to do high-memory stuff

04:36 <dddh> but i7 means 6 cores with ht, that is almost 12 cpus

04:36 <dddh> linux users should feel more l33t

04:40 *** pppingme has quit IRC

04:41 *** pppingme has joined #boinc

05:00 *** yoyo[RKN] has quit IRC

05:11 <synapt> just means 12 threads :P

05:11 <synapt> also keep in mind most i7's are still only quad+HT

05:13 <dddh> 6 cores + ht

05:14 <dddh> synapt: http://ark.intel.com/products/63697/Intel-Core-i7-3930K-Processor-12M-Cache-up-to-3_80-GHz

05:14 <Romulus> <http://tinyurl.com/cnvuhqq> (at ark.intel.com)

05:14 <synapt> dddh: I said most, not all

05:14 <synapt> :P

05:14 <dddh> ok ;)

05:15 <synapt> that is literally the -only- 2nd gen i7 with 6 if I recall correctly

05:15 <synapt> well 2 if you count the 'Extreme'

05:16 <synapt> all the 3rd gens are only 4+HT as well I believe currently

05:19 <dddh> heh

05:20 <dddh> checked my pc stats - it doesn't have a coprocessor

05:20 <dddh> I wish I could use GPU instead of CPUs

05:21 <synapt> no nVidia?

05:21 <synapt> or no you said you do

05:21 <synapt> don't most BOINC projects utilize Cuda processing now?

05:21 <dddh> does it dlopen libcudart.so from ~boinc?

05:21 * synapt hasn't really run any for awhile now :/

05:21 <synapt> dunno

05:21 <synapt> really should start running some stuff again

05:22 <synapt> this box would be pretty good for it when I sleep

05:23 <dddh> I attached 5 computers, one of them is windows xp(my wife still uses it) with "NVIDIA GeForce GT 610 (1023MB) driver: 30623"

05:23 <dddh> boinc said it has a coprocessor

05:23 <dddh> but that is windows ;(

05:24 <dddh> probably I should examine source code or google

05:38 <dddh> synapt: seems like the problem with cuda was /dev/nvidia* permissions, in boinc chroot group "video" had different id :(

05:41 *** desti has joined #boinc

05:43 *** desti_T2 has quit IRC

05:45 *** gilbux has joined #boinc

05:59 *** Caterpillar has joined #boinc

06:02 *** Caterpillar has quit IRC

06:04 *** Caterpillar has joined #boinc

06:05 *** mapreri has joined #boinc

06:41 *** ntat has joined #boinc

06:41 <ntat> Hi

06:43 <ntat> Where can I find the value of Pi with very many places after the comma?

06:49 <Caterpillar> quantify "very much places"

06:49 <ntat> 100 or 1000

07:17 *** si has joined #boinc

07:23 <dddh> oh

07:24 <dddh> einstein.phys.uwm.edu reports my windows xp has "NVIDIA GeForce GT 610 (1023MB) driver: 30623" and my linux has "NVIDIA GeForce GTX 580 (133297663MB)"

07:25 <dddh> linux > windows

07:25 <dddh> ;D

07:26 *** yoyo[RKN] has joined #boinc

07:29 *** Edgeman2 is now known as Edgeman

07:53 *** mapreri has quit IRC

07:55 <ntat> dddh, great hardware:D

08:00 *** DexterLB has quit IRC

08:06 *** DexterLB has joined #boinc

08:08 *** mapreri has joined #boinc

08:08 *** mapreri has quit IRC

08:08 *** mapreri has joined #boinc

08:30 *** NudgeyNR|2 has quit IRC

08:31 <mapreri> help

08:33 <mapreri> Sorry i used /amsg by mistake

08:45 *** NudgeyNR has joined #boinc

08:53 *** yoyo[RKN] has quit IRC

08:54 *** OneMiner has quit IRC

09:19 <dddh> ntat: probably NVIDIA driver/sdk bug or linux boinc

09:19 <dddh> actually it has 1.5 gb of ram

09:20 <dddh> is it possible to have gtx 580 x 2 sli on linux with intel motherboard?

09:24 *** MarcoFe has joined #boinc

09:56 *** gilbux has quit IRC

10:01 *** si has quit IRC

10:02 *** yoyo[RKN] has joined #boinc

10:12 *** gilbux has joined #boinc

11:10 *** DexterLB has quit IRC

11:15 *** DexterLB has joined #boinc

11:25 *** yoyo[RKN] has quit IRC

11:39 *** ntat has quit IRC

12:23 *** yoyo[RKN] has joined #boinc

12:24 *** |Caterpillar| has joined #boinc

13:07 *** gilbux has quit IRC

13:13 <desti> http://www.stepstone.de/stellenangebote--Scientific-Programmer-specialized-in-high-performance-computing-Project-description-Hamburg-Deutsches-Klimarechenzentrum-GmbH--2318894-inline.html

13:13 <Romulus> <http://tinyurl.com/8age4f2> (at www.stepstone.de)

15:04 *** whynot has joined #boinc

15:28 *** Aeternus has joined #boinc

15:28 *** Aeternus has quit IRC

15:28 *** Aeternus has joined #boinc

15:56 *** mapreri has quit IRC

16:12 *** mapreri has joined #boinc

16:17 *** MTughan has quit IRC

16:17 *** MTughan_ has joined #boinc

16:18 *** MTughan_ is now known as MTughan

16:18 *** efc has joined #boinc

16:19 *** MarcoFe has quit IRC

16:20 *** mapreri has quit IRC

16:26 <Romulus> New news from boinc: OProject@Home launches

17:32 *** Aeternus has quit IRC

17:39 *** |Caterpillar| has quit IRC

18:03 *** freakazoid0223 has joined #boinc

18:05 *** whynot has quit IRC

18:26 *** efc has quit IRC

19:27 *** Caterpillar has quit IRC

20:00 *** zombie67 has joined #boinc

20:02 *** efc has joined #boinc

20:37 *** MTughan has quit IRC

20:37 *** MTughan_ has joined #boinc

21:12 *** yoyo[RKN] has quit IRC

22:46 *** efc has quit IRC

23:48 <dddh> I have more errors with optimized apps than I expected

23:48 <dddh> should I disable optimized apps?

23:49 <dddh> just spending my CPU time and boinc is waiting for me to complete them too!

23:51 <Tank_Master> make sure you have the appropriate app

23:57 <dddh> Tank_Master: all "SixTrack v444.01 (pni)" tasks from lhc@home have status "Error while computing"

Generated by irclog2html.py 2.4 by Marius Gedminas - find it at mg.pov.lt!