Australis
These are our stories of sailing across the Pacific, to inspire your own adventures. You may even find some useful tidbits of technical information to help you avoid our mistakes.
Monday, January 12, 2015
End of the Line
Monday, April 29, 2013
Pyopencl (GPU) vs Numpy (CPU) Performance Comparison
So numpy must be well optimized: as the output below shows, the GPU only pulls ahead of the CPU somewhere between 10^6 and 10^7 elements, with kernel-launch and copy overhead dominating below that. I tried optimizing my GPU kernel by minimizing array index lookups and the like, but nothing I came up with made a significant difference for this simple kernel. Both the CPU and GPU gave exactly the same answer, so that's nice.
Here's the output with execution times...
GPU execution time: 0.0115399  CPU execution time: 2.7895e-05  CPU/GPU speed ratio for 10^0 kernel executions: 0.241726%  Difference between the 2 answers: 0.0
GPU execution time: 0.0115771  CPU execution time: 2.19345e-05  CPU/GPU speed ratio for 10^1 kernel executions: 0.189464%  Difference between the 2 answers: 0.0
GPU execution time: 0.0116088  CPU execution time: 2.19345e-05  CPU/GPU speed ratio for 10^2 kernel executions: 0.188947%  Difference between the 2 answers: 0.0
GPU execution time: 0.0115681  CPU execution time: 2.59876e-05  CPU/GPU speed ratio for 10^3 kernel executions: 0.22465%  Difference between the 2 answers: 0.0
GPU execution time: 0.011663  CPU execution time: 7.70092e-05  CPU/GPU speed ratio for 10^4 kernel executions: 0.660289%  Difference between the 2 answers: 0.0
GPU execution time: 0.023535  CPU execution time: 0.000612974  CPU/GPU speed ratio for 10^5 kernel executions: 2.60452%  Difference between the 2 answers: 0.0
GPU execution time: 0.0234549  CPU execution time: 0.0182121  CPU/GPU speed ratio for 10^6 kernel executions: 77.6472%  Difference between the 2 answers: 0.0
GPU execution time: 0.0668991  CPU execution time: 0.240016  CPU/GPU speed ratio for 10^7 kernel executions: 358.773%  Difference between the 2 answers: 0.0
GPU execution time: 0.567215  CPU execution time: 2.24371  CPU/GPU speed ratio for 10^8 kernel executions: 395.566%  Difference between the 2 answers: 0.0

With cgminer running at -I 9 on all the GPUs, the GPU's speed advantage at large N doesn't budge significantly, so pyopencl is pretty effective at interrupting cgminer and prioritizing its threads. Here's the same run with cgminer mining in the background:

GPU execution time: 0.179582  CPU execution time: 2.7895e-05  CPU/GPU speed ratio for 10^0 kernel executions: 0.0155333%  Difference between the 2 answers: 0.0
GPU execution time: 0.263615  CPU execution time: 2.31266e-05  CPU/GPU speed ratio for 10^1 kernel executions: 0.00877287%  Difference between the 2 answers: 0.0
GPU execution time: 0.263666  CPU execution time: 2.40803e-05  CPU/GPU speed ratio for 10^2 kernel executions: 0.00913287%  Difference between the 2 answers: 0.0
GPU execution time: 0.011616  CPU execution time: 2.81334e-05  CPU/GPU speed ratio for 10^3 kernel executions: 0.242195%  Difference between the 2 answers: 0.0
GPU execution time: 0.0116951  CPU execution time: 7.60555e-05  CPU/GPU speed ratio for 10^4 kernel executions: 0.650317%  Difference between the 2 answers: 0.0
GPU execution time: 0.023536  CPU execution time: 0.000617981  CPU/GPU speed ratio for 10^5 kernel executions: 2.62569%  Difference between the 2 answers: 0.0
GPU execution time: 0.0236619  CPU execution time: 0.0189419  CPU/GPU speed ratio for 10^6 kernel executions: 80.0524%  Difference between the 2 answers: 0.0
GPU execution time: 0.0630081  CPU execution time: 0.230431  CPU/GPU speed ratio for 10^7 kernel executions: 365.717%  Difference between the 2 answers: 0.0
GPU execution time: 0.82972  CPU execution time: 2.4491  CPU/GPU speed ratio for 10^8 kernel executions: 295.172%  Difference between the 2 answers: 0.0

Installation was a bit tricky. You have to make sure setuptools is overridden by distribute, but Ubuntu 12.04 makes this easy.
Thanks to kermit666 on SO for this simple approach to getting virtualenvwrapper and numpy up and running quickly on a fresh Ubuntu install:

#!/usr/bin/env sh
sudo apt-get install python-pip python-dev
sudo pip install virtualenv virtualenvwrapper
echo 'export PROJECT_HOME="$HOME/src"' >> $HOME/.bashrc
echo 'export WORKON_HOME="$HOME/.virtualenvs"' >> $HOME/.bashrc
echo 'source /usr/local/bin/virtualenvwrapper.sh' >> $HOME/.bashrc
sudo apt-get install -y gfortran g++
# sudo apt-get remove -y --purge python-setuptools
# start a new virtualenv project
mkproject parallel
pip install --upgrade distribute
pip install mako numpy pyopencl
Here's the Python code that ran the kernel and measured execution time. It's based on the official pyopencl example:
import pyopencl as cl
import numpy
import numpy.linalg as la
import time

# one context and command queue shared across all the runs
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

prg = cl.Program(ctx, """
    __kernel void sum(__global const float *a,
                      __global const float *b,
                      __global float *c)
    {
      int gid = get_global_id(0);
      float a2 = a[gid];
      float b2 = b[gid];
      c[gid] = a2 * a2 + b2 * b2;
    }
    """).build()

for M in range(9):  # N = 10^0 through 10^8
    N = 10 ** M
    a = numpy.random.rand(N).astype(numpy.float32)
    b = numpy.random.rand(N).astype(numpy.float32)

    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    dest_buf = cl.Buffer(ctx, mf.WRITE_ONLY, b.nbytes)

    # the kernel launch is asynchronous; the blocking copy below waits
    # for it to finish, so gpu_t covers kernel execution plus copy-back
    prg.sum(queue, a.shape, None, a_buf, b_buf, dest_buf)
    gpu_ans = numpy.empty_like(a)
    gpu_t0 = time.time()
    cl.enqueue_copy(queue, gpu_ans, dest_buf)
    gpu_t = time.time() - gpu_t0
    print 'GPU execution time: %g' % gpu_t

    cpu_t0 = time.time()
    cpu_ans = a * a + b * b
    cpu_t = time.time() - cpu_t0
    print 'CPU execution time: %g' % cpu_t

    print 'CPU/GPU speed ratio for 10^%d kernel executions: %g%%' % (M, 100.0 * cpu_t / gpu_t)
    print 'Difference between the 2 answers:'
    print la.norm(cpu_ans - gpu_ans)
Monday, April 1, 2013
Bitcoin's Easter Sunday
Saturday, June 9, 2012
The 2nd Great Depression is Here -- and that's Good News for the Smart
I love this FT series on the economics and politics of deflation and recession.
http://ftalphaville.ft.com/blog/2012/06/08/1030801/the-end-of-artificial-scarcity/
Near as I can tell, FT is saying "don't worry, be happy." Fortunately smart people worry, and we are a social animal. Pack social dynamics are ushering in an era when national governments are no longer relevant to our material happiness. Barring WWIII, smart people will continue to band together into clubs, clans, and co-ops. Just wander around Portland for a day. They barter within a web of trust, set up insurance pools, loan each other money, and even mint their own fiat currencies (bitcoin) or fund grand scientific and space adventures. These pockets of stability and hope will thrive while the rest of us decide which "Like" buttons to click.
Thursday, April 26, 2012
Gardening
Thursday, April 19, 2012
Irregularly Sampled Time-Series
But markets aren't physical systems that can be probed with pure sinusoid inputs. Instead, it seems to me, your best bet is to think in terms of volumetric spans rather than time spans, or even better, just in terms of transaction counts. What matters is how far the price moved between trades, not between days, hours, or microseconds. How many different people, groups of people, or computer algorithms decided to adjust their price, and by how much? That's a fair gauge of the "temperature" or "pressure" of the thermodynamics of the market. Of course, with electronic exchanges facilitating HFT, these independent actors are getting parsed into the tiniest little chunks, so volumetric measures may be even better. That helps get past the fractal nature of the markets: in the end, even a fractal has a volume, at least its projection into 3-space does. Hopefully the same concept makes sense for the markets, because that's the path we're crawling down with bitcrawl.
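To make this concrete, here's a minimal sketch of resampling a raw trade stream into fixed-volume bars. It's an illustration, not bitcrawl code: the (timestamp, price, volume) tuple format, the volume_bars name, and the bar_volume parameter are all assumptions. Counting 1 per trade instead of accumulating volume turns the same loop into transaction-count ("tick") bars.

# Hypothetical sketch: chop an irregular trade stream into bars that
# each hold roughly bar_volume units of traded volume, so that bar
# boundaries track market activity instead of the clock.
# trades: iterable of (timestamp, price, volume) tuples in time order.
def volume_bars(trades, bar_volume):
    bar = None
    for t, price, vol in trades:
        if bar is None:
            bar = {'t_open': t, 'open': price,
                   'high': price, 'low': price, 'vol': 0.0}
        bar['high'] = max(bar['high'], price)
        bar['low'] = min(bar['low'], price)
        bar['vol'] += vol
        if bar['vol'] >= bar_volume:
            # a big trade can overshoot the target; a fuller version
            # would split it across the bar boundary
            yield (bar['t_open'], t, bar['open'], bar['high'],
                   bar['low'], price, bar['vol'])
            bar = None

# e.g. one bar for every 100 units traded:
# for bar in volume_bars(trades, 100.0): print bar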
But time is money, so eventually we'll have to do the conversion back to real time, based on some average trade frequency or volume rate for a given instrument. My guess is we'll discover a lot of hidden dynamics lurking in volumetric and transaction-count space. I hope I can find a market to give me this level of detail. Bitfloor is all I've got right now, without succumbing to the price-gouging of Bloomberg, other financial services, or the public exchanges.
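As for that conversion back to real time, one rough approach (again hypothetical, same assumptions as the sketch above) is to assign each bar an equivalent duration from the instrument's long-run average volume rate:

# Hypothetical helper: at the instrument's average volume rate,
# a bar of bar_volume units corresponds to this many seconds.
def equivalent_duration(bar_volume, total_volume, total_seconds):
    avg_rate = total_volume / float(total_seconds)  # units traded per second
    return bar_volume / avg_rate

# e.g. 100-unit bars on a market trading 5000 units/day -> ~1728 s per bar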