Douglas Bates <dmbates_at_gmail.com> wrote on Tue 24 Jan 2006 - 14:38:46 GMT: [snip]
More seriously, the approach to speeding up model fitting that has been most successful to date is to speed up the BLAS
(Basic Linear Algebra Subroutines), especially the Level-3 BLAS. The bulk of the computation in the Matrix package takes
place in either LAPACK (for dense matrices) or CHOLMOD (for sparse matrices) code, and those are based on calls to the
Level-1, -2, and -3 BLAS. The ATLAS package and K. Goto's BLAS are designed to obtain the highest level of performance
possible from the CPU on these routines. I think the easiest way of incorporating the power of the GPU into the model-
fitting process would be to port the BLAS to the GPU. I also imagine that someone somewhere has already started on that.
I haven't seen anything more recent about using GPUs for computation in R, but NVIDIA recently made available a beta of its
CUDA environment for general-purpose computation on its new 8800-series GPUs, and it appears to include a BLAS library.

I tried to include links in this message, but I got "Message rejected by filter rule match" from r-devel-owner when I did so.

Discussion Overview
Group: r-devel
Posted: Feb 20, '07 at 2:19a
Active: Feb 20, '07 at 2:19a
1 user in discussion: Rod Montgomery (1 post)
