I have put together a re/reading list. I felt like I needed to go over some introductory material again in recent history. Thinking about which books I think are important has been an adventure in itself. The story my reading list here follows seems to be: using computer hardware implies an operating system, and operating systems are written and operated using C, so we are stuck with enjoying its company. However, when our concerns lie in userland, we want something high level but still fairly fast, compatible with the native C alien, mostly dealing with memory automatically by default, and offering a dynamic object abstraction. This is where I part ways with many of my C++ compadres, in that I go to ANSI Common Lisp and its compilers, which are not llvm or gcc compilers and which handle C as an alien interface rather than mostly supersetting it like C++ does. In that circle-of-life kind of way, having gotten a high-level language, one thing to do with it is compose proofs useful to low-level systems, such as by using ACL2.

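To make "alien interface" concrete to myself, a minimal sketch, assuming SBCL's sb-alien (CFFI would read much the same): declare a libc routine as foreign and call it directly, with no C compiler or headers involved on the Lisp side.

    ;; Declare libc's getpid(2) as a foreign ("alien") routine; this defines
    ;; a Lisp function GETPID that makes a direct call into the C world.
    (sb-alien:define-alien-routine "getpid" sb-alien:int)

    ;; (getpid) => the running image's process id.
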
As it comes up I will choose a more specific meaning for "read all of openbsd's source" ;p. I do feel reading this is imperative (or whatever OS you are using to control the machines around you). Since I am not in general trying to create an operating system, I guess I seek to reason about high level structure rather than participate in openbsd's development per se.

Looking at the list, I am worried that my attention and loyalty to the Common Lisp Object System seem to go absolutely nowhere. ACL2 is mainly metacircularly defined in ACL2 and has its own lower-level, unsophisticated notions of von Neumann bottlenecks for state (so no Common Lisp classes). I'm still trying to reckon the deeper meaning in there.

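For contrast, a hypothetical CLOS sketch of the dynamic object abstraction I keep being loyal to: a class with a mutable slot and a generic function dispatched on it. ACL2's applicative logic admits nothing like this; its notion of state is the single-threaded object (stobj) instead.

    ;; A class with one mutable slot, plus a generic function that mutates it.
    (defclass counter ()
      ((count :initform 0 :accessor counter-count)))

    (defgeneric bump (c)
      (:documentation "Destructively increment C's count and return it."))

    (defmethod bump ((c counter))
      (incf (counter-count c)))

    ;; (bump (make-instance 'counter)) => 1, by mutating the slot in place,
    ;; which is exactly the kind of thing ACL2's logic does not admit.
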
Were I a decade younger, I wonder if I would see Clojure lisp in a better light. Its object system focuses on applicativity relative to Common Lisp's, though rather for performance reasons in its javascript transpilation. Its deeper meaning of applicativity is incompatible with ACL2's purpose.

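And a hypothetical sketch of what applicativity means on the ACL2 side: a side-effect-free append and the associativity theorem the prover establishes by induction, with no object state anywhere.

    ;; A purely applicative append.
    (defun app (x y)
      (if (endp x)
          y
          (cons (car x) (app (cdr x) y))))

    ;; ACL2 proves this by induction on x; there is no state to reason about.
    (defthm app-is-associative
      (equal (app (app x y) z)
             (app x (app y z))))
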
Fear arises that I need to mix in some LAPACK. I have seen a lot of Monte Carlo simulations residing in Matlab which were almost letter for letter an equivalent Fortran program, due to Matlab's convenience functions of all things. My intuition right now is to be as far away as possible from R/python/Matlab-like step-by-step running of premade, externally sourced, GPU-accelerated, embarrassingly parallel solutions. A corollary of that is for me to stay away from the usual embarrassingly parallel problems. But that leaves me needing to express the low-level and high-level mathematics myself. Implicitly, I am getting cold feet about just dragging in Gábor Melis' Google-award-winning work, which I think was associated in some way with Franz Inc. That's nice and lispy, but in essence it is just the same choosing of embarrassingly parallel problems with premade GPU accelerations and then throwing them into a GPU. I am not interested in writing new kernels for proprietary GPUs either, though I did this kind of thing in times long since. Argh, I probably need a book on Monte Carlo integration.

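To remind myself what "express the mathematics myself" looks like at its simplest, a hypothetical Common Lisp sketch: plain Monte Carlo integration over [0,1] as the sample mean of the integrand at uniform draws, with no premade kernel in sight.

    ;; Estimate the integral of F over [0,1] from N uniform samples.
    (defun mc-integrate (f n)
      (loop repeat n
            sum (funcall f (random 1.0d0)) into total
            finally (return (/ total n))))

    ;; The integral of x^2 over [0,1] is 1/3:
    ;; (mc-integrate (lambda (x) (* x x)) 1000000) => roughly 0.333
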
I have read lots of books (other than the half re-reads on my list). There is also lots of technical material I have studied, when I had to work with it, that I think is bad. For example, I regret ever knowing what I do about Nvidia cuda, Nvidia OpenCL, Intel cpu opencl, Intel fpga opencl, Intel fpga VHDL, and Intel omp or mpi. While proprietary "open standard" implementations are universally bad (I say having surveilled them; this is not a philosophical statement), that is not my sole determiner (mpi, for sort-of example). I also do not like llvm/opt stuff (as you can imagine from the previous list), but it is inevitably glued to openbsd as well, and it's not like gcc is a miracle, so I should bite my tongue about that. As proprietary things go, I think Ada Spark (/gnu gnat) is noteworthy, and I can't really tell why it seems, if anything, extra invisible right now. (Aside: I don't mean to imply that I think systemc is a good way to program anything; it's just that I have had more exposure to the Intel side of things.)