Julia support for eFEL, given the C++ core?
I have noticed that eFEL is super fast compared to pure-Python feature extraction libraries.
Do you think it would be possible for a hacker to add Julia support with a C++ wrapper?
Is this the file that one would need to wrap?
Yes, it should be possible to create wrappers for other languages.
Here is the python wrapper if you need inspiration: https://github.com/BlueBrain/eFEL/blob/master/efel/cppcore/cppcore.cpp
The cfeature.h and efel.h provide the interfaces.
Beware of one thing though: we're slowly moving towards pure-Python features (https://github.com/BlueBrain/eFEL/tree/master/efel/pyfeatures). For now these won't be exposed in the C++ interface.
@wvangeit May I ask why you're moving to pyfeatures? It seems strange to move away from C++; will this impact the performance of the library?
Well, there are several reasons why we'd move to Python: mostly extensibility, maintainability and flexibility. First of all, a general restructuring of eFEL is long overdue; we're a bit stuck in a structure that has existed since 2010. One common issue is that we need more flexibility in how features are combined or defined. Python allows one to easily change features, even at runtime, and during the rewrite we can also think about how to make these even easier for users. Another common problem is that neuroscientists have difficulty contributing features because the code is C++.

I'm not denying that we will probably take a hit in efficiency, but we'll try our best to minimize this. If we make smart use of numpy, scipy, etc., that should be possible. My plan is definitely to put some benchmark plan in place so that we can see how the speed of feature calculation is changing over time.
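As a rough illustration of the kind of numpy vectorization that could keep pure-Python features fast, here is a sketch of a threshold-crossing spike count (the function name and the default threshold are made up for this example; this is not how eFEL implements it):

```python
import numpy as np

def spike_count(voltage, threshold=-20.0):
    """Count upward threshold crossings in a voltage trace.

    A spike is counted each time the trace goes from below the
    threshold to at-or-above it.  The loop-free numpy version scans
    the whole trace with a single vectorized comparison instead of
    a Python-level for loop.
    """
    v = np.asarray(voltage, dtype=float)
    above = v >= threshold
    # rising edges: sample i-1 below threshold, sample i at/above it
    crossings = np.flatnonzero(~above[:-1] & above[1:])
    return crossings.size
```

For instance, a trace that rises above -20 mV twice (`spike_count([-70, -70, 10, -70, 20, 30, -70])`) yields 2.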
On a lark, I started a conversion to pure Python at one point, and other than being time-consuming it wasn't too bad. On the performance side, judicious use of numpy was giving me the same speeds, IIRC (this was 3-4 years ago, so I don't know for sure).
My gumption to finish it, at the time, was held up by the question of whether or not to support all the LibV? features; perhaps a pruning could happen before a pure-Python version, to reduce the number of functions to port?
> My plan is definitely to put some benchmark plan in place so that we can see how the speed of feature calculation is changing over time.
I highly recommend https://asv.readthedocs.io/en/stable/ for tracking that.
I agree with @mgeplf that Airspeed Velocity could help. Also, I have achieved C-speed code in Python by combining numba with numpy. It's a bit tedious, and I doubt people submitting new features would use numba on a first pass, but if the core features used numba jit/vectorize, I believe significant performance degradation could be avoided.
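A minimal sketch of the numba pattern mentioned above (the feature and its name are invented for illustration; the `njit` fallback just lets the snippet run even where numba isn't installed):

```python
import numpy as np

try:
    from numba import njit  # JIT-compile the decorated function when available
except ImportError:
    def njit(func):         # graceful fallback: plain Python, same results
        return func

@njit
def max_upstroke(voltage, dt):
    """Maximum dV/dt of a trace, written as an explicit loop.

    Loops like this are slow in CPython but compile to C-like speed
    under numba's @njit, so the code stays readable for contributors
    while the hot path stays fast.
    """
    best = (voltage[1] - voltage[0]) / dt
    for i in range(2, voltage.shape[0]):
        dv = (voltage[i] - voltage[i - 1]) / dt
        if dv > best:
            best = dv
    return best
```

Contributors could submit the plain-loop version first and the `@njit` decorator could be added later, which matches the "not on the first pass" concern above.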
As to my initial question about Julia: it might actually be faster to write a command-line API for eFEL and call it from Julia using its built-in access to the OS/shell, instead of using PyCall. The problem is that PyCall calls can't always be pre-compiled, so PyCall might slow down otherwise fast Julia code. I was thinking NeuronUnit might be accessible from Julia via shell calls too.
For a CLI version of eFEL, I wonder whether voltage and current waveforms would have to be text files, or whether they could be encoded as Bash strings? Reading and writing files to disk could also cause slowdowns.
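One way to sidestep temp files entirely would be to pipe the traces over stdin as JSON. A hypothetical sketch (the script name, payload layout, and the placeholder feature are all assumptions, not an existing eFEL interface):

```python
"""Hypothetical 'efel_cli.py' sketch: traces arrive as JSON on stdin,
results leave as JSON on stdout, so no waveform files touch the disk."""
import json
import sys


def extract_features(payload):
    """Placeholder standing in for a real eFEL call: report the peak
    voltage of each trace in the payload."""
    return [{"maximum_voltage": max(trace["V"])} for trace in payload["traces"]]


def main(stdin=sys.stdin, stdout=sys.stdout):
    json.dump(extract_features(json.load(stdin)), stdout)

# A Julia caller could then pipe data in with something like
# run(pipeline(`python efel_cli.py`, stdin=..., stdout=...)),
# with no temporary files involved.
```

Whether JSON serialization of long waveforms is actually cheaper than writing a binary file to disk would need benchmarking, of course.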