Robust experiment design for BMI

As an engineer, my instinctive concern is performance: does a device or technology work as expected? How effective is it? What are the performance bottlenecks, and how do we address them?

This last point requires a good understanding of the system in question, which often takes cycles of hypothesis testing to build. In BMI this is especially important: understanding the underlying neural mechanisms of control and adaptation leads to drastic performance improvements. At the same time, BMI is also the best way to probe those underlying mechanisms.

This circular relationship, however, is quite troublesome when designing BMI experiments:

  • If we want to improve the performance of a wheelchair-driving BMI, for example, we can devise a new control scheme mapping neural activity to the control space (wheelchair velocity and direction) and track over sessions whether navigation time/distance decreases.
  • If we want to see how a monkey adapts to a BMI decoder while driving a wheelchair, for example, we can pick some control scheme and watch how the monkey’s neural activity changes as it becomes more proficient at the task.

In the first scenario, we are implicitly assuming a certain underlying mechanism and designing the new control scheme around it; the approach is purely performance-oriented. But can we attribute the resulting performance improvements to the improved decoder design? It certainly seems forced when the common sample size of primate BMI experiments is 2 to 3. How do we account for different learning/adaptation styles? Even if a new decoder design improves performance for all monkey subjects, can we say it is truly a better design, regardless of the underlying learning mechanisms?

In the second scenario, how do we decide what control scheme to use? Do we randomly assign weights to the recorded neurons? Derive the control scheme based on …? For decades we simply recorded neurons from the motor, premotor, and somatosensory cortices and assumed that they would adapt. Performance improved with the number of neurons recorded and the type of filter applied to the recordings.

Under different assumed learning models, experiments designed to test the same decoder should probably differ as well. A control experiment in which weights are randomly assigned to neurons might not be sufficient when assuming the intrinsic-manifold model of learning.

——————
Incomplete yet…


MATLAB: Flipping image horizontally

Sometimes I want the x-axis to go from high to low values.

The first thing that came to mind was to fliplr the data, plot the result, and then flip the XTickLabel of the resulting axes. But any asymmetry can make the data and x-labels line up incorrectly.

Turns out there is an elegant solution:
set(gca, 'xdir', 'reverse')

YAY!


vim-latex

On Ubuntu, just installing vim and vim-latexsuite is not enough to enable all the vim-latex functionality (for example, pressing “\ll” to compile). On Debian-based systems, a second stage of symbolic linking has to be performed with the vim addon script, as described here:

http://stackoverflow.com/questions/2664680/vim-latex-config
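On Debian/Ubuntu that second stage is handled by vim-addon-manager; a command along these lines should do it (the addon name is my assumption based on the vim-latexsuite packaging; `vim-addons list` shows the exact name on your release):

```shell
# Register latex-suite for the current user via vim-addon-manager
# (addon name assumed; check with `vim-addons list` first).
vim-addons install latex-suite
```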


Ubuntu + Nvidia: 7 years later, they still don’t play nicely.

It’s been roughly 7 years since I started using Linux. My first distro was Ubuntu, and I remember spending two weeks of my summer before college trying to get it to work on my Dell Latitude. The problems back then were my Broadcom wireless card and Nvidia graphics card.

Fast-forward to now, I’m again using Ubuntu for my lab computer. Internet/wireless is no longer a problem (good job Canonical!). Nvidia remains a huge problem.

Specs:
Ubuntu 14.04, Nvidia GTX-760
Using the proprietary nvidia-331 driver, I was able to get dual monitors to work. However, a few minor (not affecting MATLAB usage for work) but extremely annoying problems were present:
1) I was not able to log out. Clicking the log-out button resulted in a black screen.
2) Upon force-rebooting after this black screen, Ubuntu would complain about an internal error: “soft lockup – CPU#0 for 22s!”. There have been some bug reports about this, but my kernel seemed to be stable.
3) I also could not switch to any of tty1-6. Ctrl-Alt-F1 through F6 just gave me a black screen similar to what happens after I log out. Recognizing the potential similarity, I kept hitting Ctrl-Alt-F7 until X came back up. And voila, the same “soft lockup” error occurred.
4) In dmesg, I would also see Nvidia-related errors, such as:


[ 22.493961] nvidia: module license 'NVIDIA' taints kernel.
[ 22.498143] nvidia: module verification failed: signature and/or required key missing - tainting kernel
[ 664.524208] init: nvidia-persistenced main process (23947) terminated with status 1

and


NVRM: Your system is not currently configured to drive a VGA console
Jun 25 23:39:52 localhost kernel: [ 19.420832] NVRM: on the primary VGA device. The NVIDIA Linux graphics driver
Jun 25 23:39:52 localhost kernel: [ 19.420834] NVRM: requires the use of a text-mode VGA console. Use of other console
Jun 25 23:39:52 localhost kernel: [ 19.420835] NVRM: drivers including, but not limited to, vesafb, may result in
Jun 25 23:39:52 localhost kernel: [ 19.420836] NVRM: corruption and stability problems, and is not supported.

It seems Nvidia is making my computer sad. I am not exactly sure why this problem keeps happening, but switching to the open-source Nouveau driver solved it. I have yet to see a big performance difference (but then again, I mainly do non-graphics-intensive processing in Ubuntu on this box).
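For reference, the switch itself was roughly the following (a sketch for Ubuntu 14.04; the exact package names are my assumption, and Nouveau normally ships with X by default, so purging the proprietary driver is usually enough):

```shell
# Remove the proprietary driver and fall back to Nouveau
# (package names assumed for Ubuntu 14.04).
sudo apt-get purge 'nvidia-331*' nvidia-settings
sudo apt-get install --reinstall xserver-xorg-video-nouveau
sudo reboot
```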
———————–
After Nouveau, I can now log out, boot to tty, and not lock up at all!
My dual-monitor setup had one remaining problem: terrible resolution on one monitor. Ubuntu’s native display settings would not detect any resolution above 800×600 for my second monitor.

Arandr does not detect the correct resolution either. Xrandr to the rescue: the following gets my dual monitors working:

cvt 1680 1050

returns the following screen mode setup

# 1680x1050 59.95 Hz (CVT 1.76MA) hsync: 65.29 kHz; pclk: 146.25 MHz
Modeline "1680x1050_60.00" 146.25 1680 1784 1960 2240 1050 1053 1059 1089 -hsync +vsync

Then,

# Create the new mode
xrandr --newmode "1680x1050_59.95" 146.25 1680 1784 1960 2240 1050 1053 1059 1089 -hsync +vsync
# Add this new mode to my desired monitor, named "DVI-I-1", found through "xrandr -q"
xrandr --addmode DVI-I-1 "1680x1050_59.95"
# Use this new resolution
xrandr --output DVI-I-1 --mode 1680x1050

However, this does not persist after logging out, and arandr still does not detect the changes, which is strange. Google suggests two solutions:
1) Put the xrandr commands above into “/etc/rc.local”, the startup script; or
2) Edit “/etc/X11/xorg.conf”, which I absolutely despise.
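As a variant of option 1 that avoids touching system files, the same commands can go into a per-user ~/.xprofile, which most display managers source at login (whether yours does is an assumption worth checking):

```shell
# Write the xrandr setup into ~/.xprofile so it runs at every login
# (assumes the display manager sources ~/.xprofile).
cat > "$HOME/.xprofile" <<'EOF'
#!/bin/sh
xrandr --newmode "1680x1050_59.95" 146.25 1680 1784 1960 2240 1050 1053 1059 1089 -hsync +vsync
xrandr --addmode DVI-I-1 "1680x1050_59.95"
xrandr --output DVI-I-1 --mode 1680x1050
EOF
chmod +x "$HOME/.xprofile"
```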

Turns out my xorg.conf file was generated by nvidia-settings, and somehow only included one monitor. Upon deleting the file and rebooting, arandr correctly detects my monitor setup.


MATLAB: How to use histc() correctly

While binning neural spikes into bins of fixed time width, I kept getting a column of 0’s in the last bin. This was clearly a coding error: there is no way that hundreds of neurons simply decide not to fire during the last bin of every time interval I analyze, regardless of the bin size.

The culprit is the behavior of histc(). According to MathWorks:

bincounts = histc(x,binranges) counts the number of values in x that are within each specified bin range. The input, binranges, determines the endpoints for each bin. The output, bincounts, contains the number of elements from x in each bin.

However, the following gives:

>> data=[1 1.5 2 3 4 4.5 5 6 7 7 7];
>> length(data)

ans =

11

>> histc(data, [1:1:5])

ans =

2 1 1 2 1

Only 7 of the 11 elements are actually counted by histc. Scrolling down to histc’s input-argument docs, we see the following:

For example, if binranges equals the vector [0,5,10,13], then histc creates four bins. The first bin includes values greater than or equal to 0 and strictly less than 5. The second bin includes values greater than or equal to 5 and less than 10, and so on. The last bin contains the scalar value 13.

So the last bin simply behaves differently from all the other bins, which makes no sense to me. The correct way to bin the example data into [1:1:5] is then:


>> histc(data, [1:1:5, inf])

ans =

2 1 1 2 5 0
>> ans(1:end-1)

ans =

2 1 1 2 5

Insidious!


ArchLinux – Enable wireless on boot

One of the nagging problems with my Arch installation, which I had been too lazy to figure out until now, is that on every reboot I have to enable my wireless (turn WiFi on) by pressing Fn+F5 (on my Lenovo Y410P). This is equivalent to executing “rfkill unblock all”.

The problem, according to this, is a kernel bug since v3.9. A quick fix is:

systemctl enable rfkill-unblock@wlan.service
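Note that rfkill-unblock@.service is not shipped by default; the template unit is along these lines (adapted from the Arch wiki pattern; the path and the rfkill binary location are assumptions to verify on your system):

```ini
# /etc/systemd/system/rfkill-unblock@.service  (path assumed)
[Unit]
Description=RFKill Unblock %I

[Service]
Type=oneshot
ExecStart=/usr/bin/rfkill unblock %I

[Install]
WantedBy=multi-user.target
```

With this in place, the enable command above instantiates the template with “wlan” as %I, so “rfkill unblock wlan” runs at every boot.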


MATLAB (Linux 64-bit) R2012b-R2014a BLAS loading error (affects “bar” and “hist” functions as well)

MATLAB on Linux is usually pretty robust, despite the occasional minor rendering issue. This one is very insidious and has pretty far-reaching consequences.

I installed MATLAB R2014a today. Running a script that previously worked perfectly in R2011b gives me an error, which I traced to the built-in function “bar.m”, used to make bar plots. Everything is fine if the input is a one-dimensional vector. However, executing the example code for “bar graph of a 2D array”:

c = load('count.dat');
Y = c(1:6,:);
figure;
bar(Y);

gives a plot with a 1×1 blue block with its bottom-left corner at the origin, no matter how far you zoom out. No error was thrown, however.

Rebooting magically made this work again. Continuing with my work later, I ran into an “out of memory” error, which was not surprising since I was operating on a very large file. Afterwards, however, the bar command regressed to the MATLAB blue block of death! After roughly 5 reboots, the problem persisted (no magic this time).

I then tried to use “hist” to make my plots instead. This time, I got the following error:

Error using *
BLAS loading error:
dlopen: cannot load any more object with static TLS

This might be behind my bar problem. Searching for this error led me to this StackOverflow post, which in turn leads to this MathWorks workaround. The solution is to download a recompiled libiomp5.so and replace the one in the MATLAB installation directory. With this workaround, hist works, and so does bar.
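The replacement itself is just a file swap; a sketch, assuming MATLAB is installed under /usr/local/MATLAB/R2014a and the patched library was downloaded to ~/Downloads (both paths are assumptions for your setup):

```shell
# Back up the shipped libiomp5.so and drop in the recompiled one
# (MATLAB root and download location assumed).
MATLAB_ROOT=/usr/local/MATLAB/R2014a
cd "$MATLAB_ROOT/sys/os/glnxa64"
sudo mv libiomp5.so libiomp5.so.orig
sudo cp ~/Downloads/libiomp5.so .
```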

The insidious thing is that, while the official bug report presents this as a linear-algebra operation error, its root is actually in the population of the “dynamic thread vector” (DTV), as explained in the StackOverflow post. It vaguely makes sense to me why this would affect “bar” without throwing errors. I am fortunate to have tried hist and run into the BLAS loading error.
