# Background

For a while now, a project of mine has needed to perform a large-scale matrix multiplication. Here, large-scale means multiplying a 2000x2000 square matrix by another matrix containing labels. A simpler example is projecting a matrix containing a set of class features into a 2-d Cartesian space for easy visualization of a problem. Although it's not what is currently in use, this projection was considered a solution for CuZero, one of the research projects that I worked on.

Two matrices are involved:

- *S*, an *N x C* matrix, where *N* is the number of samples in the test and *C* is the number of classes; each row in the matrix is the score of a single sample for all classes
- *P*, a *C x D* matrix, where *C* is still the class count and *D* is the dimensionality of the new visualization space; for simplicity, this example sets *D = 2* for a 2-d visualization (below)
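To make the shapes concrete, the whole projection is a single product, mapP = S * P, taking N x C scores to N x D coordinates. A minimal numpy sketch of the same idea (illustrative only; the original example uses Matlab, and the random values here are stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

# P: per-class 2-d coordinates (C=3 classes, D=2), same values as the Matlab example
P = np.array([[-1.0, -1.0],
              [ 0.0,  1.0],
              [ 1.0,  0.0]])

# S: N x C score matrix; boost one class per block of rows so samples cluster
S = rng.random((99, 3)) * 0.25
for c in range(3):
    S[c*33:(c+1)*33, c] = S[c*33:(c+1)*33, c] * 0.5 + 0.5

# the whole projection is one matrix multiply: (N x C) @ (C x D) -> (N x D)
mapP = S @ P
print(mapP.shape)  # (99, 2)
```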

While this code could be (and currently is) written in PHP, there is a lot of wasted memory and retrieval time in getting all the columns/rows from the database into the script. Additionally, there are wasted cycles from the matrix sparsity that results from a similarity matrix, as defined by this top-k optimization post. So, with this as the backdrop, let's dig into the data and procedure.

# Motivating Example

Okay, to better illustrate the problem, synthesized data is provided. Matlab was used to generate it, but assume that you have similar data for any problem. For example, say that you took a poll about how much people liked three sports: baseball, basketball, football. Now, you want to visualize how the population that you polled varies its preferences between the three sports in an intuitive 2-d graphic. The code to make such an example is below and also attached as a set of CSV files; don't worry about the 0.5 and 0.25 scalars on these equations, they're just there to make the random data naturally cluster into three groups.

```matlab
% initialize our projected coordinates
P = [-1 -1; 0 1; 1 0];
% plot the positions (c1=red, c2=green, c3=blue)
colorSets = [1,0,0;0,1,0;0,0,1];
subplot(1,2,1), scatter(P(:,1),P(:,2),70,colorSets,'x');
axis([-1 1 -1 1]); grid on; title('Initial Positions');
```

```matlab
% initialize our set of samples
S = rand(99,3)*0.25;
for idxClass=1:3
    idxRange = [(idxClass-1)*33+1:idxClass*33];             % helper for range
    S(idxRange,idxClass) = S(idxRange,idxClass)*0.5 + 0.5;  % update this column
end
% perform the mapping with a simple matrix multiplication
mapP = S*P;
% plot the mapped positions
subplot(1,2,2), hold on;
for idxClass=1:3
    idxRange = [(idxClass-1)*33+1:idxClass*33];             % helper for range
    scatter(mapP(idxRange,1), mapP(idxRange,2), 40, colorSets(idxClass,:), 'o', 'filled');
end
hold off; axis([-1 1 -1 1]); grid on; title('Projected Features');
```

Here's the resulting plot, with the original class positions plotted as colored x's and the mapped samples plotted as filled circles in their class's color. This operation is trivial to do in this environment or with a low dimensionality, and doing it in SQL isn't so bad either, thanks to its relational indexing nature.

## mySQL Implementation

Now let's move the focus to mySQL. Assume there are two tables: *features*, containing the class features of the individual samples, and *projections*, containing the visualization coordinates of the classes. Although the example data above is simple, we must design a schema that is generic for an unlimited number of classes. For this reason, we design the *features* table with three columns: id_sample, id_class, and score; the *projections* table also has three columns: id_class, dimension, position. Once your data is flattened to this type of structure, it is fairly trivial to execute a matrix multiply, thanks to mySQL's relational column indexing.

```sql
SELECT f1.id_sample, p1.dimension, SUM(f1.score*p1.position) AS position
FROM projections AS p1
LEFT JOIN features AS f1 ON f1.id_class = p1.id_class
GROUP BY p1.dimension, f1.id_sample
```
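To sanity-check the query, the same schema and join can be run through Python's sqlite3 module, whose SQL is close enough to mySQL for this statement. The tiny feature and projection values here are made up for illustration, including one deliberately missing (sample, class) row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE features (id_sample INTEGER, id_class INTEGER, score REAL)")
cur.execute("CREATE TABLE projections (id_class INTEGER, dimension INTEGER, position REAL)")

# tiny example: 2 samples x 3 classes (one sparse entry omitted), 3 classes x 2 dims
features = [(1, 1, 0.6), (1, 2, 0.1), (1, 3, 0.2),
            (2, 1, 0.1), (2, 3, 0.7)]          # sample 2 has no class-2 row
projections = [(1, 1, -1.0), (1, 2, -1.0),
               (2, 1,  0.0), (2, 2,  1.0),
               (3, 1,  1.0), (3, 2,  0.0)]
cur.executemany("INSERT INTO features VALUES (?,?,?)", features)
cur.executemany("INSERT INTO projections VALUES (?,?,?)", projections)

# the matrix multiply from the post: missing (sample, class) rows are simply skipped
rows = cur.execute("""
    SELECT f1.id_sample, p1.dimension, SUM(f1.score*p1.position) AS position
    FROM projections AS p1
    LEFT JOIN features AS f1 ON f1.id_class = p1.id_class
    GROUP BY p1.dimension, f1.id_sample
    ORDER BY f1.id_sample, p1.dimension""").fetchall()
result = {(s, d): pos for s, d, pos in rows}
# sample 1, x coordinate: 0.6*-1 + 0.1*0 + 0.2*1 = -0.4
print(round(result[(1, 1)], 6))
```

Each (id_sample, dimension) group in the output is one cell of the N x D result matrix.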

That's it. Surely all your time above wasn't a waste, right? Well, not exactly. There are several perks we get for free just by using the SQL code. First, we can skip sparse rows/columns: if a sample in *features* doesn't have a row for a given class, the JOIN simply skips it. We're using a left join, so the reverse case isn't necessarily handled. Specifically, all classes that exist in the *features* table really should have a valid row entry in the *projections* table as well. Don't believe me? You can download the SQL or raw data files in CSV format to try it on your own.

# A Practical Example

If you've read this far, you deserve to be rewarded. As mentioned, the reason for this whole exercise is to do fast matrix multiplications online. I combined this technique with a very smart method for graph transduction developed by a peer, described in this presentation and through this work on cell identification and other extensions. I employed it on a subset of 2,000 flickr images crawled sometime in early 2009 using geo-queries and the tags "Columbia University". First, please try out the demo below, written with a bit of jQuery, mySQL, and PHP and using fused grid-color moments, gabor texture, and edge direction histogram similarity. You can get some interesting results by labeling just a few images of buildings with columns or outdoor scenes; it should be noted that the contribution of this process is the low label count required.

- Change the number of images currently viewed (ordered by decreasing relevance)
- Modify the number of k-nearest neighbors (discussed below) and see the effect on result quality
- Click '**random**' to randomly jump around the set and to know that the system isn't cheating
- Label an image by clicking (once for positive, twice for negative, three times for unlabeled); shift-click to view the original image from flickr
- Click '**rerank**' when you're ready to update the model
- Click '**random**' to get more images or '**clear**' to restart the process


## Extensions For Approach

It's outside the scope of this page to discuss the technical aspects of this algorithm, so please don't ask for the complete code; it's all in the paper and presentation, and more importantly, you can probably find it on the project page at the bottom. There are a few problems that may or may not be obvious from this demo and the presentation.

- Since we're now dealing with a weight matrix derived from k-NN (again, see the prior discussion), it's important to use just the top-k members. A small k, generally 2-5% of the total database size (i.e. 40-100 nearest neighbors here), is good enough to populate your weight matrix.
- Don't forget to make the matrix symmetric. Fortunately, this can be done easily in SQL by first creating a temporary table and updating as necessary.
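Both steps can be sketched outside of SQL to see what they do numerically. A hypothetical numpy version (not the code behind the demo), assuming a dense similarity matrix W as a stand-in for the real feature similarities:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 3

# dense pairwise similarity (stand-in for the real feature similarities)
W = rng.random((n, n))
np.fill_diagonal(W, 0.0)

# keep only each row's top-k entries (the sparsification step)
topk = np.zeros_like(W)
for i in range(n):
    nn = np.argsort(W[i])[-k:]          # indices of the k largest similarities
    topk[i, nn] = W[i, nn]

# symmetrize, mirroring the two INSERTs below: average where both directions
# exist, otherwise copy the single existing direction to both sides
both = (topk > 0) & (topk.T > 0)
sym = np.where(both, (topk + topk.T) / 2.0, np.maximum(topk, topk.T))
print(np.allclose(sym, sym.T))  # True
```

The two-branch symmetrization matches the duplicate-key behavior: an edge present in only one direction keeps its score, while an edge present in both directions gets the average.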

Both of these simple solutions are illustrated below. In this example we have one *similarity* table with 4 columns (id1, id2, neighbor, score), one table for the user-provided labels, *labels*, with two columns (id1, label), and a temporary table *matmult* with 3 columns (rowI, colI, symmetric) created at runtime. One important thing to note is that we create a unique index on the combination of the two id columns.

```sql
# compose temporary matrix for N x C
CREATE TEMPORARY TABLE matmult (
    rowI MEDIUMINT NOT NULL,
    colI MEDIUMINT NOT NULL,
    symmetric FLOAT NOT NULL,
    UNIQUE INDEX (rowI, colI)
);

# populate the temporary matrix (second stroke makes it symmetric)
INSERT INTO matmult
    SELECT s1.id1 AS rowI, s1.id2 AS colI, s1.score AS symmetric
    FROM similarity AS s1 WHERE s1.neighbor < $MAX_DEPTH$;
INSERT INTO matmult
    SELECT s1.id2 AS rowI, s1.id1 AS colI, s1.score AS symmetric
    FROM similarity AS s1 WHERE s1.neighbor < $MAX_DEPTH$
    ON DUPLICATE KEY UPDATE symmetric = (s1.score + symmetric)/2;
```

The rest of the operation can be performed normally because the matrix *matmult* is now symmetric and has valid column/row entries for every available image id. Another optimization, purely for speed, would be to limit the rows to those whose *id1* or *id2* values fall within your label set.
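That restriction amounts to one extra predicate when reading the temporary table. A hedged sqlite-flavored sketch, with a made-up *labels* table containing a single labeled sample (table contents are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE matmult (rowI INTEGER, colI INTEGER, symmetric REAL)")
cur.execute("CREATE TABLE labels (id1 INTEGER, label INTEGER)")
cur.executemany("INSERT INTO matmult VALUES (?,?,?)",
                [(1, 2, 0.9), (2, 1, 0.9), (3, 4, 0.5), (4, 3, 0.5)])
cur.executemany("INSERT INTO labels VALUES (?,?)", [(1, 1)])  # one positive label

# keep only rows that touch a labeled sample on either side
rows = cur.execute("""
    SELECT rowI, colI, symmetric FROM matmult
    WHERE rowI IN (SELECT id1 FROM labels)
       OR colI IN (SELECT id1 FROM labels)""").fetchall()
print(rows)  # only the (1,2)/(2,1) pair survives
```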
