Hi, me again.

I'll add this one and then shut up for a while.

Here's a problem I came across at work (it's for a medicinal chemistry-related thing). Our informaticians are working on it, but I'd like to hear some more opinions.

Say you have a matrix whose elements are either 'blank' or 'experimental data', where the latter is a positive floating point number. There is at least one number in each row and column, i.e. there are no completely blank rows or columns.

The only operations you're allowed are swapping rows and swapping columns.

What you want to achieve is an arrangement of the data such that the smaller numbers end up close to the top-left corner of the matrix (I'll call it the 'origin'), and the bigger numbers end up far from it.

Of course, depending on the values, you might get the odd big number close to the origin and the odd small number far from it, but the aim is to minimize that. In other words, the data should be distributed in 2D around the origin in order of increasing value.

My first attempt at solving it was based on a 'centrifugal' concept: I multiplied each data cell's value by its distance from the origin, and tried to find the pair of row-permutation and column-permutation matrices that, applied to my matrix, maximized the sum of those products.
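For concreteness, here's a minimal sketch of that objective in Python, on a hypothetical toy matrix where `None` marks a blank cell. The names, the toy data, and the choice of Euclidean distance are my own assumptions; note that blanks are simply skipped here rather than counted as zero:

```python
import math

# Hypothetical toy matrix: None marks a blank cell.
M = [
    [5.0, None, 1.0],
    [None, 2.0, 8.0],
    [3.0, 7.0, None],
]

def objective(matrix, row_order, col_order):
    """Sum of value * distance-to-origin over the permuted matrix.

    row_order / col_order are permutations of the row and column indices.
    Blank cells (None) are skipped entirely.  Distance is the Euclidean
    distance of the cell's new (row, col) position from (0, 0); that is
    just one reasonable choice.
    """
    total = 0.0
    for i, r in enumerate(row_order):
        for j, c in enumerate(col_order):
            v = matrix[r][c]
            if v is None:
                continue
            total += v * math.hypot(i, j)
    return total

# Compare the identity permutation with one candidate row swap:
print(objective(M, [0, 1, 2], [0, 1, 2]))
print(objective(M, [2, 1, 0], [0, 1, 2]))  # rows 0 and 2 swapped
```

Maximizing this by brute force over all row and column permutations is factorial in the matrix dimensions, which is presumably the cost objection.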

But they told me the computational cost of this was too high, and that it didn't take the blank cells into account: they counted as zero, whereas they should not enter the computation at all.

Now we're evaluating other methods based on weighted row-wise and column-wise averaging, but I'm not sure we're anywhere near a meaningful solution.
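In case it helps the discussion, here's one crude member of that averaging family (my own sketch, not necessarily what our informaticians are doing): sort the rows by the mean of their non-blank entries, then the columns the same way. With plain (unweighted-by-position) means, a single pass of each suffices, since the means don't depend on the current ordering:

```python
def reorder(matrix):
    """Sort rows, then columns, by the mean of their non-blank entries
    (ascending), so small values tend toward the top-left corner.
    Assumes every row and column has at least one non-blank entry,
    as the problem statement guarantees."""
    def mean(vals):
        vals = [v for v in vals if v is not None]
        return sum(vals) / len(vals)

    rows = sorted(range(len(matrix)), key=lambda r: mean(matrix[r]))
    cols = sorted(range(len(matrix[0])),
                  key=lambda c: mean([row[c] for row in matrix]))
    return [[matrix[r][c] for c in cols] for r in rows]

# Toy example (None marks a blank cell):
M = [
    [9.0, None, 4.0],
    [None, 1.0, 2.0],
    [8.0, 7.0, None],
]
print(reorder(M))
```

It's cheap (one sort per axis instead of a search over permutations) and it handles blanks, but it only looks at per-row and per-column aggregates, so it can still leave the odd outlier in the wrong corner.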

What do you people think? Does it make any sense at all?