One of the weakest sides of the la4j library is its documentation. I would really appreciate any help in this direction. It's not that difficult to fill the gaps in the current documentation; it just requires some time and the motivation to contribute to an OSS project. So, if you folks have always been dreaming of contributing to OSS, drop me a message at @vkostyukov. We will work it out.
Monday, January 27, 2014
Sunday, January 26, 2014
Vector distance and norm in la4j
The la4j API allows you to calculate a vector norm easily with the appropriate vector accumulator. There are three norm accumulators available: Euclidean, Manhattan and Infinity. You can pass them as fold arguments like this:
Vector a = new BasicVector(new double[]{ 1.0, 2.0, 3.0 });
double norm0 = a.fold(Vectors.mkEuclideanNormAccumulator()); // Euclidean norm
double norm1 = a.fold(Vectors.mkManhattanNormAccumulator()); // Manhattan norm
double normMax = a.fold(Vectors.mkInfinityNormAccumulator()); // Infinity norm
Once the norm is available, the distance between two vectors can be calculated as:
Vector a = new BasicVector(new double[]{ 1.0, 2.0, 3.0 });
Vector b = new BasicVector(new double[]{ 4.0, 5.0, 6.0 });
// the distance between vectors 'a' and 'b' in terms of Manhattan space
double distance = a.subtract(b).fold(Vectors.mkManhattanNormAccumulator());
You can also use a norm to normalize the vector's components:
Vector a = new BasicVector(new double[]{ 1.0, 2.0, 3.0 });
double norm = a.fold(Vectors.mkEuclideanNormAccumulator());
// normalize 'a' in terms of Euclidean space
Vector b = a.divide(norm);
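As a quick sanity check, the normalized vector should have a Euclidean norm of (approximately) 1.0. Here is a minimal sketch reusing the accumulators above; the small epsilon is just an illustrative tolerance:
// the norm of 'b' should be 1.0 up to floating-point rounding
double normOfB = b.fold(Vectors.mkEuclideanNormAccumulator());
if (Math.abs(normOfB - 1.0) > 1e-9) {
    throw new IllegalStateException("normalization failed: " + normOfB);
}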
Saturday, January 25, 2014
New release: v0.4.9!
The new version of la4j (Linear Algebra for Java) is available: 0.4.9! This awesome release was made through the effort of active contributors: Michael, Phil, Anveshi, Clement, Miron and Todd. Together, we pushed 95 new commits and made significant progress with big sparse matrices. Here is the full list of changes:
- Bug fix in the align() method for big sparse matrices (reported by Michael Kapper)
- Bug fix in the growup() method for big sparse matrices (contributed by Phil Messenger)
- Bug fix in MatrixMarketStream
- New matrix method select() (contributed by Anveshi Charuvaka; see the sketch below)
- New vector method select()
- Bug fix in the growup() method for the case with positive overflow (contributed by Clement Skau)
- New matrix predicate Matrices.SQUARE_MATRIX (contributed by Miron Aseev)
- New matrix predicate Matrices.INVERTIBLE_MATRIX (contributed by Miron Aseev)
- New vector method norm(NormFunction) that implements p-norm support (contributed by Miron Aseev)
- New matrix predicate PositiveDefiniteMatrix (contributed by Miron Aseev)
- Bug fix in the each, eachInRow and eachInColumn methods of sparse matrices (reported by Leonid Ilyevsky)
- New matrix methods: foldColumns and foldRows (contributed by Todd Brunhoff)
- New matrix methods: assignRow and assignColumn
- New matrix methods: updateRow and updateColumn
- New matrix methods: transformRow and transformColumn
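To give a feel for the new select() method, here is a minimal sketch that extracts a submatrix. The select(int[] rows, int[] columns) signature, the Basic2DMatrix double[][] constructor and the is(Matrices.SQUARE_MATRIX) check are assumptions based on the 0.4.x API, so please double-check the Javadoc for your version:
// an illustrative 3x3 matrix (assumed Basic2DMatrix double[][] constructor)
Matrix m = new Basic2DMatrix(new double[][]{
    { 1.0, 2.0, 3.0 },
    { 4.0, 5.0, 6.0 },
    { 7.0, 8.0, 9.0 }
});
// pick rows 0 and 2 and columns 0 and 2 (assumed select(int[], int[]) signature)
Matrix sub = m.select(new int[]{ 0, 2 }, new int[]{ 0, 2 }); // [[1.0, 3.0], [7.0, 9.0]]
// the new predicate can then be used to check structural properties
boolean square = sub.is(Matrices.SQUARE_MATRIX); // true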
Don't forget to update your Maven artifacts, and feel free to reach out with feedback or questions at @vkostyukov!
Tuesday, October 1, 2013
The la4j-0.4.5 has been released!
I am happy to announce that la4j-0.4.5 has just been released!
I am really proud of this release. We did a fantastic job together: 7 developers pushed 292 commits to the master branch, and much more:
- 35 issues have been closed
- 30 pull requests have been merged
- +262 new tests: 581 in total
- +6k LOC: 22k in total
All the major functional and performance issues have been fixed. The la4j has made great progress compared to the previous versions. We did our best and hope you will enjoy the new version!
References
http://la4j.org/ - project web page
https://github.com/vkostyukov/la4j - GitHub page
https://github.com/vkostyukov/la4j/releases/tag/v0.4.5 - GitHub's release page
Tuesday, September 10, 2013
Meet the new la4j contributors
The la4j is now almost ready for the 0.4.5 release. All our current activities revolve around testing: we want to make the new version as stable as possible. For this we have already added 150 new tests and plan to add 50-100 more. Thus the new release will contain ~500-550 tests in total, which is great progress compared to the previous release.
Traditionally, I would like to thank two guys: Yuriy Drozd (Kiev, Ukraine) and Maxim Samoylov (Saint-Petersburg, Russia) for being such active and valuable contributors to la4j. These guys did a wonderful job bringing the la4j to its current state. Yuriy fixed a critical issue in the 'swap' operation for sparse vectors and added new decomposition support to la4j.
Maxim joined the project two months ago and started working on the stability of the existing code. He fixed a very important issue in the Eigen decompositor and added new iterator methods (iterating through non-zero values, and iterating through the values in a row/column) to the matrix class. Maxim has also spent tons of time evaluating double-rounding issues in la4j (in the "rank" and "determinant" methods). So, Maxim is now the most active contributor, for sure. He made 56 commits (+2000 LOC) to the repository and took second place (right after me) in the contributors list (see the picture below).
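For context, non-zero iteration over a sparse matrix looks roughly like the sketch below. The eachNonZero(MatrixProcedure) name and the CRSMatrix constructor are assumptions based on later la4j versions, so treat this purely as an illustration:
Matrix matrix = new CRSMatrix(1000, 1000); // a big sparse matrix (assumed constructor)
matrix.set(3, 7, 42.0);
// visit only the stored (non-zero) entries (assumed method name: eachNonZero)
matrix.eachNonZero(new MatrixProcedure() {
    @Override
    public void apply(int i, int j, double value) {
        System.out.println("(" + i + ", " + j + ") = " + value);
    }
});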
Thank you again, guys! I really appreciate your help!
Thursday, August 1, 2013
SODD - Stack-Overflow Driven Development
I use this methodology in la4j. The idea is quite simple:
1. Go to SO and find questions related to your project (tags "matrix" and "java" for la4j).
2. Try to understand what exactly users want to do, and how they want to do it.
3. Implement new functionality according to the users' questions.
Here is a good example of a question that led me to add new functionality: accumulator functions.
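For reference, this is roughly what using an accumulator function looks like. The Vectors.asSumAccumulator(0.0) factory name is an assumption (factory names may differ between versions), so take it as a sketch rather than the exact API:
Vector a = new BasicVector(new double[]{ 1.0, 2.0, 3.0 });
// fold the vector with a sum accumulator (assumed factory name)
double sum = a.fold(Vectors.asSumAccumulator(0.0)); // 6.0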
Wednesday, July 24, 2013
A Parallel Linear Algebra Library
When I started the la4j project, I didn't think it would become one of the most popular packages for linear algebra in Java. In two years the library has changed a lot, and it continues to change from day to day. I'm so glad to see people fixing bugs, proposing pull requests and sending feedback. This means I need to make this library more and more awesome. And I have a plan to bring la4j into the parallel world.
This plan consists of two major steps. First, I need to design a high-level idea of encapsulating the engine of the library, which knows how to deal with matrices and vectors in a more efficient way. The engine should use all the advantages of a concrete matrix type and avoid its disadvantages. The simplest example: do not iterate through zero elements in CRS/CCS matrices while computing something that doesn't require handling zero values. I hope it can be done in release 0.5.0 this winter. And I believe that this version will show just incredible performance compared to 0.4.5, because of all these optimizations with sparse matrices. But it will still be a single-threaded version of la4j's engine. Second, I'm planning to develop an additional implementation of the engine that will handle all tasks in parallel - a parallel engine. And I'm really excited about the new features of Java's Fork-Join Framework. It is just a perfect base/tool for solving this kind of task. So, a parallel engine will be available in version 0.6.0 next summer.
The F-J Framework is an awesome and well-tuned concurrent framework for Java, and it allows the developer to use data parallelism in his code. This is just perfect for matrix operations.
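To illustrate the kind of data parallelism Fork-Join enables, here is a plain-JDK sketch (not la4j's engine, just an assumption of how such a task could be split) that sums the elements of a raw double[][] by recursively dividing the row range:
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sums all elements of a dense matrix by recursively splitting the row range.
class MatrixSumTask extends RecursiveTask<Double> {
    private static final int THRESHOLD = 64; // row ranges this small are summed sequentially
    private final double[][] data;
    private final int fromRow, toRow;

    MatrixSumTask(double[][] data, int fromRow, int toRow) {
        this.data = data;
        this.fromRow = fromRow;
        this.toRow = toRow;
    }

    @Override
    protected Double compute() {
        if (toRow - fromRow <= THRESHOLD) {
            double sum = 0.0;
            for (int i = fromRow; i < toRow; i++) {
                for (double value : data[i]) {
                    sum += value;
                }
            }
            return sum;
        }
        int mid = (fromRow + toRow) / 2;
        MatrixSumTask left = new MatrixSumTask(data, fromRow, mid);
        MatrixSumTask right = new MatrixSumTask(data, mid, toRow);
        left.fork();                       // run the left half asynchronously
        double rightSum = right.compute(); // compute the right half in this thread
        return left.join() + rightSum;
    }
}

// usage: double total = new ForkJoinPool().invoke(new MatrixSumTask(matrix, 0, matrix.length));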
PS: I'm planning to keep la4j's API unchanged. This is my goal.
If someone wants to participate in discussing the engine's design, go there.