Thursday, 23 July 2015

Efficiency with Algorithms

I recently watched the excellent talk Efficiency with Algorithms, Performance with Data Structures by Chandler Carruth; it has some really good content on performance.

In this blog I will share some ideas from the talk and a few things that I have learned.

Pre processing
This is a very common trick: it trades the time taken to pre-compute something up front against the processing required when the actual request comes. Some good examples are:

IndexOf
This is a very common string operation; almost every application needs it. Java uses a brute-force algorithm for it, which can become a bottleneck very quickly, so it is better to use the Knuth–Morris–Pratt algorithm, which pre-computes a table to get better performance.
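For illustration, here is a minimal KMP sketch (the class and method names are mine, not a JDK API):

public final class Kmp {

    // Pre-compute the prefix table: lps[i] is the length of the longest
    // proper prefix of pattern[0..i] that is also a suffix of it.
    static int[] prefixTable(String pattern) {
        int[] lps = new int[pattern.length()];
        int len = 0;
        for (int i = 1; i < pattern.length(); i++) {
            while (len > 0 && pattern.charAt(i) != pattern.charAt(len)) {
                len = lps[len - 1];
            }
            if (pattern.charAt(i) == pattern.charAt(len)) {
                len++;
            }
            lps[i] = len;
        }
        return lps;
    }

    // Returns the index of the first occurrence of pattern in text, or -1.
    static int indexOf(String text, String pattern) {
        if (pattern.isEmpty()) return 0;
        int[] lps = prefixTable(pattern);
        int j = 0; // characters of pattern matched so far
        for (int i = 0; i < text.length(); i++) {
            while (j > 0 && text.charAt(i) != pattern.charAt(j)) {
                j = lps[j - 1]; // fall back using the pre-computed table
            }
            if (text.charAt(i) == pattern.charAt(j)) {
                j++;
            }
            if (j == pattern.length()) {
                return i - j + 1;
            }
        }
        return -1;
    }
}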

Search
Sequential search vs binary search is a trade-off between search time and pre-processing (i.e. sorting) time; the up-front O(n log n) sort starts to pay off once you perform enough searches, since each one then costs O(log n) comparisons instead of O(n).
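A tiny sketch of the trade-off, using the JDK's built-in sort and binary search:

import java.util.Arrays;

public class SearchTradeOff {
    public static void main(String[] args) {
        int[] ids = {42, 7, 19, 3, 88, 61};
        Arrays.sort(ids);                        // one-time O(n log n) pre-processing
        // every subsequent lookup is O(log n) instead of O(n)
        System.out.println(Arrays.binarySearch(ids, 19) >= 0); // true
    }
}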

Binary search is at the heart of many algorithms because it turns an expensive linear search into a logarithmic one.

Many string permutation search problems are solved using it.

HashMap in Java has a nice optimization for when many keys with the same hash code are added: to ameliorate the impact, when keys are Comparable the class may use the comparison order among keys to help break ties (since JDK 8 the collision bucket is converted into a balanced tree).

Indexes
Lookup is an expensive operation and binary search can't answer all types of queries, so you should build a specialized index for fast search. Some of the options are key/value, prefix/suffix, bitmap index, inverted index etc.
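As a toy example, here is a tiny inverted index that maps each word to the documents containing it (a sketch; the class name is mine):

import java.util.*;

public class InvertedIndex {
    private final Map<String, Set<Integer>> index = new HashMap<>();

    public void add(int docId, String text) {
        for (String word : text.toLowerCase().split("\\s+")) {
            index.computeIfAbsent(word, k -> new HashSet<>()).add(docId);
        }
    }

    // answers "which documents contain this word" without scanning documents
    public Set<Integer> search(String word) {
        return index.getOrDefault(word.toLowerCase(), Collections.emptySet());
    }
}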

Pre Allocate
This is a classic one: if you know how big your data structure will be, it is better to pre-allocate it rather than keep expanding it multiple times and incur the cost of repeated allocation & copy.

Array-based data structures double their capacity every time re-allocation is required; this is done to amortize the allocation cost with big but infrequent allocations. In the Java world ArrayList, HashMap etc. are good examples of that.
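A minimal sketch (the sizes here are just examples):

import java.util.*;

public class PreAllocate {
    public static void main(String[] args) {
        int expected = 1_000_000;
        // sized up front: avoids repeated grow-and-copy cycles
        List<Integer> values = new ArrayList<>(expected);
        // for HashMap, account for the default 0.75 load factor
        Map<Integer, String> byId = new HashMap<>((int) (expected / 0.75f) + 1);
    }
}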

Many design patterns like Object Pooling/Flyweight etc. are based on this.

Reduce Expensive/Duplicate operation
Each algorithm has some expensive operations, and to make it efficient those expensive operations should be reduced. Some common examples are:

HashCode
Hash code computation is an expensive operation, so it should not be computed multiple times for a given key. One way is to compute it once and pass it to the other functions so recomputation is not required; another is to cache it, e.g. the String class in Java caches its hash code after the first computation to save time.
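A sketch of the lazy caching idiom that String uses internally (class and field names are mine; equals is omitted for brevity):

public final class CachedKey {
    private final String value;
    private int hash; // 0 means "not computed yet", the same convention String uses

    public CachedKey(String value) { this.value = value; }

    @Override
    public int hashCode() {
        int h = hash;
        if (h == 0) {          // compute once, reuse on every later call
            h = value.hashCode();
            hash = h;
        }
        return h;
    }
}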

Modulus/Divide
These are among the more expensive arithmetic operations; a bitwise operation can often do the same thing at much lower cost.
If a data structure is circular, e.g. a circular array, then a modulus operation is required to find the slot for a value, and if the capacity of the array is a power of 2 then the bitwise operation index & (capacity - 1) can be used to get the modulus. This gives a significant performance gain at the cost of some extra memory (capacity rounded up to a power of 2). This technique is used by HashMap in Java.
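A quick sketch showing that the two forms give the same slot when capacity is a power of 2:

public class PowerOfTwoMask {
    public static void main(String[] args) {
        int capacity = 16;                         // must be a power of 2
        for (int index = 13; index <= 18; index++) {
            int slowSlot = index % capacity;       // division-based
            int fastSlot = index & (capacity - 1); // single AND instruction
            System.out.println(slowSlot + " == " + fastSlot);
        }
    }
}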

Similarly, a right-shift (>>) operation can be used instead of dividing by a power of 2 to get better performance, but nowadays compilers are smart enough to do this strength reduction for you, so there is no need to write it by hand.

Reduce copy overhead
Array-based data structures amortize the re-sizing cost by doubling capacity, which is good, but the overhead of copying comes along with it. Another approach is a chain of arrays: you still allocate a big new chunk when you run out of space, but you don't copy the old values; the new block of memory is just added to the chain of allocated blocks, as sketched below.
gs-collections has CompositeFastList, which is built using this approach.
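A toy version of the chain-of-arrays idea (my sketch, not the gs-collections implementation):

import java.util.ArrayList;
import java.util.List;

public class ChunkedList {
    private static final int BLOCK_SIZE = 1024;
    private final List<int[]> blocks = new ArrayList<>();
    private int size;

    public void add(int value) {
        if (size % BLOCK_SIZE == 0) {
            blocks.add(new int[BLOCK_SIZE]); // new block, no copy of old values
        }
        blocks.get(size / BLOCK_SIZE)[size % BLOCK_SIZE] = value;
        size++;
    }

    public int get(int index) {
        return blocks.get(index / BLOCK_SIZE)[index % BLOCK_SIZE];
    }
}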

Batching
Expensive operations like writes to a file/network/database etc. should be batched to get the best performance.
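A minimal file example; BufferedWriter batches many small writes into larger flushes:

import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class BatchedWrite {
    public static void main(String[] args) throws IOException {
        try (BufferedWriter out = Files.newBufferedWriter(Paths.get("values.txt"))) {
            for (int i = 0; i < 1_000_000; i++) {
                out.write(Integer.toString(i)); // buffered, not one I/O per call
                out.newLine();
            }
        } // close flushes the final batch
    }
}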

Co-locate data
Keeping all the required data together gives very good performance because of how processors and their caches work, but this is one thing that is broken in many applications.
Mike Acton talks about this in detail in his Data-Oriented Design and C++ talk.

Column-based storage is a very good fit for analytic/aggregation use cases, because most of the time the data of a single column across all rows is what is required to answer an analytic/aggregation request.
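A small sketch of a column layout: one primitive array per column keeps an aggregation scan cache-friendly (names are mine):

public class ColumnStore {
    private final double[] prices;   // one contiguous array per column
    private final long[] quantities;

    public ColumnStore(int rows) {
        prices = new double[rows];
        quantities = new long[rows];
    }

    // the aggregation walks packed arrays sequentially -> cache friendly
    public double totalNotional() {
        double total = 0;
        for (int i = 0; i < prices.length; i++) {
            total += prices[i] * quantities[i];
        }
        return total;
    }
}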

Linear-probing hash tables are also a good example of data co-location.

Bit packing is another option to keep the required data very close, but it should be used with extra caution because it can make code complex.

Unnecessary Work
Some algorithms suffer from this problem; the most common examples are naive recursive algorithms like Fibonacci, which recompute the same sub-problems again and again. This duplicate work can be eliminated with the memoization technique.
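A minimal memoized Fibonacci sketch:

import java.util.HashMap;
import java.util.Map;

public class Fib {
    private static final Map<Integer, Long> cache = new HashMap<>();

    static long fib(int n) {
        if (n <= 1) return n;
        Long cached = cache.get(n);
        if (cached != null) return cached; // duplicate work avoided
        long result = fib(n - 1) + fib(n - 2);
        cache.put(n, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // instant; the naive version takes minutes
    }
}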


Simplicity & readability of code should not be lost during the efficiency ride, because the next guy has to maintain it :-)

Thursday, 16 July 2015

BitMap sorting

Data structures are the most important building blocks of any algorithm; the right one makes an algorithm efficient.

A bitmap is a very powerful data structure and a good fit for many problems; it is used for indexing in many databases.

A bitmap can also be used for sorting, and for some specific cases bitmap sorting can outperform many famous sorting algorithms with a very low memory footprint.

Take a case where you have to sort an input file containing lots of numbers between 1 and 10 million using a low memory footprint; for simplicity's sake assume there are no duplicates.

What are the ways to solve this problem ?

- Classic MergeSort
This is the first solution that comes to mind whenever sorting is discussed: it produces many sorted intermediate files and then merges them.

- Multi Read & Sort
In this approach the input file is read multiple times, once per range of data; e.g. the first pass reads values from 0 to 250K, the next from 250K to 500K and so on.

This solution takes less memory, but all that extra I/O makes it very slow.

- Amazing BitSort
Bit sorting is a really good fit for this problem: allocate a bit array and turn on the bit at each value's index, and that's it; the values get sorted just by being added to this data structure.

For e.g. the values {1,2,5,6,7,11} can be represented as below, with the bit at each value's index turned on:

index: 0 1 2 3 4 5 6 7 8 9 10 11
bit:   0 1 1 0 0 1 1 1 0 0 0  1

Implementing this type of data structure is simple in C/C++, but in Java bitwise manipulation is required, so to keep it simple I have implemented it using a boolean array.

Code snippet 
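A minimal boolean-array version matching the description above (my reconstruction; the full code is on GitHub):

public class BitSort {
    private final boolean[] bits;

    public BitSort(int maxValue) {
        bits = new boolean[maxValue + 1];
    }

    public void add(int value) {
        bits[value] = true; // "sorting" is just setting the bit at the value's index
    }
}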


  
It is very simple to implement this powerful data structure, and only a little trick is required for iterating over the values.

Code snippet for iteration
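The trick is just to scan the array from low index to high; the set indices come out in ascending, i.e. sorted, order:

// add to the BitSort class above
public void forEach(java.util.function.IntConsumer consumer) {
    for (int value = 0; value < bits.length; value++) {
        if (bits[value]) {
            consumer.accept(value); // visited in ascending order = sorted output
        }
    }
}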


Conclusion
A very useful data structure with a low memory footprint & very fast sorting, but it has some trade-offs and you should be aware of them:

- If the number of items is small then this might not give any benefit.
- If the input values are sparse then a lot of space is left unused because of fragmentation.

Code is available @ github

Tuesday, 7 July 2015

Joy Inc is really Joyful

I heard the closing keynote talk by Richard Sheridan at the Agile Singapore conference, where he shared how he built a joyful company.

Richard has published the book Joy, Inc.: How We Built a Workplace People Love to share his experience of building an amazing organization.

I got a copy of the book a few days back and started reading it; it is very difficult to put down once you pick it up.
I have read a few chapters in the last 2 days and want to share a few things that I liked.

The book starts with "Let's try something new, let's run an experiment"; this is a very good approach to plant the seed for change.

Tangible Joy
The book defines tangible joy as "delivering a product or service to the world that's so enjoyed, in fact, that people stop you on the street and say, 'Really, you did that? I love it!'"

Reading this made me realize how crappy the products I have built are :-(

Stupid Users
There is a section on "stupid users"; the context is that the software industry thinks users are stupid and need a Dummies book to help them successfully use our beautifully designed technology, because they are dumb and don't understand our masterpiece.

Joy will lift you and your team
He gives the reason why the Wright brothers were able to fly and Samuel Pierpont Langley was not:

Langley was trying to build an airplane. The Wright brothers wanted to fly.

This is so applicable to our industry: we are so much after bleeding-edge technology that we forget the real problem the technology is supposed to solve.

Lots of experiments
This book is full of the experiments he ran to build a joyful company.

Pair Programming
Everybody knows the benefits of pairing, but many organizations are still reluctant to try it.

Wide Open Space
A wide open space with no offices or cubes is very good for collaborative work, but most organizations have a cube culture, and with every promotion the individual moves further away from the team and gets locked in a private office.

Rewards System
With a culture change the rewards system should also change, but most organizations never change their rewards system.
I think this is one of the main topics of the book; a few of the things mentioned in the book are:

Now you have a fair idea of what a rewards system should look like.

Risk of staying the same
This was eye-opening: "The risk of staying the same had been far greater than the risk of change"

Big-Dog Discussion
Big dogs in an organization are the special people, and just looking at them a voice echoes in your ear:
"You don't matter as much as I do"
At Menlo, big decisions are not made in closed-door conference rooms, so people never feel that they are unimportant.

Tower of Knowledge
The book has a very nice section about "Mr. Dave", the person on the team who has vast technical knowledge that no one else has.
This is such a common problem; most organizations fall into this trap, and it is very common in banks.
Dave becomes the bottleneck of the entire organization or department, and he can't go on vacation!
Pair programming is the recommended solution, for Dave and for the organization.


The book has lots of practical advice; I highly recommend it.

Some talks/podcast by Richard
A CEO’S JOURNEY TO CREATING A CULTURE WHERE PEOPLE CAN THRIVE
Joy, Inc. | Rich Sheridan

Thursday, 18 June 2015

Integer Encoding Magic

I was asked a question about Serializable/Externalizable and why Externalizable is fast; my answer was not convincing, so I spent some time on Google to get more details, and the web has tons of information on it.

I thought of spending some time on open-source serialization frameworks to check what types of optimization are done to reduce size.

One of the techniques for integer types is variable-length encoding; this technique is based on a very simple rule.

An integer is 4 bytes, but if the value is small then we don't have to use all 4 bytes to store it; for small input values this technique can reduce size to a great extent.

Some bitwise magic is required to achieve this.
I did a quick comparison of raw vs encoded writes and the results were very good. In this test I write 1 million integer values starting from 0.

Without encoding it took 4019 KB for 1 million values, and with byte encoding it took 2991 KB; encoding gives around a 25% gain for this sample test.

Integer Encoding
Code snippet for integer encoding, it is taken from Apache Avro
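The idea is zig-zag encoding followed by a variable-length write of 7 bits per byte; this is my reconstruction of the technique rather than the verbatim Avro source:

// returns the new write position
static int encodeInt(int n, byte[] buf, int pos) {
    n = (n << 1) ^ (n >> 31);        // zig-zag: move the sign to the low bit
    while ((n & ~0x7F) != 0) {       // more than 7 significant bits left?
        buf[pos++] = (byte) ((n & 0x7F) | 0x80); // 7 bits + continuation flag
        n >>>= 7;
    }
    buf[pos++] = (byte) n;           // last byte has the continuation flag clear
    return pos;
}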


Conclusion
The integer encoding technique gives a good size gain, but it adds overhead on the write/read side, so it should be used knowing this side effect. Such an approach is very good for serialization frameworks, because you want to reduce the size of data that is sent over the wire or written to some persistent store.

Code used for this blog is available @ github



Thursday, 7 May 2015

Experiment with String Split

Splitting a string on some token is a very common operation in applications. In this blog I will share the options that are mostly used and the types of overhead involved in them.

String.split
This is the most common approach, and it looks harmless until you look at the code!
First it creates an ArrayList for storing the values, and then that ArrayList is converted to a String array.

This function produces too much garbage, and it provides only one way to extract the values.

String Tokenizer
This is much better than String.split because it does not need an intermediate buffer like an ArrayList/String array to hold the values, but it still creates a String object for each token, which adds to the garbage.

One of the good things about StringTokenizer is that it does not force the caller into a fixed data structure; the caller is free to decide what to do with each value, just like a streaming operation.
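For example, each token can be consumed as it is produced, with no intermediate array:

import java.util.StringTokenizer;

public class TokenizerDemo {
    public static void main(String[] args) {
        StringTokenizer tokens = new StringTokenizer("this,is,simple,test", ",");
        while (tokens.hasMoreTokens()) {
            System.out.println(tokens.nextToken()); // consume each value as produced
        }
    }
}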

String.split vs StringTokenizer
Let's look at some performance numbers. In this test I take the sample strings below:

String[] values = {
        "this,is,simple,test",
        "lets,see,how,it,works",
        "this,is,very,simple,test"};

Each line is split 10 million times.

[Chart: String.split vs StringTokenizer timings]
So StringTokenizer is definitely the winner in this case, and that is because it produces less garbage than String.split; but it still produces String objects.

It is possible to avoid creating those String objects as well, by using a recyclable CharSequence, which is just like a String but gives a lot more flexibility.

Let's look at another implementation using the recyclable CharSequence approach.
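The gist of it is a mutable CharSequence "window" that is reused for every token, so no per-token String is allocated (my sketch; names are illustrative and the post's actual code is on GitHub):

import java.util.function.Consumer;

public class RecycleSplitter {

    static final class Window implements CharSequence {
        private CharSequence source;
        private int start, end;

        void reset(CharSequence source, int start, int end) {
            this.source = source;
            this.start = start;
            this.end = end;
        }

        @Override public int length() { return end - start; }
        @Override public char charAt(int index) { return source.charAt(start + index); }
        @Override public CharSequence subSequence(int s, int e) {
            return source.subSequence(start + s, start + e);
        }
        @Override public String toString() { // only pay for a String if you ask for one
            return source.subSequence(start, end).toString();
        }
    }

    public static void split(CharSequence line, char sep, Consumer<CharSequence> consumer) {
        Window window = new Window(); // one object recycled across all tokens
        int start = 0;
        for (int i = 0; i <= line.length(); i++) {
            if (i == line.length() || line.charAt(i) == sep) {
                window.reset(line, start, i);
                consumer.accept(window);
                start = i + 1;
            }
        }
    }
}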


It is a very simple technique to avoid intermediate String objects. Let's look at the performance numbers for this approach.

[Chart: String.split vs StringTokenizer vs recyclable CharSequence timings]
The recyclable CharSequence shines in this test: it is around 3X faster than String.split and 2X faster than StringTokenizer.

Code available @ github

Wednesday, 14 January 2015

Java8 Parallel Streams


I recently found a great article on when to use parallel streams, by Doug Lea and team. In this blog I will share some performance results for different data structures.

Splittable data structures
This is one of the most important factors that needs to be considered. All parallel stream operations are executed on the default fork-join pool, and the fork-join pool uses a divide-and-conquer algorithm to split the stream into small chunks and apply the function to each chunk; so splitting is an important factor, and if splitting takes too much time the whole computation is going to choke!

Types of data structure

- Array
Array-based data structures are the most efficient from a splitting perspective, because each element can be randomly accessed, so splitting is a very quick operation.
Some examples of array-based collections are ArrayList & open-addressing-based Set/Map implementations.

- Tree
Balanced tree-based collections like TreeMap or ConcurrentHashMap are easy to chop into 2 parts, but if the tree is not balanced then splitting will not be that efficient, because the task load will not be equal for each thread.
Another thing to consider is that the memory access pattern of tree-based data structures is random, so there is a memory latency cost when elements are accessed.

- LinkedList
This type of data structure gives the worst splitting performance, because each element must be visited to split it. Some examples from the JDK are LinkedList and the queue implementations.

- Others
This covers all other types of data structure, e.g. I/O-based ones such as BufferedReader.lines, which returns a stream whose splitting operation is very expensive.

Some performance numbers
Every performance tuning guess must be backed by experiment, so let's have a look at some numbers.

Some code snippets (a condensed sketch follows the list):
Array Stream
ArrayList Stream
Set Stream
LinkedList Stream
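Here is my reconstruction of the four pipelines (the measured code is on GitHub): the same parallel sum is run over four different source collections, so the only real variable is how well each source splits.

import java.util.*;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelStreams {
    static final int N = 10_000_000;

    public static void main(String[] args) {
        int[] array = IntStream.range(0, N).toArray();
        List<Integer> arrayList = IntStream.range(0, N).boxed().collect(Collectors.toList());
        Set<Integer> set = new HashSet<>(arrayList);
        List<Integer> linkedList = new LinkedList<>(arrayList);

        // same parallel pipeline, different source spliterators
        long s1 = Arrays.stream(array).parallel().asLongStream().sum();
        long s2 = arrayList.parallelStream().mapToLong(Integer::longValue).sum();
        long s3 = set.parallelStream().mapToLong(Integer::longValue).sum();
        long s4 = linkedList.parallelStream().mapToLong(Integer::longValue).sum();
        System.out.println(s1 + " " + s2 + " " + s3 + " " + s4);
    }
}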


Measurement Code
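A simple wall-clock harness along these lines (a sketch; JMH would be the rigorous choice):

// good enough for coarse comparisons like the ones below
static long timeMillis(Runnable task) {
    long start = System.nanoTime();
    task.run();
    return (System.nanoTime() - start) / 1_000_000;
}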


Result

Array       31 sec
ArrayList   31 sec
Set         52 sec
LinkedList  106 sec

The result confirms the guess that array-based collections are fastest, so choose the right data structure before you start using parallel streams.

Code used in this blog is available @ github

Thursday, 25 December 2014

what is new in java8 ConcurrentHashMap

Java 8 has lots of new features that help in writing code that expresses its intent more clearly and concisely.
ConcurrentHashMap has lots of nice new features that reduce boilerplate code; most of them are parallel, so you don't have to worry about locks/threads etc. when using CHM.

Smart Cache
One of the most common usages of CHM is to build a cache, and a cache has a subtle concurrency requirement: the value for a key should be computed only once, and the computation should provide the "happens-before" guarantee to all readers.

Pre JDK 8 you had to do something like the following.
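Something along the lines of the classic FutureTask-based memoizer (my sketch, in the style of Java Concurrency in Practice):

import java.util.concurrent.*;
import java.util.function.Function;

public class Memoizer<K, V> {
    private final ConcurrentHashMap<K, FutureTask<V>> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader;

    public Memoizer(Function<K, V> loader) { this.loader = loader; }

    public V get(K key) throws ExecutionException, InterruptedException {
        FutureTask<V> task = cache.get(key);
        if (task == null) {
            FutureTask<V> newTask = new FutureTask<>(() -> loader.apply(key));
            task = cache.putIfAbsent(key, newTask); // only one task wins the race
            if (task == null) {
                task = newTask;
                task.run(); // the winner computes; losers block in get()
            }
        }
        return task.get();
    }
}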


JDK 8 has the special function computeIfAbsent for exactly this type of thing.
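Here expensiveCompute is a hypothetical placeholder for whatever loads the value:

import java.util.concurrent.ConcurrentHashMap;

ConcurrentHashMap<String, Integer> cache = new ConcurrentHashMap<>();
// the mapping function runs at most once per key, atomically
// (expensiveCompute is a placeholder for your own loader)
int value = cache.computeIfAbsent("some-key", k -> expensiveCompute(k));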



The difference is obvious: pre JDK 8 a FutureTask and some other special handling are required to make sure it works correctly, but in JDK 8 you just call computeIfAbsent with a lambda expression and you are done.
There is also a computeIfPresent function to remap/refresh a value in the cache.

Merge values
One of the most common problems with maps is when the values of the same key need to be merged; one example is a map of word & count.
Every time a word is added to the map, if it already exists then its count needs to be incremented.

Example code pre JDK8
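A typical retry loop, using putIfAbsent/replace to stay atomic:

ConcurrentHashMap<String, Integer> wordCount = new ConcurrentHashMap<>();
for (;;) {
    Integer current = wordCount.get(word);
    if (current == null) {
        if (wordCount.putIfAbsent(word, 1) == null) break; // first sighting
    } else if (wordCount.replace(word, current, current + 1)) {
        break; // compare-and-swap succeeded
    }
    // lost a race: loop and retry
}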

Example code in JDK8
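The same thing with merge, which does the whole read-modify-write atomically:

wordCount.merge(word, 1, Integer::sum);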

All the noise that was there in the JDK 7 code is gone, and the code expresses its intent so nicely.

Aggregate/Reduce values

This is also a very common use case, where a single value is derived from the values in the map, e.g. the total number of words.

sample code for pre JDK8
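Plain iteration over the values:

long total = 0;
for (int count : wordCount.values()) {
    total += count;
}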

sample code for JDK8
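The built-in reduction; 1_000 here is an arbitrary parallelism threshold:

long total = wordCount.reduceValuesToLong(1_000, Integer::longValue, 0L, Long::sum);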



The code is so much cleaner. An extra thing to note in the JDK 8 reduce function is the first parameter, which is the (estimated) number of elements needed for this operation to be executed in parallel.
So this function supports parallel reducing.

Iteration
All the JDK 8 collections have improved iteration methods, so application code never needs an explicit loop; it is all controlled by the underlying collection, and because of that the iteration can be done in parallel.

CHM has 2 types of forEach: a plain sequential one and a parallel overload that takes a parallelism threshold.
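For example (again, 1_000 is an arbitrary threshold):

// sequential
wordCount.forEach((word, count) -> System.out.println(word + " = " + count));
// goes parallel once the map size crosses the given threshold
wordCount.forEach(1_000, (word, count) -> System.out.println(word + " = " + count));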

Other interesting functions are the search family:
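search returns the first non-null result produced by the search function, scanning in parallel once the threshold is crossed:

String frequent = wordCount.search(1_000, (word, count) -> count > 100 ? word : null);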

One thing to note is that all the parallel functions are executed on the common fork-join pool; there are ways to provide your own pool to execute the tasks.