No F*cking Idea

Common answer to everything

java.util.concurrent Goodies

| Comments

The JVM as a virtual machine is great. I'm not in love with Java, as I think too many {} can kill anyone regardless of faith. It has been a really, really long time since I wrote anything for the blog, and today, as it is the summer bank holiday, I decided to finally sit down and write up a few interesting things.

java.util.concurrent

Even though I will be talking about java.util.concurrent, I will give the examples in Scala. I like Scala, and I think it is much easier to understand and read than Java. Simply fewer tokens. And fewer tokens mean more fun.

It is easy to forget how many nice things there are in the JVM; one of them is Truffle, but I will not be talking about it here. Another of the great things is java.util.concurrent, a package that gives us tools to work with concurrency.

In times of agents, such a set of tools can feel a bit outdated, but it can still teach us valuable lessons about concurrency, and it may well be useful in the present and the future.

Abstracts

As we all know, Java is full of design patterns, and one of the first things you will notice while looking at the docs are the abstract classes and interfaces. They give us an overview of what to expect in the package, just like a movie teaser but… boring :). The first thing we notice while looking at http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/package-summary.html is probably BlockingDeque and BlockingQueue. And this is our first example.

ArrayBlockingQueue

If you have ever worked with threads or any concurrent constructs, you know how useful channels/queues are. The first concrete class in the package is ArrayBlockingQueue[T], which lets us construct bounded queues. For those who don't know, a queue is a FIFO construct; FIFO means First In, First Out, so elements that go in first are picked up at the receiving end before the rest. It is like the queue for tickets before a big summer blockbuster release.

Let us try this ArrayBlockingQueue out:

import java.util.concurrent._
import scala.util.control.Breaks._

object Example {
  val queue = new ArrayBlockingQueue[Int](100)


  val producer1 = new Thread(new Runnable {
    def run() {
      (0 to 1000).foreach( n => {
        // spin until the bounded queue has room for this element
        while(!queue.offer(n)){}
      })
    }
  })

  val consumer1 = new Thread(new Runnable {
    def run() {
      breakable {
        while(true){
          val result = queue.take() // blocks until an element is available
          print(result.toString() ++ ",")
          if (result > 999){
            break
          }
        }
      }
    }
  })

  def main(args: Array[String]): Unit = {
    consumer1.start
    producer1.start
  }

}

What are we doing here? We are simply demonstrating a producer/consumer situation.

There are a few things to look at. First, the initialization:

val queue = new ArrayBlockingQueue[Int](100)

where we create our queue with a total capacity of 100. ArrayBlockingQueue is backed by an array, so a capacity is mandatory; if you want a queue without a capacity bound you would reach for LinkedBlockingQueue instead, but an unbounded queue is risky in terms of memory, and we want to omit unpredictable parts of code.

How to add stuff to the queue

  while(!queue.offer(n)){}

Why like this? Let's not forget it is a blocking queue, so once it reaches capacity it blocks. If the queue is full, the offer method returns false and the element is not added to the queue; that's why we have to retry. Of course this is not a perfect approach, as it will grind the CPU until it can add the element. Adding Thread.sleep(50), a 50 millisecond sleep, would soften that, but the blocking put method (and the timed variant of offer) avoids the spinning entirely.
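
As a sketch of those alternatives in plain Java (the same BlockingQueue API the Scala code calls into): put blocks until there is room, and the timed offer waits only a bounded amount of time.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class PutVsOffer {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2);

        queue.put(1); // blocks until there is space; no busy spinning
        queue.put(2);

        // timed offer: waits up to 50 ms for space, then gives up
        boolean added = queue.offer(3, 50, TimeUnit.MILLISECONDS);
        System.out.println("third element added: " + added); // false, the queue is full

        System.out.println(queue.take()); // 1, FIFO order
    }
}
```

With put in the producer, both the while(!queue.offer(n)){} loop and the Thread.sleep workaround disappear.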

Now let's look at the consumer. Here the job is simple: we use take, which blocks until it can get something from the queue. In most cases this is the behavior we want: a thread simply sitting there, waiting for something to appear in the queue.

There is also the option of using the add method to put stuff into the queue, but it throws an exception when the queue is full, and I'm not a big fan of handling exceptions in this type of scenario.

More info about the ArrayBlockingQueue API can be found here: http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ArrayBlockingQueue.html

ConcurrentHashMap[K,V]

ConcurrentHashMap lets many threads use a single dictionary/hash, which is great because it does all the synchronization work for us. Of course, frequent writes/updates from many threads will make it perform very slowly, but if we can use it, e.g., as a place to collect reduction results, that is a great simplification.

If we use it like this:

import java.util.concurrent._

case class Mapper(key: String, times: Int, hash: ConcurrentHashMap[String, Int]) extends Runnable {
  def run(){
    val sum = (1 to times).sum
    hash.put(key, sum + hash.get(key)) // read-modify-write: the get and put are not atomic together
  }
}

object Example {

  val resultHash = new ConcurrentHashMap[String, Int]()

  def main(args: Array[String]) : Unit = {

    val threads = Array(
      new Thread(Mapper("one", 1000, resultHash)),
      new Thread(Mapper("two", 1000, resultHash)),
      new Thread(Mapper("one", 1000, resultHash))
    )

    threads.map(_.start)
    threads.map(_.join)

    print("Key 'one' => " + resultHash.get("one").toString + "\n")
    print("Key 'two' => " + resultHash.get("two").toString + "\n")

  }

}

Of course it will work, but it will often cause trouble: this code is racy :D and it will often end up with the same result for both "one" and "two", even though the map itself is synchronized. Each individual get or put is thread-safe, but our get-then-put pair is not atomic, so two threads updating "one" can interleave and lose an update. We now know we can use this structure from any number of threads, but to make it correct it would be better to have a dedicated thread reducing the values, or simply a queue where we put partial results and a single thread that updates the hash. It can still be useful as-is if a single reducer updates the hash, or if many reducers update disjoint key spaces while other threads use the hash in read-only mode. The big issue is updating it, because what you really want here is a transaction, and it doesn't support transactions.
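
One lock-free way to make the per-key update atomic, sketched in plain Java: retry the read-modify-write with putIfAbsent/replace, which only succeed if no other thread changed the entry in the meantime (Java 8's merge does the same in one call).

```java
import java.util.concurrent.ConcurrentHashMap;

public class AtomicAdd {
    // Atomically add delta to the value stored under key, without locks.
    static void add(ConcurrentHashMap<String, Integer> map, String key, int delta) {
        while (true) {
            Integer old = map.get(key);
            if (old == null) {
                if (map.putIfAbsent(key, delta) == null) return; // we won the insert
            } else if (map.replace(key, old, old + delta)) {
                return; // nobody touched the entry between our get and replace
            }
            // another thread got in between: retry with the fresh value
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        Runnable job = () -> {
            for (int i = 1; i <= 1000; i++) add(map, "one", i);
        };
        Thread t1 = new Thread(job), t2 = new Thread(job);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(map.get("one")); // 2 * (1 + … + 1000) = 1001000
    }
}
```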

Atomic variables

Well, we all love the simplicity of a single variable, and in a concurrent environment it is easy to forget the goodies of the sequential world and use a raw variable to store the result of some execution.

Let us write some dodgy code:

import java.util.concurrent._

case class Counter() extends Runnable {

  def run() {
    (1 until 1000).foreach( n => Example.counter = Example.counter + n) // unsynchronized read-modify-write
  }

}

object Example {

  var counter: Int = 0

  def main(args: Array[String]) : Unit = {

    val t = new Thread(Counter())
    val t2 = new Thread(Counter())
    t.start
    t2.start
    t.join
    t2.join

    print(counter)
  }
}

The result should be 999000, but… you will get values like 907369 instead. This happens because both threads read and update the same variable concurrently, stomping on each other's writes. That's why we need atomic values :) Let's convert this into something less dodgy.

import java.util.concurrent.atomic._

case class Counter() extends Runnable {

  def run() {
    (1 until 1000).foreach( n => Example.counter.addAndGet(n)) // atomic read-modify-write
  }

}

object Example {

  val counter: AtomicInteger = new AtomicInteger()

  def main(args: Array[String]) : Unit = {

    val t = new Thread(Counter())
    val t2 = new Thread(Counter())
    t.start
    t2.start
    t.join
    t2.join

    print(counter.get())
  }
}

After switching to AtomicInteger and making the updates atomic, we always get the same result, and it is the correct answer. Reaching into Example.counter doesn't look great yet, but this is just an example.
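
For intuition, addAndGet is itself a tiny retry loop built on compare-and-set; here is a sketch of the same idea spelled out in plain Java (a hypothetical helper, just to show the mechanism):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasAdd {
    // What addAndGet does internally, written out with compareAndSet.
    static int add(AtomicInteger counter, int delta) {
        while (true) {
            int current = counter.get();
            int next = current + delta;
            if (counter.compareAndSet(current, next)) return next; // nobody raced us
            // lost the race: loop and re-read the fresh value
        }
    }

    public static void main(String[] args) {
        AtomicInteger c = new AtomicInteger();
        System.out.println(add(c, 5));  // 5
        System.out.println(add(c, 37)); // 42
    }
}
```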

A lot more…

There is a lot more to cover in this awesome package. Next time I will cover one more thing, CyclicBarrier, for better synchronization of threads, but for now this is it :). I hope this was a useful read. I don't have much time to play with Scala, so if something looks "too simple" :D yeah, I'm not a Scala expert.

Cheers!

Shortest Way to Work With Json in Haskell

| Comments

Data.Aeson is a great package for working with JSON data in Haskell, and you can make it work in even fewer lines of code.

If you use the DeriveGeneric extension from GHC and GHC.Generics in your module, you can parse stuff super easily ;O.

This is my example that explains how to use it.

json_example.hs
{-# LANGUAGE OverloadedStrings, DeriveGeneric #-}

import Data.Aeson
import GHC.Generics

data Profile = Profile {
  age       :: Int,
  isNice    :: Bool
} deriving (Show, Generic)

data User = User {
  id        :: Int,
  name      :: String,
  profile   :: Profile
} deriving (Show, Generic)

instance FromJSON Profile
instance ToJSON Profile

instance FromJSON User
instance ToJSON User

main = do
  let profile = Profile 29 True
  let user = User 1 "Kuba" profile
  putStrLn $ show user

  -- encode will give you back ByteString

  let json = encode user
  putStrLn $ show json

  -- decode will give you back Maybe

  let parsedUser = decode json :: Maybe User
  case parsedUser of
    Just newUser -> putStrLn $ show newUser
    Nothing -> putStrLn "Sorry mate this is not happening"

result
User {id = 1, name = "Kuba", profile = Profile {age = 29, isNice = True}}
"{\"id\":1,\"name\":\"Kuba\",\"profile\":{\"isNice\":true,\"age\":29}}"
User {id = 1, name = "Kuba", profile = Profile {age = 29, isNice = True}}

Here I made the shortest possible example to show off how you can work with Aeson. First of all, if you use Generics you don't have to write real implementations of ToJSON and FromJSON; GHC will do it for you!

The only things to remember are that encode gives you back a ByteString and decode gives you a Maybe a. And that's it.

You can always fall back to the normal way of writing the FromJSON and ToJSON instances by hand :)

Versioning of Your Code and API

| Comments

People who deal with evolving applications in "the wild" are often hit with quite a tricky question: how do you version your API and the code behind it? We had a similar issue where I work, and I will present my idea on the topic.

Motivation

Different clients request features; you upgrade your codebase and move along, for example streamlining the API. You need a way to version the code so that old clients have time to work on an upgrade, and new clients can use the new API without problems.

Restful API

This is the first part, and it's easy, so I will be quick. In my opinion, using subdomains/CNAMEs for versions, like

v1., v2., v3.yourdomain.com or 20012013.yourdomain.com, is the best way to handle API changes from the client side. As we discussed internally, using headers or any other mechanism can make clients go mental, because they are using all kinds of software and such a change may not be trivial for them. Yes, people actually have this type of problem with very old bash/perl systems.

Why subdomains are cool, in my opinion, will unroll in the next section.

But what about the code ?

The most important thing is how easy it will be for a developer to add things without breaking other things. Yes, this is the problem! Most people will think that rolling a solution with some sort of scoping or inheritance is OK.

eg.

class V2::People
class V1::People

No, it's not. This is actually sh*t. Why? Even with good test coverage you still have an issue, because each of these classes uses other classes, effectively sharing them, and changes in that shared code can affect other versions of the API. With V1 and V2 it is manageable, but with 5-9 versions it starts to get crazy.

How to solve it ?

Imagine a queue of versions in the form of deployed boxes.

My idea is very simple: you tag the codebase for each version and deploy the new version on a new box, pointing a new subdomain at the new app.

What do you gain ?

  • You are sure you can deploy your app from zero into production in an isolated environment BIG WIN
  • You know you can upgrade the OS/packages in your production app with each release BIG WIN
  • You know how many people are still on each version
  • Your developers can develop the new version just like any other app, without caring about compatibility issues

We live in the age of cloud deployments, so spinning up a new instance is not that expensive!

There is only one point you have to be very aware of, and that is data persistence. If you are using SQL, you can only ADD columns/tables, never remove anything. But in most cases that is exactly what you would do anyway.

Version deploys on virtual machines, example: Image of VM boxes in the web

OK, that was fast, so let's recap.

The process! How would you implement this in real life? Simple!

Say you have an app and you deploy it using e.g. puppet + capistrano. When you deploy the first version, you make a tag in git named e.g. version-1 or 20032014-deploy, deploy it to a box, and assign a CNAME.

Next you start working on the new version, and when it is ready you tag it 21032014-deploy and deploy it to a new box. This must include a "build the whole box" script in puppet, or in Docker if that is what you use. This way, if you e.g. added redis to the stack, you have to make sure your production deployment scripts are ready for it. It forces you to keep your "production ready setup" always up to date.
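
As a sketch, the tagging side of that flow looks like this (the tag name and repository URL are just examples):

```shell
# tag the release and publish the tag (names are examples)
git tag -a 21032014-deploy -m "new API version"
git push origin 21032014-deploy

# on the fresh box: build from scratch, checked out at exactly that tag
git clone --branch 21032014-deploy https://example.com/yourapp.git
```

git clone --branch also accepts tag names, so the new box gets exactly the tagged codebase.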

After the deploy you move along and work on the next version. When you need to decommission an old version, you just kill the box. Each version's API should also be monitored for how many requests it actually gets, because if you have 20 versions up and some get 0 traffic, you can kill them.

Example where many versions use the same DB "ring" / "cluster": VMs using a DB

Rollback ?

How do you roll back? Simply check out the deployment tag and deploy :)

The gain ?

The gain in this strategy is isolation. If you need more boxes for your main version, you have production scripts ready, so all you have to do is spin them up and make sure the CNAME is load balanced.

Summary

I don't think this strategy has any hidden traps. You get a smaller codebase to work with, and the ability to upgrade your production boxes and application libs/frameworks, because you always move forward and deploy from scratch. You don't do an upgrade-the-production-box deploy but a fresh deploy, and a fresh deploy can be smoke-tested by a tester before being put into production. The deployment scripts make you ready, by default, to scale your app horizontally.

Again, what you do:

  • start: deploy the version to a fresh box
  • upgrade code/OS and add features
  • tag the new version
  • goto start:

IMHO everyone should move to this type of strategy for API versioning.

Cheers – Jakub Oboza

Golang New & Make

| Comments

When I first started playing with and learning Go, one of the first things I noticed was new and make. At first glance they seem to do the same thing, but there is a difference, and it is actually quite easy to explain.

Documentation

If we go to the Go documentation page at http://golang.org/pkg/builtin we can see every builtin function in Go, including new and make. About new we can read:

“The new built-in function allocates memory. The first argument is a type, not a value, and the value returned is a pointer to a newly allocated zero value of that type.”

and similarly about make:

“The make built-in function allocates and initializes an object of type slice, map, or chan (only). Like new, the first argument is a type, not a value. Unlike new, make’s return type is the same as the type of its argument, not a pointer to it.”

So we can see that new returns a pointer to a zeroed value of the type, while make returns an allocated and initialized value of the type itself. That is the difference.

So how could we implement a simplified new?

func newInt() *int {
  var i int
  return &i
}
someVar := newInt()

This is just like writing someVar := new(int).

In the case of make, we can only use it for map, slice and chan.

“Slice: The size specifies the length. The capacity of the slice is equal to its length. A second integer argument may be provided to specify a different capacity; it must be no smaller than the length, so make([]int, 0, 10) allocates a slice of length 0 and capacity 10. Map: An initial allocation is made according to the size but the resulting map has length 0. The size may be omitted, in which case a small starting size is allocated. Channel: The channel’s buffer is initialized with the specified buffer capacity. If zero, or the size is omitted, the channel is unbuffered.”

make creates the object and allocates all the memory. We can specify the size in the second parameter; slices additionally take a capacity, channels take a buffer size, and for maps the size hint may be omitted.

And make is the only way to create these objects.
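
A quick sketch of the three make forms from the quote above:

```go
package main

import "fmt"

func main() {
	s := make([]int, 0, 10)   // slice: length 0, capacity 10
	m := make(map[string]int) // map: size hint omitted
	c := make(chan int, 2)    // channel: buffer capacity 2

	fmt.Println(len(s), cap(s)) // 0 10

	m["answer"] = 42
	fmt.Println(len(m)) // 1

	c <- 1 // fits in the buffer, does not block
	c <- 2
	fmt.Println(<-c) // 1
}
```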

Summary

new is a way of getting a pointer to a freshly zeroed value, while make is for creating channels, maps and slices only.

Dogecoin Mining on Raspberry Pi LOLMODE

| Comments

!!!UPDATE: Download cpuminer as a tarball from http://sourceforge.net/projects/cpuminer/ and build from that; git HEAD is broken and will not let you run configure.

If you have an 800 Mhash/s GPU (graphics card), I'm sure you often think about making your Raspberry Pi a Dogecoin miner. Because why not :D? It is not effective, I warn you :) you will get around 0.34 khash/s, which is about 2000 times less than your GPU :) and about 500 times less than the CPU in your desktop. But it's easy and fun.

doge meme

I mainly did it for fun, to see how it would react and work, and whether heat would be a problem.

Again Why ?

Because DOGE. Dogecoin is THE NEW BLACK. It's the future! An irony of all cryptocurrencies :) and its value is based on memes, laughs and happiness. This is much better than Bitcoin :> at least for me.

Connect to your raspberry pi

You will need:

requirements
  λ automake
  λ gcc
  λ git-core
  λ libcurl

If, like me, you are using the 2014-01-07-wheezy-raspbian image, you will have almost everything ready :) the one thing to install is automake, which you can do by typing apt-get update followed by apt-get install automake. That is all you need.

Let's get on the box! (The default login/password for this image is pi:raspberry.)

10:54 kuba@pc12:~ λ ssh pi@192.168.1.22
pi@raspberrypi ~ $

(This IP address is just an example :D you will need a way to find yours on your network.)

Now all you need to do is clone the CPU miner.

git clone https://github.com/pooler/cpuminer.git

This will download the mining software onto your Raspberry Pi; next we need to compile it and run it!

compiling
pi@raspberrypi ~/Workspace $ cd cpuminer-2.3.2/
pi@raspberrypi ~/Workspace/cpuminer-2.3.2 $
./autogen.sh
... script will take some time ...
./configure CFLAGS="-O3"
make

This will compile and build the minerd binary, which is ready to start mining :). Well, you need to do one more thing: join a doge pool. I'm not going to go into the details of solo mining vs pool mining :) I'm just a simple miner :D

If you need more info on mining pools you should check this topic http://www.reddit.com/r/dogecoin/comments/1tn8yz/dogecoin_mining_pool_list/

At the time of writing this post, I'm personally using a small pool called Chunky Pool :).

Starting to mine!

Now that we have the software, let's actually mine something :). You will need to create a shell script that starts minerd on your Raspberry Pi. Mine looks like this:

run.sh
./minerd -a scrypt --threads 1 -o stratum+tcp://pool.chunky.ms:3333 -O dogedogedoge.pi:password

Make it executable and run it! Yay, you are a Dogecoin farmer now! CPU mining is not the most optimal, but hey… it's all just for lolz :)

Heat problems ?

For me, 2 hours of mining on the Raspberry Pi did not generate any extra heat or anything like that; it seems to be stable. I was worried it would go nuts in this department, but I was proven wrong.

Summary

This sucks in terms of speed. You will get close to no hash rate, and below 2 khash/s it's not even worth it; you won't even show up in the stats of any pool. Periodically you will actually hit the jackpot, which pumps you up to 2 khash/s for about 60 seconds, but that is just a blip: you scored a win, and it gives you a small fraction of a doge. Last I checked, you could get around 0.67 doge per hour of Raspberry Pi time. That is really, really bad, as a pretty basic GPU gets you to 600+ khash/s.

Cheers :) Hope it helps! Much fun, so currency.

Working With URL Quick Tip Network.Url

| Comments

I have a bigger text to post, but before I do, I have to split it into smaller parts so it will not be one long post about everything.

cat explaining stuff

Working with URL’s

Most of the time, when preparing to make an HTTP request in Haskell, e.g. using simpleHTTP, we need to build a request. We have several ways to do it; one of them would be to glue strings together, but that is ugly and not safe. Happily for us, there is the url package (cabal install url), which provides the Network.URL module. Here I will show a few quick tips on how to use it to work with URLs.

Step one: Import

The first thing we have to do is import the package :D and read a string to feed into the URL lib.

start.hs
import Network.URL

main = do
  putStrLn "give me some url:"
  rawUrl <- getLine
  let maybeUrl = importURL rawUrl in
    case maybeUrl of
      Just url ->
        putStrLn $ exportURL $ add_param url ("param_name","param_value")
      Nothing  -> putStrLn "Sorry but this doesn't look like url"

This is a super simple example. But how does it work? First of all, we have importURL, with the signature:

importURL :: String -> Maybe URL

This imports a URL given as a String into the url library and gives us back a Maybe URL. This is awesome! We get a type we can work with, yay! To leave the library and get a String back, we use exportURL, with the signature:

exportURL :: URL -> String

So we are only doing a simple chain of transformations, String -> Maybe URL -> URL -> String; nothing we can't handle!

The next important bit is the add_param function, with the signature:

add_param :: URL -> (String, String) -> URL

This does exactly what we would expect :D e.g. adding the two params ok=1 and query=haskell to the URL http://google.com builds http://google.com?query=haskell&ok=1.

Step two: More detailed example

I will reiterate our first example, showing a bit more, or just the same things in a different way. Let's try adding two params.

ue2.hs
import Network.URL

prepareUrl url =
  let newUrl = add_param url ("query","gimme cats pics") in
    add_param newUrl ("size", "any")

main = do
  putStrLn "give me some url:"
  rawUrl <- getLine
  let maybeUrl = importURL rawUrl in
    case maybeUrl of
    Just url ->
      putStrLn $ exportURL $ prepareUrl url
    Nothing  -> putStrLn "Sorry but this doesn't look like url"

You should run the code and see something like this:

result
 λ ./ue2
give me some url:
lambdacu.be
lambdacu.be?size=any&query=gimme+cats+pics

Summary

It is just a quick tip :) Network.URL has a few more functions, e.g. for checking whether the protocol is secure and whether the params are OK, but the stuff shown above is the main point of the lib.

More about this lib, of course, on Hackage: http://hackage.haskell.org/package/url-2.1/docs/Network-URL.html

…And a quick tip should be quick :)

forkIO and Friends

| Comments

This post is sponsored by the forkIO function and newChan in Haskell.

Catz

What is this forkIO ?

forkIO is part of the Control.Concurrent module, and as the docs say, it:

Sparks off a new thread to run the IO computation passed as the first argument, and returns the ThreadId of the newly created thread. The new thread will be a lightweight thread; if you want to use a foreign library that uses thread-local storage, use forkOS instead.

This is very neat if your program wants to use all the cores of your CPU, or at least be more responsive instead of waiting around for things to happen.

forkIO's type is:

forkIO :: IO () -> IO ThreadId

Channels to help !

forkIO would be enough to start working on stuff, but to make real use of threads we need a way of communicating with them. This opens up the design of our code to new things, like building workers. There are other ways of communicating with threads, like MVar, but IMHO channels win hard.

Channels are part of the Control.Concurrent.Chan module and are typed! Typed communication, yay!

The channel functions we need have the following type signatures:

newChan :: IO (Chan a)
writeChan :: Chan a -> a -> IO ()
readChan :: Chan a -> IO a

And that’s actually all we need. Let’s make some stuff working.

OK, so let's get this party started :)

I think most of the time it's better to explain stuff with examples.

Just spawn!

First thing we want is just to spawn!

just_spawn.hs
import Control.Concurrent

main = do
  forkIO $ do
    putStrLn "Yay! i'm in thread!"
  putStrLn "I'm important i'm in Main thread!"

This is a very simple way to spawn a lightweight thread via forkIO :>. As you can see, it takes a normal action, so you can go dirty!

forkIO takes an action and gives you back an IO ThreadId, so you can keep track of / kill threads you don't like.

Just Spawn

The previous example was cheating a bit, as it showed nothing really important, so let's make some crazy threads that print stuff now.

crazygals.hs
import Control.Concurrent

fanOfGarbage = do
  putStrLn "Garbage is best bad evar!"
  fanOfGarbage

fanOfClassicMusic = do
  putStrLn "Dude Garbage is garbage"
  fanOfClassicMusic

main = do
  putStrLn "hit it guys!"
  forkIO fanOfGarbage
  forkIO fanOfClassicMusic

Compiling and running this gives you only "hit it guys!", because the main thread exits and the child threads die with it! Let's fix that so we can say when they need to stop! :>

crazygals2.hs
import Control.Concurrent

fanOfGarbage = do
  putStrLn "Garbage is best bad evar!"
  fanOfGarbage

fanOfClassicMusic = do
  putStrLn "Dude Garbage is garbage"
  fanOfClassicMusic

main = do
  putStrLn "hit it guys!"
  forkIO fanOfGarbage
  forkIO fanOfClassicMusic
  getLine
  putStrLn "Thank You Sir for stopping them!"

After launching it, you can see each thread spamming its prints ;> and it works until you hit enter. Cool, so we have something working.

How does it work? First of all, we use forkIO to spawn the threads, and this time each "thread" function lives separately. Each of them runs forever, like a crazy music fan :). Here we can simplify things by using forever from Control.Monad.

crazygals3.hs
import Control.Concurrent
import Control.Monad (forever)

fanOfGarbage = do
  forever $ do
    putStrLn "Garbage is best bad evar!"

fanOfClassicMusic = do
  forever $ do
    putStrLn "Dude Garbage is garbage"

main = do
  putStrLn "hit it guys!"
  forkIO fanOfGarbage
  forkIO fanOfClassicMusic
  getLine
  putStrLn "Thank You Sir for stopping them!"

forever is part of Control.Monad; as the name says, it performs an action forever ;) useful for things like workers or stuff that has to happen all the time. Its type is forever :: Monad m => m a -> m b.

Something useful, add channels

Cool, so now we have the basics of spawning a thread using forkIO, but to have something we can actually use in real life we need some sort of communication. I want to present something I feel is useful in almost every Haskell program: a channel combined with forkIO.

If you have ever programmed in Erlang or Go, you will know what I'm talking about: channels are very similar to message passing. Basically, a channel is a pipe that you can write to and read from in different threads/processes. Because threads do not run sequentially, we can't rely on ordinary variables or predict when results will be ready, so channels are one of the mechanisms we can use to get data out of other threads.

Channels are amazing because they are flexible :) and very natural. The basic principle is simple: you write to the channel in one thread and read from it in another :)

But let's make an example that shows how powerful this is.

chanz.hs
import Control.Concurrent
import Control.Monad (forever)
import Control.Concurrent.Chan

gossipGirl chan = do
  forever $ do
    gossip <- readChan chan
    putStrLn gossip

main :: IO ()
main = do
  putStrLn "Lets do some gossips"
  gossipChan <- newChan -- lets make new chan
  forkIO $ gossipGirl gossipChan -- spawn gossipGirl
  writeChan gossipChan "Garbage is garbage!"
  writeChan gossipChan "Garbage is garbage for reals!"
  getLine
  putStrLn "Thank You Sir for Info"

Nice! What happens here? :) The new things are newChan, which creates the channel we use to talk to our gossipGirl; readChan, which reads data from the channel; and writeChan, which writes to it. Very simple :) Now let's generalize our worker into something we can use in the next mini tutorials. A worker.

A Worker

A simple worker will take a channel and an action as parameters and spawn a thread; this will help us understand how the whole thing works (if we haven't got it by now :)).

import Control.Concurrent
import Control.Monad (forever)
import Control.Concurrent.Chan

worker chan foo = do
  forkIO $ forever $ foo chan

worker2 action = do
  forkIO $ forever action

gossipGirl chan = do
    gossip <- readChan chan
    putStrLn gossip

main :: IO ()
main = do
  putStrLn "Lets do some gossips"
  gossipChan <- newChan -- lets make new chan
  gossipChan2 <- newChan -- lets make new chan
  worker gossipChan gossipGirl -- spawn gossipGirl

  writeChan gossipChan "Garbage is garbage!"
  writeChan gossipChan "Garbage is garbage for reals!"

  worker2 (gossipGirl gossipChan2) -- woker2 2 girl!
  writeChan gossipChan2 "Umkay"
  writeChan gossipChan2 "Yez!"

  getLine
  putStrLn "Thank You Sir for Info"

Yes, you can build workers however you want. I would not spend time trying to build an uber-generic worker, as they are usually custom and you don't need much time to make one :). Usually you have worker types for particular tasks, e.g. databaseWriters, logWriters, counters etc.

Now, why would you want all this forkIO stuff? Here is the reason. Cat simulation!

cat.hs
import Control.Concurrent
import Control.Monad (forever)
import Control.Concurrent.Chan

data AskForMeow = GibFood | Smile

meowMe chan chanBack = do
  niceTry <- readChan chan
  case niceTry of
    GibFood -> writeChan chanBack "Meow"
    Smile   -> writeChan chanBack "No"

cat action = do
  forkIO $ forever action


main :: IO ()
main = do
  putStrLn "Hey kitty kitty"

  foodInputChan <- newChan
  catOutputChan <- newChan

  cat $ meowMe foodInputChan catOutputChan

  writeChan foodInputChan Smile
  response <- readChan catOutputChan
  putStrLn response

  writeChan foodInputChan GibFood
  response' <- readChan catOutputChan
  putStrLn response'

  getLine
  return ()

Summary

I hope this gives a little insight into forkIO and channels, and that you will use them in your code. They are super simple to add, they work miracles, and I love them. And no, you don't need to be an expert on Kleisli arrows to use them ;).

Cheers!

Erlang E17 and Second Edition of Programming Erlang

| Comments

Today I got my hands on Programming Erlang, 2nd edition, and I have to drop a spoiler right now: the book is great. I remember reading the first edition; it was a great book too. I enjoyed it, it got me into Erlang, and I was a really happy person.

This book talks about R17. From what I read, R17 could have been named Erlang 2.0; the changes are just amazing.

My face after reading changes…

Vomiting Rainbow

Page 75 Maps

This chapter hits you in the face! I don't have R17 yet to check more stuff, but this looks amazing. OK, let's have a look. In Erlang R17 they are introducing MAPS, also known as key-values / hashes / assocs / dictionaries: basically a data structure that lets you store a value under a key and retrieve the value if you know the right key.

Let’s have a look at the syntax.

maps
λ  Kv = #{a => 1, b => 2, c => 3}.

This creates a map with three keys: a, b and c. Easy. Of course maps are immutable data structures, so if you want to add something you need to do it like this:

maps2
λ  Kv2 = Kv#{d => 4}.

So it is similar to updating records, but IMHO the true power is in retrieving data and pattern matching on maps. YES, PATTERN MATCHING.

maps3
λ  #{b := B, c := C} = Kv2.
λ  B.
2

Isn’t this amazing?! You can use maps like in Ruby and pattern match on them. And I was just about to scream with joy when…

i saw this.

Maps -> JSON

Yes, you can serialize and deserialize maps to and from JSON. WTF?! Yes.

maps4
λ maps:to_json(Map) -> Bin

Calling maps:to_json turns a map into JSON, and by calling

maps5
λ maps:from_json(Bin) -> Map
λ maps:safe_from_json(Bin) -> Map

you get the option to load maps from binaries! C’mon, this is amazing. The safe version will blow up if someone tries to flood the VM with non-existing atoms. This matters because atoms are never GC’ed!

Yes, the new maps are amazing! I love them :> This solves so many problems and removes so many situations where you had to write boilerplate code. Amazing work!

Page 287 Programming with Websockets and Erlang

This is also new, and I love this chapter, as it shows you how to tackle a real thing: websockets :). It is a very cool addition to the book, and also a free sample, so you can read it on your own before buying the book.

http://media.pragprog.com/titles/jaerlang2/websockets.pdf

Addition of Dialyzer and Rebar

This is great from an empirical point of view, as Joe shows how to use rebar and build real-life code using GitHub. This is a great thing and worth reading. I love it :) You get a real-life example… I get here everything I lacked in the previous book.

Summary

Every single new thing in the book is great. The stuff about the R17 version of Erlang is just great. I don’t have R17 on my box yet, but this will be by far the best release of Erlang. In the chapter about maps he talks about looking at Ruby; I think the syntax is a bit inspired by Ruby’s builtin hash syntax. This is amazing, and in the future it will solve so many problems and make many APIs much more useful. You no longer have to type a ton of _ when you want to pattern match on a big tuple; you can match on a key.

I spent an hour reading the book on the train from London to Epsom, and all I can say is: it is great! I love it and I love the new changes!

Redis SETEX to the Rescue

| Comments

This is the next mini entry about a small thing that makes me happy in terms of changes in Redis. Nothing new :D but still nice :D.

SET, EXPIRE, CHECK, ???, REPEAT

While using Redis there is a very common task we do: a SET followed by an EXPIRE. We do this when we want to cache some data for a period of time.

SET "user:1:token" "kuba"
EXPIRE "user:1:token" 5

This will set the key user:1:token to the value "kuba" and then set it to expire in 5 seconds. We can check the time to live on this key using the TTL command.

TTL "user:1:token"

This will return the number of seconds this key will remain valid for, or a negative number if it is not valid anymore.

SETEX

SETEX, introduced in Redis 2.0.0, lets you do both SET and EXPIRE in one atomic command. How do we use it? It’s simple!

SETEX <key> <seconds> <value>

Note the argument order: it is key, seconds, value, not key, value, seconds! :D Example usage:

SETEX "key:to:home" 1500 "4b234ferg34ret34rasd32rs"

This will set the key “key:to:home” to the value “4b234ferg34ret34rasd32rs” for 1500 seconds. Pretty easy.

PSETEX

Since Redis 2.6.0 we can use a new command, PSETEX. It is the same as SETEX except it takes milliseconds instead of seconds, so you can be more accurate in low-latency situations.

PSETEX "key:to:home" 15000 "4b234ferg34ret34rasd32rs"

This will set “key:to:home” to expire in 15 seconds. It is important to watch the time units when reading the value back: the TTL command always reports seconds, while its counterpart PTTL reports milliseconds.
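To make the expiry semantics concrete, here is a toy in-memory sketch of SETEX/PSETEX/TTL behaviour in Python. This is only an illustration of the semantics, not a Redis client; the ToyCache class and its method names are made up for this example.

```python
import time


class ToyCache:
    """A tiny in-memory cache mimicking Redis SETEX/PSETEX/TTL semantics."""

    def __init__(self):
        self._data = {}  # key -> (value, deadline on the monotonic clock)

    def setex(self, key, seconds, value):
        # SET and EXPIRE as one operation: store value plus a deadline in seconds
        self._data[key] = (value, time.monotonic() + seconds)

    def psetex(self, key, millis, value):
        # Same idea, but the lifetime is given in milliseconds
        self._data[key] = (value, time.monotonic() + millis / 1000.0)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, deadline = entry
        if time.monotonic() >= deadline:  # lazily expire on read
            del self._data[key]
            return None
        return value

    def ttl(self, key):
        # Like Redis TTL: remaining whole seconds, or -2 if the key is gone
        entry = self._data.get(key)
        if entry is None or time.monotonic() >= entry[1]:
            return -2
        return int(entry[1] - time.monotonic())


cache = ToyCache()
cache.setex("user:1:token", 5, "kuba")
print(cache.get("user:1:token"))  # "kuba" while the key is still alive
print(cache.ttl("user:1:token"))  # remaining whole seconds, at most 5
```

Real Redis expires keys server-side rather than lazily in the client, but the observable behaviour of SETEX, PSETEX and TTL is the same shape as above.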

Cheers!

How to Use Twitter for Something Else Than Pics of Cats

| Comments

It has been a long time since the last post. I have many posts in the pipeline on ODMs, genetic algorithms, streaming algorithms, Clojure and Haskell that I was working on, but nothing is finished ;/. Need time!

I’m using Twitter

Most of the time it is just updates from friends, sometimes some news from the hacker world that could be interesting. I use Twitter in a bit of an odd way: first of all, I post links to things I read during my commute to work :), and I use Twitter to post alerts from my apps to me.

Why twitter ?

I think Twitter is great because it pushes things to my phone as well, so instead of building a really complex notification infrastructure I can use Twitter to do everything.

How ?

This is simple: to every app I make, I add a bit of code to handle Twitter, e.g. a method like this in Ruby:

def alert_notify!
  if @config[:notification_list]
    notifier = Twitter::Client.new(
      :oauth_token => @config[:oauth_token],
      :oauth_token_secret => @config[:oauth_token_secret]
    )
    @config[:notification_list].each do |entry|
      notifier.direct_message_create(entry, "#{Time.now}: I crashed! Please go to the logs and see what happened!")
    end
  end
end

This is part of a TwitterDriver/Agent class. I wrap everything in an exception handler; when I get an exception, I log it and send a notification to myself via Twitter direct message.
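The same wrap-and-notify pattern can be sketched as a decorator in Python. Everything here is hypothetical glue: notify is a stand-in for whatever channel you use (a Twitter DM in this post), and alert_on_crash is a made-up name for the wrapper.

```python
import functools
import logging
import time


def notify(recipient, message):
    """Stand-in for a real notifier (e.g. a Twitter direct message)."""
    print(f"to {recipient}: {message}")


def alert_on_crash(recipients):
    """Wrap a function so any exception is logged and reported, then re-raised."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:
                logging.exception("worker crashed")
                for recipient in recipients:
                    notify(recipient, f"{time.ctime()}: I crashed! Please check the logs.")
                raise  # the caller still sees the original failure
        return wrapper
    return decorator


@alert_on_crash(recipients=["kuba"])
def risky_job():
    raise RuntimeError("boom")
```

Calling risky_job() logs the traceback, fires a notification to each recipient, and re-raises the original exception, which is the same behaviour as wrapping the worker body in a rescue block that calls alert_notify!.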

I love this way of using Twitter, because a year ago I thought it was only for sharing very random, not-so-useful messages about cats and Instagram pictures.

Try it yourself!