Secure salted passwords with Ruby

Step one: understand

So I've been playing with Sinatra and Redis over the past few months, and as part of my more professional side I'm building a blog platform for my other website. As a result I wanted user authentication, to ensure that I, and only I, could update it. There's a chance that at some point I may want to allow others to sign up and log in, but quite frankly not yet, so this is overkill; nonetheless we learn and develop, so here's the first step in building it from scratch.

Understanding a few key concepts around authentication is essential, so first some history lessons and why those older approaches are a bad idea today. Back in the day, way back when, people were trusting and assumed no evil of this world; we call these people naive. Predominantly they relied on servers being nice and snug behind firewalls and locked cabinets, so passwords were saved in plain text in a database or a file. Either way it's human readable, so the attack is trivial. Even assuming you're not silly enough to run the database over an unencrypted network connection and instead run it locally: if I have access to your machine I will probably find those passwords in a matter of minutes, and with physical access they're all mine within the hour. On a side note, if you're using plain text passwords anywhere you're either an idiot or in the early stages of testing…

After realising this approach was bad, people thought, "I can protect this: I will hash the password!" Wonderful, so you use MD5 or SHA, it doesn't matter which, but for this example let's say you chose MD5. What you have done here is very cunningly stored a password that is not human readable. However, be ashamed of yourself if you still think this is secure, and here's why. Yes, there are a lot of possible digests (2^128, roughly 340 trillion trillion trillion), so the chances of a clash seem too slim to worry about. Wrong, just plain wrong. Here's why it's bad. First, people are predictable and for some strange reason we typically use words to make up passwords, so your password is probably something like "fridge". That means if I, as Mr Hacker, get hold of your DB, I just run a dictionary-style attack, hashing thousands of common words into MD5 sums, and before long I have a word that produces the same MD5 sum as your password. Better still, because you're human it will probably work on every other site you use. Nice. The second problem is that I don't even have to do that grunt work: people have already done it for me, the results are called rainbow tables, and I can trivially download one and do a simple lookup to find a phrase that hashes to the same value as yours. Don't think it's an issue? Neither did LinkedIn, and that worked out perfectly for them.
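
To make that concrete, here's a tiny throwaway illustration (not from this project's code, just Ruby's standard Digest library): the same word always gives the same MD5 sum, so a stolen table of unsalted hashes falls to a simple dictionary lookup.

require 'digest'

# Unsalted MD5: the same input always produces the same digest,
# so a precomputed dictionary or rainbow table cracks it instantly.
stolen_hash = Digest::MD5.hexdigest('fridge')

dictionary = %w[password letmein dragon fridge]
cracked = dictionary.find { |word| Digest::MD5.hexdigest(word) == stolen_hash }
puts "Cracked: #{cracked}"  # => Cracked: fridge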

Excellent. So now we're getting to the point where we need something better, and this is where salts come in. The basic concept is that you type in a password, and I as the server concatenate a random string to it and hash the result. This overcomes the rainbow table, because the likelihood of someone having already generated a rainbow table for my particular random salt is vanishingly small. However, let's assume I'm a big web provider that got hacked and lost everyone's password hashes. "Ha! I used a salt, good luck cracking that!" they say. Sure, but a few points: first, the salt is stored in the DB right next to the hashes; second, computers are quicker than they used to be. With the advances in GPU cracking and cheap Amazon boxes, password cracking is more or less a matter of time and money; how much of each is the key.
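
As a rough sketch of the idea (purely illustrative, with SHA-256 picked arbitrarily): a single server-side salt is prepended to every password before hashing.

require 'digest'
require 'securerandom'

# One shared, server-side salt: off-the-shelf rainbow tables no longer apply,
# but every password is still protected by the same salt stored in the DB.
SITE_SALT = SecureRandom.hex(32)

def salted_hash(password)
  Digest::SHA256.hexdigest(SITE_SALT + password)
end

puts salted_hash('fridge')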

Using a single salt for all passwords is still bad; the best approach I know of today is to use a unique salt for every password. Even if every user has the same password, they all end up with unique hashes, and that is the cornerstone. For every user that signs up, generate a secure random salt, add it to the password and generate a hash. An attacker then has to spend a massive amount of time to crack each individual password, so unique salts and hashes: very good.
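
Again just as an illustration (a hypothetical helper, not the real lib): two users with the identical password end up with completely different stored hashes because each gets their own salt.

require 'digest'
require 'securerandom'

# A fresh salt per user means identical passwords give different hashes.
def new_credential(password)
  salt = SecureRandom.hex(32)
  { salt: salt, hash: Digest::SHA256.hexdigest(salt + password) }
end

alice = new_credential('fridge')
bob   = new_credential('fridge')
puts alice[:hash] == bob[:hash]  # => false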

The code

So, history lesson over, now some code. As identified above, the best approach is to generate a unique salt for every user and then hash their password with it. This isn't hard, but you do need to ensure the salt is cryptographically random, or you may end up with something far less secure than you thought.

Have a look at this gist, Auth.rb. One useful point: the salt should be large, so if you produce a 32-byte hash your salt should be at least 32 bytes as well. It's a straightforward lib that will generate the salted hashes and let you verify passwords against them later.
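
The gist itself isn't reproduced here, but going by how it's called in the code below (gen_salt, gen_hash, hash_ok? and get_salt, with the salt recoverable from the stored hash), a minimal version might look something like this; the real Auth.rb may well differ in digest choice and storage format.

require 'digest'
require 'securerandom'

# A minimal sketch of the Auth lib, inferred from how it is used below.
# The stored "hash" is the salt and the digest joined with a colon, so the
# salt can be recovered later with get_salt.
module Auth
  SALT_BYTES = 32  # salt at least as large as the 32-byte SHA-256 digest

  def self.gen_salt
    SecureRandom.hex(SALT_BYTES)
  end

  def self.gen_hash(salt, password)
    "#{salt}:#{Digest::SHA256.hexdigest(salt + password)}"
  end

  def self.get_salt(hash)
    hash.split(':').first
  end

  def self.hash_ok?(hash, password)
    gen_hash(get_salt(hash), password) == hash
  end
end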

Here’s an example of it in use.

require_relative 'auth'

# This is from the web front end, so you can see how to use the lib to check a hash
def user_auth_ok?(user, pass)
  # Get the user's hash from the DB
  user_hash = $db.get_user_hash(user)
  # Validate the hash: true if the user authenticated, false otherwise
  Auth.hash_ok?(user_hash, pass)
end

# As a side point, this bit is from the backend that writes the user to the DB.
# If it were the chunk the website used it would simply call a method that took a username and password,
# so probably not that useful...
def add_user(username, password)
  # Allocate a new user id, then store the salted hash against the user's key
  uid = @redis.incr('users')
  salt = Auth.gen_salt
  hash = Auth.gen_hash(salt, password)
  $logr.debug("Salt = #{salt}, hash = #{hash}")
  @redis.set("users:#{uid}:#{username}", hash)
end

def edit_user_password(username, password, new_password)
  user = @redis.keys("users:*:#{username}")
  if user.size < 1
    $logr.debug("No user #{username} found")
  else
    # Validate the old password first
    hash = @redis.get(user[0])
    if Auth.hash_ok?(hash, password)
      $logr.info("Setting a new password for #{username}")
      new_hash = Auth.gen_hash(Auth.get_salt(hash), new_password)
      @redis.set(user[0], new_hash)
    else
      $logr.info("Password incorrect")
      # TODO - Need exception classes to raise an Auth failure
    end
  end
end

Releasing your first DevOps application

First the worry

When it comes to releasing the first version of an application, it's always worth weighing up the constraints of your environment and the time frame you have to deliver in against the skill set available. Inevitably, as a skilled DevOps professional you want to do a good job; well done you. However, you have to be strong and realise it is not about delivering perfection from day one, but about the journey you must take to get there.

I recall the first version 1 deployment I did, and every one I've done since has been better, be it a bit more focused or starting from a better point. That very first one was all over the place: no real configuration management and quite a few manual steps, but a well written process. Unfortunately that project remained in the depths of secrecy and I ended up moving on.

I constantly see over-engineering and complication added to projects, and the root cause of this is worry. I know, I used to be there doing it; it is difficult to step back and be objective about what the business needs, but as a DevOps professional that is your job. When delivering a solution, try to remember these things to help you worry less and focus more:

  1. Before being perfect you must first just “be”
  2. When in doubt, do less
  3. If you do not know when the site is down you will not have a job
  4. Always have a backup

Then the delivery

The above list is rather useful; use it as a bit of guidance. Starting with point 1, some elaboration: when delivering a solution, the most important thing is to deliver the solution. So many people forget this part and focus on the technicalities, or on whether or not it is the "best" way to deliver it. In reality, who cares? No one will, when you are in that meeting explaining why you're late and have not got a working solution.

Getting stuck in the detail is a horrible place to be: sometimes things get too involved or too complicated, leading to much discussion, and inevitably the solution comes out complicated and takes a while to deliver. In these situations point 2 comes in: just do less. It sounds silly, but if you're rushing around struggling to meet a deadline then you need to take things out of scope and focus on what the solution actually needs to be. Maybe you have to have a manual step; at a later point you can automate it.

The last two points are along the same lines, and those lines are the things that get you fired. If your site is down and you don't know it, that's a bad thing; likewise, losing data is considered pretty poor. However, do not fall into the trap of assuming you must have full monitoring of every server, or that the backup needs to be anything more than a cron job for now.

The "trick" is always in separating what needs to be done from what could be done; by focusing on what needs to be done first, you can then come back and improve the rest.

Build, improve, rinse, repeat

As touched on earlier, you are allowed to cut corners and focus on what is necessary; failure to do this will just lead to delays and a business that rapidly gets turned off DevOps. The first release you do can be complete and utter crap. It can be all manual, with nothing more than a simple web check on port 80, and that is okay. The important thing is that you deliver to the deadline and have mitigated the main risks: not knowing when the site is down, or the potential loss of data. Heck, even single points of failure are allowed, as long as you can clearly identify what the risk is and what you would do if it happened. In fact, I'd almost go as far as to say this is expected.

The key, as always, is to improve, little and often. Step 1: manual. Step 2: automate what is easy. Step 3: automate the rest. It has never been, and will never be, about perfection from version 0.1; you just need to improve a little each time, in line with that golden view of what perfection is. As long as you know what the end goal is you can work towards it; just don't get carried away trying to deliver it all in the first version.