Gastón Ramos

"Most gulls don't bother to learn more than the simplest facts of flight: how to get from shore to food and back again. For most gulls, it is not flying that matters, but eating. For this gull, though, it was not eating that mattered, but flight. More than anything else in the world, Juan Salvador Gaviota loved to fly."

Tag: cloudfoundry

How to integrate Cloud Foundry with Gitlab

This article explains how to use gitlab and log in against a cloud foundry authorization server.
Basically you will need three components:

  • uaa
  • gitlabhq
  • omniauth-uaa-oauth2

Installing UAA

In the real world you probably already have a uaa server installed, but for the purposes of this article let's start by cloning and installing a fresh uaa:

git clone https://github.com/cloudfoundry/uaa.git
cd uaa
mvn install

Gitlab installation

Ok, with this we have the uaa server installed, so now we will install gitlab. I'm not going to explain the whole installation
process, which you can find in the gitlabhq readme; for the purposes of this article you only need to install the required gems and that's all.

git clone https://github.com/gitlabhq/gitlabhq.git
cd gitlabhq
cp config/database.yml.mysql config/database.yml
cp config/gitlab.yml.example config/gitlab.yml
bundle install

Omniauth uaa strategy installation

Now we have a basic installation of gitlab, but we need to add the omniauth-uaa-oauth2 strategy so that omniauth is able to log in against uaa.
First we need to clone omniauth-uaa-oauth2 and install it:

git clone https://github.com/cloudfoundry/omniauth-uaa-oauth2.git
bundle install
bundle exec gem build omniauth-uaa-oauth2.gemspec
gem install omniauth-uaa-oauth2-*.gem

Enable omniauth and cloudfoundry strategy

Now let's go back to the gitlabhq dir
and add these lines to the Gemfile:

gem 'cf-uaa-lib', '1.3.10'
gem 'omniauth-uaa-oauth2'

And then enable oauth in gitlab.yml: basically we should change omniauth enabled to true and add cloudfoundry to the omniauth providers.
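As a sketch of what that section can look like (the app_id/app_secret values and the uaa URLs are placeholders you have to register with your uaa; the key names follow the gitlab.yml provider format of that era, so double-check against your gitlab version):

```yaml
# config/gitlab.yml (excerpt) -- placeholder credentials and URLs, adjust for your setup
omniauth:
  # enable the omniauth login buttons on the sign-in page
  enabled: true
  providers:
    - { name: 'cloudfoundry',
        app_id: 'YOUR_CLIENT_ID',
        app_secret: 'YOUR_CLIENT_SECRET',
        args: { auth_server_url: 'http://localhost:8080/uaa',
                token_server_url: 'http://localhost:8080/uaa' } }
```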

Now that we have all the required components installed and configured, we should make sure that uaa and gitlab are up ("mvn tomcat:run" for uaa and "rails s" for gitlab), then open a browser and go to http://localhost:3000: you should see the gitlab login page with a "Sign in with cloudfoundry" button on it.


That’s all.

Update: I've made a video showing the login screen.

Deploying Cloud Foundry with bosh-lite

This is my tutorial on how to deploy cloud foundry using bosh-lite. As you may know, there is a tutorial in the bosh-lite Readme file, but there are some tiny steps it does not cover and that I always forget, which is why I'm writing my own.

git clone https://github.com/cloudfoundry/bosh-lite.git
cd bosh-lite
vagrant plugin install vagrant-omnibus
librarian-chef install
vagrant up
bosh target 192.168.50.4

bosh upload stemcell latest-bosh-stemcell-warden.tgz

All the previous steps are explained in the bosh-lite Readme, where you can find the original tutorial.

Now we are going to clone the cf-release repo, which is the bosh release for cloudfoundry:

cd ..
git clone https://github.com/cloudfoundry/cf-release.git

and now we need to point bosh at our release dir:

export CF_RELEASE_DIR=~/cloudfoundry/cf-release/

Now we can upload the latest cloud foundry release, which at the moment is 145:

bosh upload release ../cf-release/releases/cf-145.yml

Uploading release
release.tgz:   100% |oooooooooooooooooooooooooooooooooooooo|   1.1GB  53.3MB/s Time: 00:00:21
HTTP 500: 

Ooops :( I got an error. Trying to figure out where the bosh packages are stored, after a bit of research
I found that bosh-lite uses simple_blobstore_server, a small Sinatra API that stores the uploaded blobs.

Finally I restarted the VM, tried again, and it just worked:

bosh releases

| Name | Versions | Commit Hash |
| cf   | 145      | 41733e43+   |
(+) Uncommitted changes

Releases total: 1

So, we uploaded our cloud foundry bosh release; now we need a deployment manifest. You can check which manifest is currently set with bosh status:


bosh status


  Name       Bosh Lite Director
  Version    1.5.0.pre.1117 (3300587c)
  User       admin
  UUID       365eb26d-146e-4d12-888f-d9249dbef375
  CPI        warden
  dns        enabled (domain_name: bosh)
  compiled_package_cache enabled (provider: local)
  snapshots  disabled

  Manifest   ~/cloudfoundry/bosh-lite/manifests/cf-manifest.yml

point bosh at the manifest and run the deploy:

bosh deployment ~/cloudfoundry/bosh-lite/manifests/cf-manifest.yml
bosh deploy

You can list the VMs created by bosh to check that the deploy worked:

bosh vms

Deployment `cf-warden'

Director task 228

Task 228 done

| Job/index                          | State   | Resource Pool | IPs |
| cloud_controller/0                 | running | common        |     |
| dea_next/0                         | running | dea           |     |
| health_manager/0                   | running | common        |     |
| loggregator-trafficcontroller_z1/0 | running | common        |     |
| loggregator-trafficcontroller_z2/0 | running | common        |     |
| loggregator_z1/0                   | running | common        |     |
| loggregator_z2/0                   | running | common        |     |
| login/0                            | running | common        |     |
| nats/0                             | running | common        |     |
| postgres/0                         | running | common        |     |
| router/0                           | running | router        |     |
| syslog_aggregator/0                | running | common        |     |
| uaa/0                              | running | common        |     |

VMs total: 13

and that’s all!

If you want to review some bosh concepts, take a look at the bosh documentation.

Installing Warden on debian Wheezy

What is warden?

“The project’s primary goal is to provide a simple API for managing isolated environments. These isolated environments — or containers — can be limited in terms of CPU usage, memory usage, disk usage, and network access. As of writing, the only supported OS is Linux.”

You can read more in the warden repository's Readme.

Warden is a key component of the Cloud Foundry ecosystem: when you push a new app to Cloud Foundry, a new container is created for it.

So, let's get to the point: the idea is to install warden on a debian wheezy system. I added debian support to warden in my fork in the Altoros repo.

In order to make a fresh installation we are going to use vagrant with the virtualbox provider. Let's start by downloading the vagrant box:

axel -n 10

Then add the new box to vagrant:

vagrant box add debian-wheezy64

Then list all the available boxes to check that it was added correctly:

vagrant box list

and you should see something like this:

debian-wheezy64      (virtualbox)
precise64            (virtualbox)

Then let's create a new folder and set up our vagrant VM
using the box that we just added:

mkdir testing-warden-on-debian
cd testing-warden-on-debian
vagrant init debian-wheezy64
vagrant up

Now we are ready to start installing warden in the VM:

vagrant ssh
sudo apt-get install -y build-essential debootstrap quota

Edit /etc/fstab and add this line (then run "sudo mount -a" or reboot so the cgroup filesystem gets mounted):

sudo vi /etc/fstab
cgroup  /sys/fs/cgroup  cgroup  defaults  0   0

Now, back on the host, clone the warden repo (my fork mentioned above) and check out the add-debian-rootfs branch:

cd warden
git checkout add-debian-rootfs

Add warden as a shared folder: edit the Vagrantfile and add this line inside the configuration block:

config.vm.synced_folder "warden", "/warden"
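For reference, a minimal Vagrantfile sketch with that line in place (the box name comes from the steps above; the rest is stock vagrant boilerplate, so merge the synced_folder line into whatever `vagrant init` generated for you):

```ruby
# Vagrantfile (sketch) -- uses the debian-wheezy64 box added earlier
Vagrant.configure("2") do |config|
  config.vm.box = "debian-wheezy64"
  # mount the warden checkout from the host at /warden inside the VM
  config.vm.synced_folder "warden", "/warden"
end
```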

then reload the VM so the shared folder gets mounted, log in with ssh and install all the required gems:

vagrant reload
vagrant ssh
cd /warden
sudo gem install bundler
sudo bundle

Edit config/linux.yml and change container_rootfs_path;
if you don't change it, the setup will be lost after you reboot the VM, because it points to /tmp by default.
I created a new dir, /tmp-warden, and pointed the rootfs path to it.
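As a sketch (the exact key names live under the server section of warden's config/linux.yml, so double-check them in your checkout; the /tmp-warden paths are just the example dir from above):

```yaml
# config/linux.yml (excerpt) -- hypothetical paths, pick a dir that survives reboots
server:
  container_rootfs_path: /tmp-warden/rootfs
  container_depot_path: /tmp-warden/containers
```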

After that you can run the setup:

sudo bundle exec rake setup[config/linux.yml]

and when it finishes you will be able to start the warden server:

sudo bundle exec rake warden:start

and then run the client to be able to manage containers:

bundle exec bin/warden

Let's run some basic warden commands.

Create two new containers:

bundle exec bin/warden

warden> create
handle : 171hpgcl82u
warden> create
handle : 171hpgcl82v

List the already created containers:

warden> list
handles[0] : 171hpgcl82u
handles[1] : 171hpgcl82v

You can see the containers' directories; replace [tmp-warden] with the folder that you set in config/linux.yml:

ls -l /[tmp-warden]/warden/containers/

drwxr-xr-x 9 root root 4096 Jul 15 13:55 171hpgcl82u
drwxr-xr-x 9 root root 4096 Jul 15 13:58 171hpgcl82v
drwxr-xr-x 2 root root 4096 Jul 15 12:18 tmp

If you take a look at the logs while you create a container, you can see that the flow is, more or less:

1. method: "set_deferred_success"

2. Create the container:

   /home/gramos/src/altoros/warden/warden/root/linux/ /[tmp-warden]/warden/containers/171hpgcl831

3. method: "do_create"

4. Start the container

5. method: "write_snapshot"

6. method: "dispatch"


And that's all! If you have any comments, feel free to post them here.