Gastón Ramos

"Most gulls don't bother to learn more than the simplest facts of flight: how to get from shore to food and back again. For most gulls, it is not flying that matters, but eating. For this gull, though, it was not eating that mattered, but flying. More than anything else in the world, Juan Salvador Gaviota loved to fly."

Category: cloudfoundry

How to integrate Cloud Foundry with GitLab

This article explains how to set up GitLab to log in against a Cloud Foundry authorization server.
Basically, you will need three components:

  • uaa
  • gitlabhq
  • omniauth-uaa-oauth2

Installing UAA

In the real world you probably already have a UAA server installed, but for the purposes of this article let's clone and install a fresh UAA:

git clone
cd uaa
mvn install

Gitlab installation

OK, with this we have the UAA server installed, so now we will install GitLab. I'm not going to explain the whole installation
process (you can find it in the gitlabhq README); for the purposes of this article you only need to install the required gems, and that's all.

git clone
cd gitlabhq
cp config/database.yml.mysql config/database.yml
cp config/gitlab.yml.example config/gitlab.yml
bundle install

Omniauth uaa strategy installation

Now we have a basic installation of GitLab, but we need to add the omniauth-uaa-oauth2 strategy so that OmniAuth can log in against UAA.
First we need to clone omniauth-uaa-oauth2 and install it:

git clone
bundle install
bundle exec gem build omniauth-uaa-oauth2.gemspec
gem install omniauth-uaa-oauth2-*.gem

Enable omniauth and cloudfoundry strategy

Now let's go back to the gitlabhq dir and add these lines to the Gemfile:

gem 'cf-uaa-lib', '1.3.10'
gem 'omniauth-uaa-oauth2'

Then enable OAuth in gitlab.yml: set omniauth enabled to true and add cloudfoundry to the omniauth providers, so the omniauth section of gitlab.yml should look like this:
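The exact contents depend on your GitLab version, but the omniauth section would be roughly the sketch below; the app_id and app_secret values are placeholders and must match an OAuth client registered in your UAA (check the omniauth-uaa-oauth2 README for any extra provider arguments it needs):

```yaml
# gitlab.yml (fragment) -- sketch only; app_id/app_secret are placeholders
omniauth:
  # allow logging in via OmniAuth providers
  enabled: true
  providers:
    - { name: 'cloudfoundry', app_id: 'YOUR_UAA_CLIENT_ID', app_secret: 'YOUR_UAA_CLIENT_SECRET' }
```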

Now that we have all the required components installed and configured, we should make sure UAA and GitLab are up ("mvn tomcat:run" for UAA and "rails s" for GitLab), then open a browser and go to http://localhost:3000. As you can see in the image, the GitLab login page now has a "Sign in with cloudfoundry" button in it.


That’s all.

Update: I’ve made a video to show the login screen

How to create a bosh release to publish a static web site

0- Purpose of post.

In this post I'll show you how to create a bosh release to serve a static web page. For this I will use bosh-lite, which is a version of bosh for development, and bosh-gen, which is a bosh release generator.

1- Bosh-lite, bosh-gen, and creating the release

Let's take a quick look at bosh-gen's help. As we can see, it can do several things; let's start by creating our release, which we'll call "static-site":

==> bosh-gen new static-site


2- Download apache2 sources and create the package for apache

We just created our release; now we have to download the sources of our web server and put them in the src/ folder. You can download Apache from this URL:

I already downloaded Apache 2.2.25 and saved it in the ~/Downloads dir.

  => cd static-site
  => mkdir -p src/apache2
  => cp ~/Downloads/httpd-2.2.25.tar.gz src/apache2

With that done, we need to create our first package, for Apache:

=> bosh-gen help package


=> bosh-gen package apache2 -f src/apache2/httpd-2.2.25.tar.gz

We are going to build the package from the sources, so we don't need the blob:

=> rm blobs/apache2/httpd-2.2.25.tar.gz

3- Take a look into packaging and spec files

As we can see, bosh-gen created two files, spec and packaging. packaging is the script that contains the instructions for building our package; spec contains, among other things, the dependencies and files needed to compile our package.
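As an illustration, a minimal packaging script for this package might look roughly like the following; the configure flags are assumptions and bosh-gen's generated script will differ in detail (it only runs inside a bosh compile VM, where BOSH_INSTALL_TARGET is set):

```shell
# packages/apache2/packaging -- minimal sketch
set -e -x

# the source files listed in spec are available relative to the working dir
tar xzf apache2/httpd-2.2.25.tar.gz
cd httpd-2.2.25

# build apache and install it into this package's final location,
# which ends up as /var/vcap/packages/apache2 on the deployed VM
./configure --prefix=${BOSH_INSTALL_TARGET}
make
make install
```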

4- Create httpd Job

We created our bosh package; now we need to tell bosh which process to run to bring up our web server and what settings it needs. For this we have to create a job:

=> bosh-gen job httpd


5- Take a look into monit, httpd_ctl and spec files

As we can see, it created some files, but let's focus only on the spec and monit files.
The monit file basically has the data that monit needs to start and stop the job; in this
case it uses a helper script created by bosh-gen to help us debug our job.
To change the start command for this new job we need to edit the httpd_ctl template.

Edit jobs/httpd/templates/bin/httpd_ctl: comment out the pid_guard call and replace the start command with this:
/var/vcap/packages/apache2/bin/apachectl start

after that it should look like this:
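A sketch of the relevant part of the control script after those edits; the surrounding bosh-gen boilerplate (helpers, log redirection) is omitted, and the stop branch shown is an assumption:

```shell
# jobs/httpd/templates/bin/httpd_ctl (fragment)
case $1 in
  start)
    # pid_guard $PIDFILE "httpd"   # commented out: apache manages its own pid file
    /var/vcap/packages/apache2/bin/apachectl start
    ;;
  stop)
    /var/vcap/packages/apache2/bin/apachectl stop
    ;;
esac
```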


Edit the monit file and change the pid file location to /var/vcap/packages/apache2/logs/

after that it should look like this:
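The result should be roughly the following; since Apache was installed with its prefix at /var/vcap/packages/apache2, it writes its pid file under that prefix's logs/ dir (the exact file name httpd.pid is an assumption):

```
# jobs/httpd/monit
check process httpd
  with pidfile /var/vcap/packages/apache2/logs/httpd.pid
  start program "/var/vcap/jobs/httpd/bin/httpd_ctl start"
  stop program "/var/vcap/jobs/httpd/bin/httpd_ctl stop"
  group vcap
```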


6- Create release manifest file

We have a package (the bits needed to run our server) and the job (the configuration and scripts to bring up the web server process). Now we need to create and upload our release.

==> rm config/private.yml 
==> git add .
==> git commit -a -m "added package apache2 and job httpd"
==> bosh create release
==> bosh upload release


7- Create deploy manifest file

 ==> bosh-gen manifest httpd .


Then we need to point bosh to this deployment file:

bosh deployment httpd.yml

and let’s do our first deploy

bosh -n deploy


If we open a browser we will see the Apache default page, like this:


So yeaaaah, we just did our first deploy!!! But this page is not much fun, and we want the might, the muscle, to show our own site!
In the next post I will show you how to add a new package for our cool static page.

Stay tuned, to be continued…

Deploying Cloud Foundry with bosh-lite

This is my tutorial on how to deploy Cloud Foundry using bosh-lite. As you may know, there is a tutorial in the bosh-lite README file, but there are some tiny steps it does not cover and that I always forget, which is why I'm writing my own.

git clone
cd bosh-lite
vagrant plugin install vagrant-omnibus
librarian-chef install
vagrant up
bosh target

bosh upload stemcell latest-bosh-stemcell-warden.tgz

All the previous steps are explained in the bosh-lite tutorial. You can see the original tutorial here:

Now we are going to clone the cf-release repo, which is the bosh release for Cloud Foundry:

cd ..
git clone

and now we need to point bosh at the release dir:

export CF_RELEASE_DIR=~/cloudfoundry/cf-release/

Now we can upload the latest Cloud Foundry release, which at the moment is 145:

bosh upload release ../cf-release/releases/cf-145.yml

Uploading release
release.tgz:   100% |oooooooooooooooooooooooooooooooooooooo|   1.1GB  53.3MB/s Time: 00:00:21
HTTP 500: 

Oooops :( I got an error. Trying to figure out where the bosh packages are stored, after a little research
I found that bosh-lite uses simple_blobstore_server, a Sinatra API that stores the uploaded blobs.

Finally I restarted the VM, tried again, and it just worked:

bosh releases

| Name | Versions | Commit Hash |
| cf   | 145      | 41733e43+   |
(+) Uncommitted changes

Releases total: 1

So, we uploaded our Cloud Foundry bosh release; now we need a deployment manifest:


bosh status


  Name       Bosh Lite Director
  Version    1.5.0.pre.1117 (3300587c)
  User       admin
  UUID       365eb26d-146e-4d12-888f-d9249dbef375
  CPI        warden
  dns        enabled (domain_name: bosh)
  compiled_package_cache enabled (provider: local)
  snapshots  disabled

  Manifest   ~/cloudfoundry/bosh-lite/manifests/cf-manifest.yml

and run the deploy:

bosh deploy

You can list the VMs created by bosh to check that the deploy worked:

Deployment `cf-warden'

Director task 228

Task 228 done

| Job/index                          | State   | Resource Pool | IPs          |
| cloud_controller/0                 | running | common        |  |
| dea_next/0                         | running | dea           | |
| health_manager/0                   | running | common        | |
| loggregator-trafficcontroller_z1/0 | running | common        |  |
| loggregator-trafficcontroller_z2/0 | running | common        |  |
| loggregator_z1/0                   | running | common        |  |
| loggregator_z2/0                   | running | common        |  |
| login/0                            | running | common        | |
| nats/0                             | running | common        |   |
| postgres/0                         | running | common        | |
| router/0                           | running | router        | |
| syslog_aggregator/0                | running | common        |   |
| uaa/0                              | running | common        |  |

VMs total: 13

and that’s all!

If you want to review some bosh concepts, here is some documentation:

Installing Warden on Debian Wheezy

What is warden?

“The project’s primary goal is to provide a simple API for managing isolated environments. These isolated environments — or containers — can be limited in terms of CPU usage, memory usage, disk usage, and network access. As of writing, the only supported OS is Linux.”

read more here:

Warden is a key component in the Cloud Foundry ecosystem: when you push a new app to Cloud Foundry, a new container is created for it.

So, let's get to the point: the idea is to install warden on a Debian Wheezy system. I added Debian support to warden in my fork in the Altoros repo:

To make a fresh installation we are going to use Vagrant with the VirtualBox provider. Let's start by downloading the Vagrant box from:

axel -n 10

Then add the new box to vagrant:

vagrant box add debian-wheezy64

Then list all the available boxes to see if it was added ok:

vagrant box list

and you should see something like this:

debian-wheezy64      (virtualbox)
precise64            (virtualbox)

Then let's create a new folder and create our Vagrant VM
using the box we just added:

mkdir testing-warden-on-debian
cd testing-warden-on-debian
vagrant init debian-wheezy64
vagrant up

Now we are ready to start installing warden in the VM:

vagrant ssh
sudo apt-get install -y build-essential debootstrap quota

Edit fstab and add this line:

sudo vi /etc/fstab
cgroup  /sys/fs/cgroup  cgroup  defaults  0   0

Now clone the warden repo and check out the add-debian-rootfs branch:

cd warden
git checkout add-debian-rootfs

Add warden as a shared folder in the Vagrantfile: edit it and add this line:

config.vm.synced_folder "warden", "/warden"

Then log in to the VM with SSH and install all the required gems:

vagrant ssh
cd /warden
sudo gem install bundler
sudo bundle

Edit config/linux.yml and change container_rootfs_path;
if you don't change it, the setup will be lost when you reboot the VM because it points into /tmp by default.
I've created a new dir, /tmp-warden, and pointed the rootfs path at it.
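For reference, the change is a one-line edit; nesting under server: is from memory of warden's config layout, so check your checkout's config/linux.yml for the exact key:

```yaml
# config/linux.yml (fragment) -- only the changed key is shown
server:
  container_rootfs_path: /tmp-warden/rootfs
```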

After that you can run the setup:

sudo bundle exec rake setup[config/linux.yml]

and when it finishes you will be able to start the warden server:

sudo bundle exec rake warden:start

and then run the client to be able to manage containers:

bundle exec bin/warden

Lets run some basic warden commands:

Create 2 new containers:

bundle exec bin/warden

warden> create
handle : 171hpgcl82u
warden> create
handle : 171hpgcl82v

List the already created containers:

warden> list
handles[0] : 171hpgcl82u
handles[1] : 171hpgcl82v

You can see the directories of the containers; replace [tmp-warden] with the folder you configured in config/linux.yml:

ls -l /[tmp-warden]/warden/containers/

drwxr-xr-x 9 root root 4096 Jul 15 13:55 171hpgcl82u
drwxr-xr-x 9 root root 4096 Jul 15 13:58 171hpgcl82v
drwxr-xr-x 2 root root 4096 Jul 15 12:18 tmp

If you take a look at the logs while you create a container, you can figure out that the flow is more or less this:

1. method: "set_deferred_success"
2. Create the container:
   /home/gramos/src/altoros/warden/warden/root/linux/ /[tmp-warden]/warden/containers/171hpgcl831
3. method: "do_create"
4. Start the container
5. method: "write_snapshot"
6. method: "dispatch"

And that's all! If you have any comments, feel free to post them here!