Cloud Foundry Installation Step by Step

Hi Guys,

Enjoy the installation of Cloud Foundry with OpenStack.


We are going to install Cloud Foundry, an open source cloud computing platform as a service (PaaS) originally developed by VMware and now owned by Pivotal Software, a joint venture of EMC, VMware and General Electric. Cloud Foundry was designed and developed by a small team from Google led by Derek Collison and was originally called project B29.[1][2][3] It is a PaaS rather than an Infrastructure as a Service (IaaS); in this guide we deploy it on top of OpenStack, which provides the IaaS layer.

1) Preparing OpenStack

a) Edit /etc/nova/api-paste.ini and raise the API rate limits:

[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
limits = ( POST, *, .*, 9999, MINUTE ); ( POST, */servers, ^/servers, 9999, DAY ); ( PUT, *, .*, 9999, MINUTE ); ( GET, *changes-since*, .*changes-since.*, 9999, MINUTE ); ( DELETE, *, .*, 9999, MINUTE )

Reason: when BOSH deploys from a manifest, it issues concurrent requests to create temporary VMs that compile packages (for the CF releases), which can trip the default rate limits.

b) Restart the Nova services so that the above changes take effect in OpenStack.
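For example, on an Ubuntu-based controller node the relevant service can be restarted as below (service names vary between distributions and deployments, so adjust to your installation):

$ sudo service nova-api restart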

c) Create flavors in Horizon (the OpenStack Dashboard):

$ nova flavor-list


+--------------------------------------+----------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID                                   | Name           | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+--------------------------------------+----------------+-----------+------+-----------+------+-------+-------------+-----------+
| 29626bcf-2c96-41d2-9859-4488e0b8d0da | bosh.bootstrap | 4096      | 40   | 20        |      | 2     | 1.0         | False     |
| 79ddaea9-76b8-40a2-b296-7f8a0aa40019 | bosh.core      | 4096      | 20   | 10        |      | 2     | 1.0         | True      |
| 873e987d-03ac-4379-9db6-8ae09ed9d478 | bosh.microbosh | 4096      | 20   | 10        |      | 2     | 1.0         | True      |
| a4710660-8cde-4ffc-acef-5ed59ada438c | bosh.compile   | 4096      | 20   | 10        |      | 2     | 1.0         | True      |
+--------------------------------------+----------------+-----------+------+-----------+------+-------+-------------+-----------+
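If you prefer the CLI to Horizon, flavors like the ones above can also be created with nova flavor-create. A sketch for the bosh.bootstrap flavor, using the sizes from the table (the auto keyword lets Nova pick the flavor ID):

$ nova flavor-create --ephemeral 20 --is-public false bosh.bootstrap auto 4096 40 2

Repeat with the appropriate sizes for bosh.core, bosh.microbosh and bosh.compile.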

d) Create security group for bosh:

$ nova secgroup-list                  ## list the security groups for the project

$ nova secgroup-list-rules default    ## view the rules of the "default" security group

## configure firewall access rules

$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 2>/dev/null                ## allow ping

$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 2>/dev/null                 ## allow ssh

$ nova secgroup-add-rule default tcp 6868 6868 0.0.0.0/0 2>/dev/null         ## bosh

$ nova secgroup-add-rule default tcp 25555 25555 0.0.0.0/0 2>/dev/null     ## bosh

$ nova secgroup-add-rule default tcp 25889 25889 0.0.0.0/0 2>/dev/null     ## bosh

Additional ports to open are listed below:

bosh_agent_https tcp 6868

bosh_blobstore tcp 25250

bosh_director tcp 25555

bosh_nats_server tcp 4222

bosh_registry tcp 25777

ssh tcp 22

$ nova secgroup-add-rule default tcp 25250 25250 0.0.0.0/0 2>/dev/null

$ nova secgroup-add-rule default tcp 4222 4222 0.0.0.0/0 2>/dev/null

$ nova secgroup-add-rule default tcp 25777 25777 0.0.0.0/0 2>/dev/null
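After adding the rules, list them again to verify that every port above shows up:

$ nova secgroup-list-rules default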

e) Create a bootstrap machine (flavor m1.medium, named cf-installer) by launching an Ubuntu 12.04 image on OpenStack.

2) On the CF-Installer Machine:

a) Install RVM and Ruby


$ apt-get update
$ apt-get install git libxml2-dev libxslt1-dev -y --force-yes

$ sudo gpg --keyserver hkp://keys.gnupg.net --recv-keys D39DC0E3

or, if that fails, try this:

$ curl -sSL https://rvm.io/mpapis.asc | sudo gpg --import -

$ curl -L https://get.rvm.io | bash -s stable --autolibs=enabled --ruby=2.0.0-p195
$ source /usr/local/rvm/scripts/rvm

$ echo 'gem: --no-document' >> ~/.gemrc

Make sure RVM and Ruby are both installed:

$ rvm -v     ## should report version 1.x or higher
$ ruby -v    ## should report version 2.x or higher

Then install the prerequisite gems:


$ gem install guard-rspec fakeweb awesome_print --no-ri --no-rdoc
$ gem install bosh-bootstrap -v 0.11.5 --no-ri --no-rdoc

3) BOSH Command Line Interface

a) Prerequisites on Ubuntu Trusty

Install pre-requisites on Ubuntu Trusty before running gem install commands:

$ apt-get install build-essential ruby ruby-dev libxml2-dev libsqlite3-dev libxslt1-dev libpq-dev libmysqlclient-dev

Note: Installing the BOSH CLI requires Ruby 1.9.3, 2.0.x, or 2.1.x.

b) Install bosh client

BOSH Command Line Interface (CLI) is used to interact with MicroBOSH and BOSH. To install BOSH CLI with the MicroBOSH plugin:

$ gem install bosh_cli bosh_cli_plugin_micro

OR

$ gem install httpclient -v 2.2.4

then

$ gem install bosh_cli_plugin_micro -v "~> 1.5.0.pre" --source https://s3.amazonaws.com/bosh-jenkins-gems/
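Whichever route you take, confirm the CLI is installed and on your PATH before continuing (the exact version printed depends on the gems you installed):

$ bosh --version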

4) Validate your OpenStack

Create a ~/.fog file and copy the below content:

:openstack:
  :openstack_auth_url:  http://HOST_IP:5000/v2.0/tokens
  :openstack_api_key:   PASSWORD
  :openstack_username:  USERNAME
  :openstack_tenant:    PROJECT_NAME
  :openstack_region:    REGION # Optional

Note: You need to include /v2.0/tokens in the auth URL above.

Install the fog application in your terminal, then run it in interactive mode:

$ gem install fog

$ fog openstack
>> Compute[:openstack].servers
[]

The [] is an empty array in Ruby. You might see a long list of servers being displayed if your OpenStack tenancy/project already contains provisioned servers.

Can you access OpenStack metadata service from a virtual machine?

According to the OpenStack Documentation, the Compute service uses a special metadata service to enable virtual machine instances to retrieve instance-specific data. The default stemcell for use with BOSH retrieves this metadata for each instance of a virtual machine that OpenStack manages in order to get some data injected by the BOSH director.

You must ensure that virtual machines you boot in your OpenStack environment can access the metadata service at http://169.254.169.254.

Then execute the curl command to access the above URL. You should see a list of metadata API version dates similar to the example below.

$ curl http://169.254.169.254

1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04
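Those dates are EC2-style metadata API versions. To confirm that real instance data comes back, you can also query a specific path (both paths below are standard OpenStack metadata endpoints):

$ curl http://169.254.169.254/latest/meta-data/
$ curl http://169.254.169.254/openstack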


Can you ping one virtual machine from another?

Cloud Foundry requires that virtual machines be able to communicate with each other over the OpenStack networking stack. If networking is misconfigured for your instance of OpenStack, BOSH may provision VMs, but the deployment of Cloud Foundry will not function correctly because the VMs cannot properly orchestrate over NATS and other underlying technologies.

Try the following to ensure that you can communicate from VM to VM:

Create a security group for your virtual machines called ping-test.

  1. Open the OpenStack dashboard, and click on Access & Security in the left-hand menu. Click Create Security Group on the upper-right hand corner of the list of security groups.

  2. Under Name, enter ping-test. Enter ping-test in the Description field.

  3. Click Create Security Group.

  4. The list of security groups should now contain ping-test. Find it in the list and click Edit Rules.

  5. The list of rules should be blank. Click Add Rule.

  6. For Rule, select Custom ICMP Rule.

  7. For Type, enter -1.

  8. For Code, enter -1.

  9. For Remote, select Security Group.

  10. For Security Group, select ping-test (Current).

  11. Click Add.

Note: If your interface contains the Direction field, use the default Direction entry to create an Ingress rule. You must create an Egress rule that matches the Ingress rule settings.

From your OpenStack dashboard, create two VMs and open the console into one of them through the Console tab on its Instance Detail page. Make sure that you put these virtual machines into the ping-test security group. Wait for the terminal to appear and log in.

Look at the list of instances in the OpenStack dashboard and find the IP address of the other virtual machine. At the prompt, issue the following command (assuming the other instance received the IP address 172.16.1.2):

$ ping 172.16.1.2
PING 172.16.1.2 (172.16.1.2) 56(84) bytes of data.
64 bytes from 172.16.1.2: icmp_seq=1 ttl=64 time=0.095 ms
64 bytes from 172.16.1.2: icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from 172.16.1.2: icmp_seq=3 ttl=64 time=0.080 ms

Can you invoke large numbers of API calls?

Use the following commands to determine if you are affected by API throttling:

$ gem install fog
$ fog openstack
>> 100.times { p Compute[:openstack].servers }
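If the loop completes without errors, API throttling should not get in your way. If requests start failing partway through (for example with 413/OverLimit responses), revisit the rate-limit changes from step 1a.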

5) Security Groups for Cloud Foundry and BOSH

OpenStack offers Security Groups as a mechanism to restrict inbound traffic to servers. The examples below show the Security Groups that are referenced in other sections of this documentation.

Reference: http://davanum.wordpress.com/2014/06/24/running-cloud-foundrys-micro-bosh-on-latest-devstack/

a) Default:

$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

b) SSH

$ nova secgroup-create ssh ssh

$ nova secgroup-add-rule ssh udp 68 68 0.0.0.0/0

$ nova secgroup-add-rule ssh tcp 22 22 0.0.0.0/0

$ nova secgroup-add-rule ssh icmp -1 -1 0.0.0.0/0

c) Bosh

$ nova secgroup-create bosh bosh

$ nova secgroup-add-group-rule bosh bosh tcp 1 65535

$ nova secgroup-add-rule bosh tcp 4222 4222 0.0.0.0/0

$ nova secgroup-add-rule bosh tcp 6868 6868 0.0.0.0/0

$ nova secgroup-add-rule bosh tcp 25250 25250 0.0.0.0/0

$ nova secgroup-add-rule bosh tcp 25555 25555 0.0.0.0/0

$ nova secgroup-add-rule bosh tcp 25777 25777 0.0.0.0/0

$ nova secgroup-add-rule bosh tcp 53 53 0.0.0.0/0

$ nova secgroup-add-rule bosh udp 68 68 0.0.0.0/0

$ nova secgroup-add-rule bosh udp 53 53 0.0.0.0/0

d) cf-public

$ nova secgroup-create cf-public cf-public

$ nova secgroup-add-rule cf-public udp 68 68 0.0.0.0/0

$ nova secgroup-add-rule cf-public tcp 80 80 0.0.0.0/0

$ nova secgroup-add-rule cf-public tcp 443 443 0.0.0.0/0

e) cf-private

$ nova secgroup-create cf-private cf-private

$ nova secgroup-add-rule cf-private udp 68 68 0.0.0.0/0

$ nova secgroup-add-group-rule cf-private cf-private tcp 1 65535

6) Deploying MicroBOSH on OpenStack

Installation of BOSH is done using MicroBOSH, which is a single VM that includes all of the BOSH components. You need BOSH to manage and deploy a distributed system on multiple VMs.

a) OpenStack key pairs

Create or import a new OpenStack keypair and name it (e.g. microbosh). Store the private key in a well-known location, as we will need it to deploy MicroBOSH.
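For example, to create a keypair named microbosh and store its private key (the path below is just a suggestion):

$ nova keypair-add microbosh > ~/.ssh/microbosh.pem
$ chmod 600 ~/.ssh/microbosh.pem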

b) Validate your OpenStack

Validate your target OpenStack environment in preparation for installing MicroBOSH.

c) Create a directory for the manifest files

Create a deployments directory to store your deployment manifest files:

$ mkdir -p ~/bosh-workspace/deployments/microbosh-openstack

$ cd ~/bosh-workspace/deployments/microbosh-openstack


d) Create a microbosh.yml file and copy the below content:

name: microbosh-openstack

logging:
  level: DEBUG

network:
  type: dynamic
  vip: 169.144.105.89 # Optional
  #ip: 169.144.105.89
  cloud_properties:
    net_id: 4e13ea56-bd73-440a-896e-78ff5785c44f

resources:
  persistent_disk: 16384
  cloud_properties:
    instance_type: m1.medium

cloud:
  plugin: openstack
  properties:
    openstack:
      auth_url: http://169.144.105.70:5000/v2.0/tokens
      username: admin
      api_key: redhat
      tenant: admin
      default_security_groups: ["ssh", "bosh_agent_https", "bosh_nats_server", "bosh_blobstore", "bosh_director", "bosh_registry"]
      default_key_name: mykey
      private_key: /root/.microbosh/ssh/mykey.pem
      #boot_from_volume: true

apply_spec:
  agent:
    blobstore:
      address: 169.144.105.89
    nats:
      address: 169.144.105.89
      ping_interval: 60
      ping_max_outstanding: 30

e) Download MicroBOSH stemcell:

Create a stemcells directory to store your stemcell files:

$ mkdir -p ~/bosh-workspace/stemcells

$ cd ~/bosh-workspace/stemcells

$ bosh public stemcells

$ bosh download public stemcell bosh-stemcell-2427-openstack-kvm-ubuntu.tgz

f) Deploy MicroBOSH

Set the MicroBOSH deployment file to use:

$ cd ~/bosh-workspace/deployments

$ bosh micro deployment microbosh-openstack

This command will output:

WARNING! Your target has been changed to `https://<microbosh_ip_address>:25555'!

Deployment set to '~/bosh-workspace/deployments/microbosh-openstack/microbosh.yml'

g) Deploy the MicroBOSH:

$ bosh micro deploy ~/bosh-workspace/stemcells/bosh-stemcell-XXXX-openstack-kvm-ubuntu.tgz
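Once the deploy finishes, a quick sanity check before targeting the director is to ask the micro plugin what it deployed:

$ bosh micro status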

h) Testing your MicroBOSH

Set target: To set your MicroBOSH target, use the target command:

$ bosh target <microbosh_ip_address>

This command will ask for the admin credentials. Enter admin when prompted for both username and password.

i) Create a new user

To create a new user use the create user command:

$ bosh create user

Then you can login with the new user credentials:

$ bosh login

Check status

$ bosh status

j) SSH

You can ssh to your MicroBOSH VM using the private key set in the cloud properties section of your MicroBOSH deployment file:

$ ssh -i <path_to_microbosh_keypair_private_key> vcap@<microbosh_ip_address>

7) Uploading a BOSH Stemcell

a) Prerequisites

You must deploy and target MicroBOSH or BOSH.

b) Upload BOSH stemcell

Upload a BOSH Stemcell to the BOSH Director using the bosh upload command:

$ bosh upload stemcell bosh-stemcell-latest-openstack-kvm-ubuntu.tgz

c) Check BOSH Stemcell

To confirm that the BOSH Stemcell has been loaded into your BOSH Director, use the bosh stemcells command:

$ bosh stemcells

8) Deploying Cloud Foundry on OpenStack using BOSH

Note: Run all the commands in this topic from the ~/bosh-workspace/deployments directory that you created in the Deploying MicroBOSH on OpenStack topic.

a) Prerequisites

To deploy Cloud Foundry to OpenStack, you must first complete the following steps:

  1. Validate your OpenStack Instance.

  2. Install the BOSH Command Line Interface (CLI).

  3. Deploy BOSH to your OpenStack environment. For instructions, see the Deploying MicroBOSH on OpenStack topic above.

  4. Provision an IP address and set your DNS to map * records to this IP address. For example, if you use mycloud.com domain as the base domain for your Cloud Foundry deployment, set a * A record for this zone mapping to the IP address that you provision.

  5. Create an OpenStack security group named cf. Configure the cf security group as follows:

  1. Open all ports of the VMs in the cf security group to all other VMs in the cf security group. This allows the VMs in the cf security group to communicate with each other.

  2. Open port 22 for administrator SSH access.

  3. Open port 80 for HTTP traffic.

b) Create Directory structure for CF

$ mkdir /var/vcap/
$ cd /var/vcap/
$ mkdir releases stemcells systems

c) Creating a cf-release

Firstly, grab the cf-release source. Then update all the submodules, and create a new release.

$ cd /var/vcap/releases

$ git clone https://github.com/cloudfoundry/cf-release.git

$ cd cf-release

$ bundle update

$ ./update

$ bundle exec bosh create release releases/cf-170.yml
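If the create step does not leave a release tarball behind, the Ruby CLI can build one from the same final release file using its --with-tarball flag; this writes the releases/cf-170.tgz that we upload in the next step:

$ bundle exec bosh create release releases/cf-170.yml --with-tarball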

d) Uploading cf-release to BOSH

Target your Micro BOSH, and upload the release to it:

$ bundle exec bosh target <bosh_ip_address>

$ bundle exec bosh upload release releases/cf-170.tgz

e) Create a cf-release deploy manifest

Next we need to tailor a deployment manifest suitable for our deploy target in OpenStack.

<%
director_uuid = '4e6c3124-f826-43d0-abc8-aaad278be7b4'
static_ip = '169.144.105.90'
root_domain = "cfdemo.esp-test.egi.ericsson.com"
deployment_name = 'cf'
cf_release = '170'
protocol = 'http'
common_password = 'c1oudc0wc1oudc0w'
%>

name: <%= deployment_name %>
director_uuid: <%= director_uuid %>

releases:
- name: cf
  version: <%= cf_release %>

compilation:
  workers: 3
  network: default
  reuse_compilation_vms: true
  cloud_properties:
    instance_type: m1.large

update:
  canaries: 0
  canary_watch_time: 30000-600000
  update_watch_time: 30000-600000
  max_in_flight: 32
  serial: false

networks:
- name: default
  type: dynamic
  cloud_properties:
    security_groups:
    - default
    - bosh
    - cf-private
- name: external
  type: dynamic
  cloud_properties:
    security_groups:
    - default
    - bosh
    - cf-public
- name: floating
  type: vip
  cloud_properties: {}

resource_pools:
- name: common
  network: default
  size: 14
  stemcell:
    name: bosh-openstack-kvm-ubuntu
    version: 2427
  cloud_properties:
    instance_type: m1.small
- name: large
  network: default
  size: 3
  stemcell:
    name: bosh-openstack-kvm-ubuntu
    version: 2427
  cloud_properties:
    instance_type: m1.medium

jobs:
- name: nats
  templates:
  - name: nats
  - name: nats_stream_forwarder
  instances: 1
  resource_pool: common
  networks:
  - name: default
    default: [dns, gateway]

- name: syslog_aggregator
  templates:
  - name: syslog_aggregator
  instances: 1
  resource_pool: common
  persistent_disk: 10240
  networks:
  - name: default
    default: [dns, gateway]

- name: nfs_server
  templates:
  - name: debian_nfs_server
  instances: 1
  resource_pool: common
  persistent_disk: 10240
  networks:
  - name: default
    default: [dns, gateway]

- name: postgres
  templates:
  - name: postgres
  instances: 1
  resource_pool: common
  persistent_disk: 10240
  networks:
  - name: default
    default: [dns, gateway]
  properties:
    db: databases

- name: uaa
  templates:
  - name: uaa
  instances: 1
  resource_pool: common
  networks:
  - name: default
    default: [dns, gateway]

- name: loggregator
  templates:
  - name: loggregator
  instances: 1
  resource_pool: common
  networks:
  - name: default
    default: [dns, gateway]

- name: trafficcontroller
  templates:
  - name: loggregator_trafficcontroller
  instances: 1
  resource_pool: common
  networks:
  - name: default
    default: [dns, gateway]

- name: cloud_controller
  templates:
  - name: cloud_controller_ng
  instances: 1
  resource_pool: common
  networks:
  - name: default
    default: [dns, gateway]
  properties:
    ccdb: ccdb

- name: cloud_controller_worker
  templates:
  - name: cloud_controller_worker
  instances: 1
  resource_pool: common
  networks:
  - name: default
    default: [dns, gateway]
  properties:
    ccdb: ccdb

- name: clock_global
  templates:
  - name: cloud_controller_clock
  instances: 1
  resource_pool: common
  networks:
  - name: default
    default: [dns, gateway]
  properties:
    ccdb: ccdb

- name: etcd
  templates:
  - name: etcd
  instances: 1
  resource_pool: common
  persistent_disk: 10024
  networks:
  - name: default
    default: [dns, gateway]

- name: health_manager
  templates:
  - name: hm9000
  instances: 1
  resource_pool: common
  networks:
  - name: default
    default: [dns, gateway]

- name: dea
  templates:
  - name: dea_logging_agent
  - name: dea_next
  instances: 3
  resource_pool: large
  networks:
  - name: default
    default: [dns, gateway]

- name: router
  templates:
  - name: gorouter
  instances: 1
  resource_pool: common
  networks:
  - name: default
    default: [dns, gateway]

- name: haproxy
  templates:
  - name: haproxy
  instances: 1
  resource_pool: common
  networks:
  - name: external
    default: [dns, gateway]
  - name: floating
    static_ips:
    - <%= static_ip %>
  properties:
    networks:
      apps: external

properties:
  domain: <%= root_domain %>
  system_domain: <%= root_domain %>
  system_domain_organization: 'admin'
  app_domains:
  - <%= root_domain %>
  haproxy: {}
  networks:
    apps: default
  nats:
    user: nats
    password: <%= common_password %>
    address: 0.nats.default.<%= deployment_name %>.microbosh
    port: 4222
    machines:
    - 0.nats.default.<%= deployment_name %>.microbosh
    #- 1.nats.default.<%= deployment_name %>.microbosh
    #- 2.nats.default.<%= deployment_name %>.microbosh
  syslog_aggregator:
    address: 0.syslog-aggregator.default.<%= deployment_name %>.microbosh
    port: 54321
  nfs_server:
    address: 0.nfs-server.default.<%= deployment_name %>.microbosh
    network: "*.<%= deployment_name %>.microbosh"
    idmapd_domain: "localdomain"
  debian_nfs_server:
    no_root_squash: true
  loggregator_endpoint:
    shared_secret: <%= common_password %>
    host: 0.trafficcontroller.default.<%= deployment_name %>.microbosh
  loggregator:
    servers:
      zone:
      - 0.loggregator.default.<%= deployment_name %>.microbosh
  traffic_controller:
    zone: 'zone'
  logger_endpoint:
    use_ssl: <%= protocol == 'https' %>
    port: 80
  ssl:
    skip_cert_verify: true
  router:
    endpoint_timeout: 60
    status:
      port: 8080
      user: gorouter
      password: <%= common_password %>
    servers:
      z1:
      - 0.router.default.<%= deployment_name %>.microbosh
      z2: []
  etcd:
    machines:
    - 0.etcd.default.<%= deployment_name %>.microbosh
  dea: &dea
    disk_mb: 102400
    disk_overcommit_factor: 2
    memory_mb: 15000
    memory_overcommit_factor: 3
    directory_server_protocol: <%= protocol %>
    mtu: 1460
    deny_networks:
    - 169.254.0.0/16 # Google Metadata endpoint
  dea_next: *dea
  disk_quota_enabled: false
  dea_logging_agent:
    status:
      user: admin
      password: <%= common_password %>
  databases: &databases
    db_scheme: postgres
    address: 0.postgres.default.<%= deployment_name %>.microbosh
    port: 5524
    roles:
    - tag: admin
      name: ccadmin
      password: <%= common_password %>
    - tag: admin
      name: uaaadmin
      password: <%= common_password %>
    databases:
    - tag: cc
      name: ccdb
      citext: true
    - tag: uaa
      name: uaadb
      citext: true
  ccdb: &ccdb
    db_scheme: postgres
    address: 0.postgres.default.<%= deployment_name %>.microbosh
    port: 5524
    roles:
    - tag: admin
      name: ccadmin
      password: <%= common_password %>
    databases:
    - tag: cc
      name: ccdb
      citext: true
  ccdb_ng: *ccdb
  uaadb:
    db_scheme: postgresql
    address: 0.postgres.default.<%= deployment_name %>.microbosh
    port: 5524
    roles:
    - tag: admin
      name: uaaadmin
      password: <%= common_password %>
    databases:
    - tag: uaa
      name: uaadb
      citext: true
  cc: &cc
    srv_api_uri: <%= protocol %>://api.<%= root_domain %>
    jobs:
      local:
        number_of_workers: 2
      generic:
        number_of_workers: 2
      global:
        timeout_in_seconds: 14400
      app_bits_packer:
        timeout_in_seconds: null
      app_events_cleanup:
        timeout_in_seconds: null
      app_usage_events_cleanup:
        timeout_in_seconds: null
      blobstore_delete:
        timeout_in_seconds: null
      blobstore_upload:
        timeout_in_seconds: null
      droplet_deletion:
        timeout_in_seconds: null
      droplet_upload:
        timeout_in_seconds: null
      model_deletion:
        timeout_in_seconds: null
    bulk_api_password: <%= common_password %>
    staging_upload_user: upload
    staging_upload_password: <%= common_password %>
    quota_definitions:
      default:
        memory_limit: 10240
        total_services: 100
        non_basic_services_allowed: true
        total_routes: 1000
        trial_db_allowed: true
    resource_pool:
      resource_directory_key: cloudfoundry-resources
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    packages:
      app_package_directory_key: cloudfoundry-packages
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    droplets:
      droplet_directory_key: cloudfoundry-droplets
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    buildpacks:
      buildpack_directory_key: cloudfoundry-buildpacks
      fog_connection:
        provider: Local
        local_root: /var/vcap/nfs/shared
    install_buildpacks:
    - name: java_buildpack
      package: buildpack_java
    - name: ruby_buildpack
      package: buildpack_ruby
    - name: nodejs_buildpack
      package: buildpack_nodejs
    - name: go_buildpack
      package: buildpack_go
    db_encryption_key: <%= common_password %>
    hm9000_noop: false
    diego: false
    newrelic:
      license_key: null
      environment_name: <%= deployment_name %>
  ccng: *cc
  login:
    enabled: false
  uaa:
    url: <%= protocol %>://uaa.<%= root_domain %>
    no_ssl: <%= protocol == 'http' %>
    cc:
      client_secret: <%= common_password %>
    admin:
      client_secret: <%= common_password %>
    batch:
      username: batch
      password: <%= common_password %>
    clients:
      cf:
        override: true
        authorized-grant-types: password,implicit,refresh_token
        authorities: uaa.none
        scope: cloud_controller.read,cloud_controller.write,openid,password.write,cloud_controller.admin,scim.read,scim.write
        access-token-validity: 7200
        refresh-token-validity: 1209600
      admin:
        secret: <%= common_password %>
        authorized-grant-types: client_credentials
        authorities: clients.read,clients.write,clients.secret,password.write,scim.read,uaa.admin
    scim:
      users:
      - admin|<%= common_password %>|scim.write,scim.read,openid,cloud_controller.admin,uaa.admin,password.write
      - services|<%= common_password %>|scim.write,scim.read,openid,cloud_controller.admin
    jwt:
      signing_key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIEpAIBAAKCAQEAq/Q6jUR/Vko3VxT9RWaIGQij8YSHHl2+HoYtqwn10JD96lQs
        pCDY2iIfBrRa23p1GjF9F2BJtB2dJjxrKVyTrQeI/lNDhxjw8Pitx3QYsDDBtmWq
        4dt74+BPJ9vSQKK0fXKTL+w49W1Lr4uSnhP/wJfHCCbTzCDG8Pto5MLEMxEHm7pS
        ooqxCEpQYwkxX4jQ1RNTAMxarRi1UZgt3KUTDsDhowqDst/RVZ93hhnezLKnloC4
        njLWgkG5QVEms1Fn4Wk1PC+OnHEZLXMbSpnNeqYc7yOsZSMcwlxfvQz1Llxfu/aR
        4llZxPbUzIZv4TZJjvSpm0/Ogr7YQJYszLtt9wIDAQABAoIBACawq0QB94zY4h7L
        8DjfWxwW35yGL0jb2t1PX5MuiIrHNPq2udysL17Vcpm1lwPvR83++KB739mRGDz0
        N0B1Ph0epupinb0WFZCCw8cvDicGsW9y7MIo+nVJkUXspiA4+9eGIiwUQLSoRPFY
        vEKpSVByViw1YE57yYeLagye7jp26ceLknyAVOxoBq36SefKNF0hU5GE0zuNe5yw
        M5gITviAgW7SGiUke/+1uDpaa+qaf55q5Zd3UgxXvhy0S+/UVgXNViANg7nZlUr3
        QA+U7IRotBSg1Vn+2BocadAtz2bV9nZj7H+Z/XILtETWNxk7yGcvIbHEp6q8WP6h
        moflzOECgYEA4oXVdf+ffH10pjpZby9olzDL8av6HSvNeBPmwwBa/4NsF3xJrlc3
        sC1eFUXXyuhS5MSNrliHTrTdCte222nnv5hxAneu63eUZm2WqCP3fgE0pPOSalQq
        UPgbNYu3b3TTgUTSC6pgogaIpzcoZySVyKFqkFzN6q+gemBDDcCjYl8CgYEAwlSK
        3R8IfDqcqw/GS3CZ9I7TkpYOQqMifa5DPe1W1hu2HVIPCzBq9y0EYSxi4ndFyR+c
        p0aFsVW2cSgKteEAR+V4LqV3yYWkbg+tCZVyf13adRPkI1u5L1qmwraIDJIr95AQ
        20tEopAyR0SP/8nORcff+9q2XRT8ERcshphCC2kCgYEAoNHjeqLA1+E5r8o9NHK0
        DqLWJ/2w1IUEmvuGGWtnL4BefU4AAYZqQunyoae0TJokP8ZL0DuJ1JcTV19Osve9
        UIkpslbGGOYMtauYCkd+rjas6W8Dw/l9EX8T0jAfS0Hl5yC0/xM3B9Ebs5u1U4Tl
        0krHHTbF+pg1lqxA7sKVPIECgYBo5BUoEU4VL9XMh3Ey2w5ecJFGd/Quh7tgNyVY
        UbkjTEXaQaaZFYNG82d/w+OD9XkXfBakO26CL4+QOFq/nTj3laZvFyU3Awmj1pZB
        rAbnNJNrylbDtwiXxMhqJPf+QQ+2Sm6uz0u2qzpYOWu4VwcdpysA2CbCy0bbOrTv
        2VMcsQKBgQCsQZvnf7LCnimR/RVUn3xthXS8BYFGJYYocSIb1YkdwVnOORNKM9Gh
        F16S46rkFtTCeB8NJ6Dq2OvAzxtSCC98pZiKGiYDGafoblFP+8T0rlhotnMc6m7i
        C+DUgBaV9Wc7fNspCUW0piEMww1v2Whc+k3gJuM3iUL3PYdSwqK7qg==
        -----END RSA PRIVATE KEY-----
      verification_key: |
        -----BEGIN PUBLIC KEY-----
        AAAAB3NzaC1yc2EAAAADAQABAAABAQCr9DqNRH9WSjdXFP1FZogZCKPxhIceXb4e
        hi2rCfXQkP3qVCykINjaIh8GtFrbenUaMX0XYEm0HZ0mPGspXJOtB4j+U0OHGPDw
        +K3HdBiwMMG2Zarh23vj4E8n29JAorR9cpMv7Dj1bUuvi5KeE//Al8cIJtPMIMbw
        +2jkwsQzEQebulKiirEISlBjCTFfiNDVE1MAzFqtGLVRmC3cpRMOwOGjCoOy39FV
        n3eGGd7MsqeWgLieMtaCQblBUSazUWfhaTU8L46ccRktcxtKmc16phzvI6xlIxzC
        XF+9DPUuXF+79pHiWVnE9tTMhm/hNkmO9KmbT86CvthAlizMu233
        -----END PUBLIC KEY-----
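This manifest is an ERB template, so render it to plain YAML and point BOSH at the result before deploying. Assuming you saved the template under a name such as cf-deployment.yml.erb (the file names here are only a suggestion):

$ erb cf-deployment.yml.erb > cf-deployment.yml
$ bosh deployment cf-deployment.yml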

f) Deploy the uploaded Cloud Foundry release

$ bosh deploy

Note: bosh deploy can take 2-3 hours to complete.

g) Test Cloud Foundry installation

$ curl api.subdomain.domain/info

If curl succeeds, it returns JSON-formatted information about the deployment. If curl does not succeed, check your networking and make sure your domain has an NS record for your subdomain.

 
