cloud engines

1. Hypervisors and Containers

Docker.io – an open-source engine for building, packaging, and running any application as a lightweight container, originally built on top of LXC and the container features of the Linux kernel. It was written by dotCloud and released in 2013.

KVM – a lightweight hypervisor that was accepted into the Linux kernel in February 2007. It was originally developed by Qumranet, a startup that was acquired by Red Hat in 2008.

Xen Project – a cross-platform software hypervisor that runs on platforms such as BSD, Linux and Solaris. Xen was originally written at the University of Cambridge by a team led by Ian Pratt and is now a Linux Foundation Collaborative Project.

CoreOS – a new Linux distribution that uses containers to help manage massive server deployments. Its beta version was released in May 2014.

2. Infrastructure as a Service

Apache CloudStack – an open source IaaS platform with Amazon Web Services (AWS) compatibility. CloudStack was originally created by Cloud.com (formerly known as VMOps), a startup that was purchased by Citrix in 2011. In April of 2012, CloudStack was donated by Citrix to the Apache Software Foundation.

Eucalyptus - an open-source IaaS platform for building AWS-compatible private and hybrid clouds. It began as a research project at UC Santa Barbara and was commercialized in January 2009 under the name Eucalyptus Systems.

OpenNebula – an open-source IaaS platform for building and managing virtualized enterprise data centers and private clouds. It began as a research project in 2005 authored by Ignacio M. Llorente and Rubén S. Montero. Publicly released in 2008, development today is via the open source model.

OpenStack – an open source IaaS platform, covering compute, storage and networking. In July 2010, NASA and Rackspace joined forces to create the OpenStack project, with the goal of allowing any organization to build a public or private cloud using the same technology as top cloud providers.

3. Platform as a Service

CloudFoundry – an open Platform-as-a-Service, providing a choice of clouds, developer frameworks and application services. VMware announced Cloud Foundry in April 2011 and built a partner ecosystem.

OpenShift - Red Hat’s Platform-as-a-Service offering. OpenShift is a cloud application platform where application developers and teams can build, test, deploy, and run their applications in a cloud environment. The OpenShift technology came from Red Hat’s 2010 acquisition of start-up Makara (founded in May 2008). OpenShift was announced in May 2011 and open-sourced in April 2012.

4. Provisioning and Management Tools

Ansible – an automation engine for deploying systems and applications.

Apache Mesos – a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It was created at the University of California at Berkeley’s AMPLab and became an Apache Software Foundation top-level project in 2013.

Chef – a configuration-management tool, controlled using an extension of Ruby. Released by Opscode in January 2009.

Juju - a service orchestration management tool released by Canonical as Ensemble in 2011 and then renamed later that year.

oVirt – provides a feature-rich management system for virtualized servers with advanced capabilities for hosts and guests. Red Hat first announced oVirt as part of its emerging-technology initiative in 2008, then re-launched the project in late 2011 as part of the Open Virtualization Alliance.

Puppet – IT automation software that helps system administrators manage infrastructure throughout its lifecycle. Founded by Luke Kanies in 2005.

Salt – a configuration management tool focused on speed and incorporating orchestration features. Salt was written by Thomas S Hatch and first released in 2011.

Vagrant – an open source tool for building and managing development environments, often within virtual machines. Written in 2010 by Mitchell Hashimoto and John Bender.

5. Storage

Camlistore – a set of open source formats, protocols, and software for modeling, storing, searching, sharing and synchronizing data. First released by Google developers in 2013.

Ceph – a distributed object store and file system. It was originally created by Sage Weil for a doctoral dissertation. After Weil’s graduation in 2007, he continued working on it full-time at DreamHost as the development team grew. In 2012, Weil and others formed Inktank to deliver professional services and support. It was acquired by Red Hat in 2014.

Gluster - a scale-out, distributed file system. It is developed by the Gluster community, a global community of users, developers and other contributors. GlusterFS was originally developed by Gluster Inc., then acquired by Red Hat in October 2011.

Riak CS – an open source storage system built on top of the Riak key-value store. Riak CS was originally developed by Basho and launched in 2012, with the source subsequently released in 2013.

Swift – a highly available, distributed object store system, ideal for unstructured data. Developed as part of the OpenStack project.

 

source

clustered apache with Conga

This setup's architecture is:

1 dedicated server providing iSCSI storage using the tgt service

3 KVM VMs (node1, node2, node3) used as cluster nodes

1 KVM VM running the admin panel “Luci”

hosts :

host.id3m.net
admin.id3m.net
node{1,2,3}.id3m.net

OS : CentOS 6.5 64bit

Set up the repos on all servers.

On all 4 servers:

yum localinstall http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm http://rpms.famillecollet.com/enterprise/remi-release-6.rpm http://ovirt.org/releases/ovirt-release.noarch.rpm

Let's start with the dedicated server to prepare the iSCSI storage:

On Host Server:
 pvcreate /dev/sda3 # Create Physical Volume
 vgcreate iscsi /dev/sda3 # Create Volume Group
 lvcreate -L 10G -n shared iscsi # Create Logical Volume
 yum -y install scsi-target-utils # install the iSCSI target service
# set the iSCSI share (the target block wraps the options; the IQN matches the one used on the nodes below)
 vim /etc/tgt/targets.conf
 <target iqn.2010-09.net.id3m.node.gluster>
   backing-store /dev/iscsi/shared
   incominguser admin admin
 </target>
:wq
/etc/init.d/tgtd restart ; chkconfig tgtd on # start the service and enable it on boot
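
To make sure the target is actually exported before moving on to the nodes, you can list it on the host; a quick check, using the tools shipped with scsi-target-utils:

 tgt-admin --show # should list target iqn.2010-09.net.id3m.node.gluster with /dev/iscsi/shared as a backing store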

Now the host side is done. Next, install the iSCSI initiator on all nodes so they can discover the storage.

# to be done on node1 , node2 , node3 only :
yum install iscsi-ini* -y # install the iSCSI initiator packages
 iscsiadm -m discovery -t st -p 5.9.90.20 # discover the share
 iscsiadm -m node --target iqn.2010-09.net.id3m.node.gluster -l # log in to the iSCSI target (see the CHAP note below)
 chkconfig iscsid on ; chkconfig iscsi on # enable auto start
 fdisk -l # check the new drive; you can also see it in /var/log/messages
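
Note: because the target above is configured with incominguser admin admin, the initiator will normally need matching CHAP credentials set before the login step works; a minimal sketch, using the standard open-iscsi options in /etc/iscsi/iscsid.conf:

 vim /etc/iscsi/iscsid.conf
 node.session.auth.authmethod = CHAP
 node.session.auth.username = admin
 node.session.auth.password = admin

After the login you can confirm the session and see which block device was attached:

 iscsiadm -m session # list active sessions
 iscsiadm -m session -P 3 # verbose view, shows the attached disk (e.g. /dev/sda)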

Now all the nodes have the storage attached; the shared storage disk is /dev/sda.

Now we will install ricci (the cluster client) on nodes 1, 2 and 3, and the “luci” admin panel on luci.id3m.net.

# these commands are to be run on all nodes 1 , 2 , 3

yum install ricci ccs -y # install ricci and ccs
 /etc/init.d/ricci restart; chkconfig ricci on # start ricci and enable auto start
 echo 1231234 | passwd --stdin ricci # set the ricci user's password

# now we will install the “luci” admin panel on the luci node only

yum install -y luci # install the luci packages
 /etc/init.d/luci start ; chkconfig luci on # start luci and enable auto start
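
If iptables is enabled on these machines, luci may not be able to reach ricci on the nodes. A minimal sketch for opening the usual ports, assuming the defaults (ricci on 11111/tcp, luci on 8084/tcp, corosync/cman on 5404-5405/udp); on a throwaway lab you could simply stop iptables instead:

 iptables -I INPUT -p tcp --dport 11111 -j ACCEPT # ricci (all nodes)
 iptables -I INPUT -p tcp --dport 8084 -j ACCEPT # luci (admin node only)
 iptables -I INPUT -p udp --dport 5404:5405 -j ACCEPT # corosync/cman
 service iptables save # keep the rules after reboot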

Congratulations, everything is now installed. Let's go into the web panel to create the cluster.

login : https://luci.id3m.net:8084
username : root
password : root_passwd

Manage cluster >> Create :

Cluster Name : set your cluster name

Add the 3 nodes with the same ricci password: 1231234

Check all of these options:
Download Packages # the easy way to build the cluster config; it will download and install all cluster services
Reboot Nodes Before Joining Cluster # to confirm that everything is OK
Enable Shared Storage Support # because we are using the shared storage /dev/sda

######### Create Cluster #########

It will show you:

Creating node "node2.id3m.net" for cluster "cluster01": installing packages
 Creating node "node3.id3m.net" for cluster "cluster01": installing packages
 Creating node "node1.id3m.net" for cluster "cluster01": installing packages

Just wait... after 1 or 2 minutes all nodes 1, 2, 3 should reboot: The system is going down for reboot NOW!

then Luci will show :

Unable to retrieve the status for batch id 678872664 on node2.id3m.net. The node may be temporarily unreachable
 Unable to retrieve the status for batch id 938297290 on node3.id3m.net. The node may be temporarily unreachable
 Unable to retrieve the status for batch id 1889887158 on node1.id3m.net. The node may be temporarily unreachable

Don't worry, just wait a bit longer...

Now it should show you that everything is OK and the cluster is running.
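
If you want to double-check from the command line rather than trusting the web panel, the usual status tools on a CentOS 6 cluster are clustat and cman_tool; run on any node:

 clustat # shows the cluster name, quorum and member status
 cman_tool nodes # lists the nodes and whether they have joined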

to be continued

virtualizor kvm install

wget -N http://files.virtualizor.com/install.sh
chmod 0755 install.sh
./install.sh email=dummy@id3m.net kernel=kvm lvg=vg0

http://www.virtualizor.com/wiki/Install_KVM

centos repos

CentOS 6 64-bit (rpmforge, epel & remi)

 wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
 wget http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm
 wget http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
 rpm -Uvh remi-release-6*.rpm epel-release-6*.rpm rpmforge-*.rpm
 yum -y update

create lvm over Raid0

create 2 partitions, one on each disk, with the same size
create a RAID 0 software array
create the volume group

[root@ns235842 ~]# fdisk /dev/sdb
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
 switch off the mode (command 'c') and change display units to
 sectors (command 'u').
Command (m for help): p
Disk /dev/sdb: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe9379e9e
Device Boot Start End Blocks Id System
Command (m for help): n
Command action
 e extended
 p primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (1-19457, default 1): 2615
Last cylinder, +cylinders or +size{K,M,G} (2615-19457, default 19457):
Using default value 19457
Command (m for help): t
Selected partition 3
Hex code (type L to list codes): fd
Changed system type of partition 3 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@ns235842 ~]# partx -a /dev/sdb

Create the partition, then change its type to fd (Linux raid autodetect); repeat for the matching partition on the other disk (/dev/sda3 below).

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
pvcreate /dev/md0
vgcreate myvg /dev/md0
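
To survive a reboot it is worth saving the array definition, and to actually use the new volume group you still need a logical volume and a filesystem; a minimal sketch, where the LV name, size and mount point are only examples:

 cat /proc/mdstat # confirm md0 is up
 mdadm --detail --scan >> /etc/mdadm.conf # persist the RAID array definition
 lvcreate -L 20G -n data myvg # carve a logical volume out of myvg
 mkfs.ext4 /dev/myvg/data # put a filesystem on it
 mkdir -p /data && mount /dev/myvg/data /data # mount it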

done

Howto install OpenStack on CentOS

Hello

 

In this topic I'll install OpenStack in 2 ways:

1- With Packstack. It's a tool that installs and configures all the OpenStack services for you.

2- Step by step. This installs each OpenStack service, from Keystone to the Dashboard, with manual configuration.

 

let’s start

CentOS 6.5
OpenStack Grizzly

First we need to create a repo on our system to get the packages.

Create a .repo file in /etc/yum.repos.d/ with the following content:

[epel-openstack-grizzly]
name=OpenStack Grizzly Repository for EPEL 6
baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6
enabled=1
skip_if_unavailable=1
gpgcheck=0

Then we need to verify the new repo and refresh the package metadata:

yum repolist
 yum makecache

Now install the Packstack tool:

yum install openstack-packstack

Create an SSH key for your host; we will need it later (Packstack uses it to reach the hosts over SSH):

ssh-keygen

Generate the answer file used to auto-install all the OpenStack services:

packstack --gen-answer-file answers.txt
vim answers.txt

Edit whatever you want; nothing is strictly required. A couple of values people commonly check are shown below.
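
For instance, the admin password and NTP settings are worth reviewing before running the install; a quick way to find them (exact key names can vary between Packstack versions, so treat these as examples):

 grep -E "CONFIG_KEYSTONE_ADMIN_PW|CONFIG_NTP_SERVERS" answers.txt
 # CONFIG_KEYSTONE_ADMIN_PW= password for the Keystone/Horizon admin user
 # CONFIG_NTP_SERVERS= comma-separated NTP servers (the install log below notes that time sync was skipped)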

Now start the installation:

packstack --answer-file=answers.txt

Enter your root password when prompted...

Wait for the install to finish:
Welcome to Installer setup utility
Installing:
 Clean Up... [ DONE ]
 Adding pre install manifest entries... [ DONE ]
 Setting up ssh keys...root@192.168.1.10's password:
 [ DONE ]
 Adding MySQL manifest entries... [ DONE ]
 Adding QPID manifest entries... [ DONE ]
 Adding Keystone manifest entries... [ DONE ]
 Adding Glance Keystone manifest entries... [ DONE ]
 Adding Glance manifest entries... [ DONE ]
 Adding Cinder Keystone manifest entries... [ DONE ]
 Installing dependencies for Cinder... [ DONE ]
 Checking if the Cinder server has a cinder-volumes vg...[ DONE ]
 Adding Cinder manifest entries... [ DONE ]
 Adding Nova API manifest entries... [ DONE ]
 Adding Nova Keystone manifest entries... [ DONE ]
 Adding Nova Cert manifest entries... [ DONE ]
 Adding Nova Conductor manifest entries... [ DONE ]
 Adding Nova Compute manifest entries... [ DONE ]
 Adding Nova Scheduler manifest entries... [ DONE ]
 Adding Nova VNC Proxy manifest entries... [ DONE ]
 Adding Nova Common manifest entries... [ DONE ]
 Adding Openstack Network-related Nova manifest entries...[ DONE ]
 Adding Quantum API manifest entries... [ DONE ]
 Adding Quantum Keystone manifest entries... [ DONE ]
 Adding Quantum L3 manifest entries... [ DONE ]
 Adding Quantum L2 Agent manifest entries... [ DONE ]
 Adding Quantum DHCP Agent manifest entries... [ DONE ]
 Adding Quantum Metadata Agent manifest entries... [ DONE ]
 Adding OpenStack Client manifest entries... [ DONE ]
 Adding Horizon manifest entries... [ DONE ]
 Preparing servers... [ DONE ]
 Adding post install manifest entries... [ DONE ]
 Installing Dependencies... [ DONE ]
 Copying Puppet modules and manifests... [ DONE ]
 Applying Puppet manifests...
 Applying 192.168.1.10_prescript.pp
 192.168.1.10_prescript.pp : [ DONE ]
 Applying 192.168.1.10_mysql.pp
 Applying 192.168.1.10_qpid.pp
 192.168.1.10_mysql.pp : [ DONE ]
 192.168.1.10_qpid.pp : [ DONE ]
 Applying 192.168.1.10_keystone.pp
 Applying 192.168.1.10_glance.pp
 Applying 192.168.1.10_cinder.pp
 192.168.1.10_keystone.pp : [ DONE ]
 192.168.1.10_glance.pp : [ DONE ]
 192.168.1.10_cinder.pp : [ DONE ]
 Applying 192.168.1.10_api_nova.pp
 192.168.1.10_api_nova.pp : [ DONE ]
 Applying 192.168.1.10_nova.pp
 192.168.1.10_nova.pp : [ DONE ]
 Applying 192.168.1.10_quantum.pp
 192.168.1.10_quantum.pp : [ DONE ]
 Applying 192.168.1.10_osclient.pp
 Applying 192.168.1.10_horizon.pp
 192.168.1.10_osclient.pp : [ DONE ]
 192.168.1.10_horizon.pp : [ DONE ]
 Applying 192.168.1.10_postscript.pp
 192.168.1.10_postscript.pp : [ DONE ]
 [ DONE ]
 Finalizing... [ DONE ]
**** Installation completed successfully ******
Additional information:
 * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
 * To use the command line tools you need to source the file /root/keystonerc_admin created on 192.168.1.10
 * NOTE : A certificate was generated to be used for ssl, You should change the ssl certificate configured in /etc/httpd/conf.d/ssl.conf on 192.168.1.10 to use a CA signed cert.
 * To use the console, browse to https://192.168.1.10/dashboard
 * The RDO kernel that includes network namespace (netns) support has been installed on host 192.168.1.10.
 * The installation log file is available at: /var/tmp/packstack/20140304-132346-Qt_Nyv/openstack-setup.log
[root@ns235842 ~]#

Now you can browse to

https://192.168.1.10/dashboard and start playing with your cloud.
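
If you prefer the command line, the install notes above already point at the credentials file; a quick smoke test looks roughly like this, using the clients shipped with Grizzly:

 source /root/keystonerc_admin # load the admin credentials
 keystone user-list # list Keystone users
 nova list # list running instances (empty on a fresh install)
 glance image-list # list registered images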

Next will be the manual installation.

Thank You.
Zaky
