Advanced Search

Using the following Google search operators:

filetype:
intitle:
inurl:
site:
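
A couple of example queries combining these operators (illustrative only; the domain and paths are placeholders):

site:example.com filetype:pdf ( PDF files indexed on a single domain )
intitle:"index of" inurl:backup ( exposed directory listings named "backup" )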

If you want to experiment more with the art of Googling, then check:

http://www.exploit-db.com/

http://www.hackersforcharity.org/ghdb/

Tools for the art of Googling:

Google hack
Search Diggity

Thank You.

Deployment Types :

Distributed deployment

Management >> connected to >> Firewall >> outside network

Normal setup

Management server : IP 192.168.1.2, connected to VMNET0
Physical host : connected to VMNET0, IP 192.168.1.10
The firewall has two interfaces:
eth0 : 192.168.1.3, connected to VMNET0
eth1 : 202.54.34.1, connected to VMNET1

Outside Windows server : connected to VMNET1, IP 202.54.34.10

Install GAIA on the Management server and the Firewall, bring each one up for its role, and set the firewall key (the activation key used to establish SIC trust with Management).
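
As a minimal sketch (assuming GAIA's clish and the interface names above), the firewall interfaces could be set like this:

set interface eth0 ipv4-address 192.168.1.3 mask-length 24
set interface eth1 ipv4-address 202.54.34.1 mask-length 24
set interface eth0 state on
set interface eth1 state on
save config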

After the setup finishes, add a policy to allow ping from the outside to the inside network and from the inside to the outside network.

2- Integrating an LDAP authentication server with Check Point users

1- Install LDAP (Active Directory) on the Windows Server and note:
DomainName : windows.2003
UserName : Administrator
password : *********
IPv4 : 202.54.34.10

2- Smart Console setup

a- Enable User Directory in Global Properties
b- Create a Node > Host object for the Windows Server
c- Servers & OPSEC > Servers > Add LDAP Account Unit
Name : LDAP
profile : MS_AD
Domain : windows.2003
enable : CRL retrieval , User Management , Active Directory Query
Server Add
Host : Windows_Server
port : 389
user : Administrator
LoginDN : cn=Administrator,cn=users,dc=windows,dc=2003
Set authentication for Check Point only
Object Management > Fetch > you should see the profile now
d- Add a new LDAP group > select the new account unit
Users now appear, and you can log in to Check Point with those users.
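
You can also verify the LDAP bind credentials from any Linux box with ldapsearch (a quick sanity check using the values above; the base DN simply mirrors the login DN):

ldapsearch -x -H ldap://202.54.34.10:389 \
  -D "cn=Administrator,cn=users,dc=windows,dc=2003" -W \
  -b "dc=windows,dc=2003" "(objectClass=user)" sAMAccountName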

Thank You.

Check Point is software that provides network security at network layers L2, L3 & L7:

1- firewalls
2- IDS/IPS
3- multiple blades

History of the Check Point platform:
1- IPSO, a Unix-based OS
2- SPLAT (SecurePlatform), Red Hat based
3- GAIA, the latest version, which supports both IPSO & SPLAT features

Check Point provides enforcement blades, and one of its advantages is that it can use the Websense protocol:
a- Firewalls
b- Anti Spam
c- VPN (site-to-site)
d- url2https blade
e- mobile access blade
f- anti-bot

It is provided in two forms:
1- as hardware
2- as software

It has 3 main advantages in the market compared to other products (Cisco, Juniper and Nokia):

1- Check Point comes in two forms:
a- Software based: you can install it on your own server and use it as a gateway, or in your cloud infrastructure
b- Hardware based: many appliance models are available depending on your traffic

2- Smart Management: you can create a policy from the client, install it to any gateway, distribute it, or save it anywhere. In Cisco you must configure each appliance one at a time, with manual import/export of the configuration as the only option, while Check Point works more like a cloud in this area.

3- Check Point's software is built on its patented stateful inspection technology.

Firewall Types :

1- Stateless firewall: uses ACLs (allow/deny)
a- works on network layers L3 & L4
b- checks each packet one by one (adds delay on the network)
c- keeps no information about established connections

2- Stateful inspection firewall
a- keeps information about all connections in a dynamic state table
b- only checks the first packet; if it is allowed, all following packets are allowed
c- checks packets between L2 & L3 with the inspection engine

3- Application blade firewall
a- keeps information about all connections in a dynamic state table
b- only checks the first packet; if it is allowed, all following packets are allowed
c- checks packets between L2 & L3 with the inspection engine
d- can apply rules at the L7 application layer, for example IPS & IDS
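
As an illustration of the stateless vs. stateful difference (using Linux iptables here, not Check Point), compare an ACL-style rule with rules that rely on the connection-tracking state table:

# Stateless style: every packet is matched on L3/L4 fields only
iptables -A FORWARD -p tcp -s 192.168.1.0/24 --dport 80 -j ACCEPT
# Stateful style: the first packet of a flow is checked as NEW,
# replies are allowed from the state table without re-matching the rule base
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -p tcp --dport 80 -m state --state NEW -j ACCEPT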

Check Point SMART = Security Management Architecture

1- Smart Client
a- a web UI that allows you to check and configure most of the firewall systems
b- it connects to the Smart Management server to configure and apply policy via SmartConsole

2- Smart Management
a- holds all information about all firewall gateways (GWs)
b- connected to the firewall gateways

3- Firewall gateways: they hold the enforcement ^_^

Types of deployment:

1- Standalone deployment: the SM and SG are on the same box (the Check Point 1100 and 800 series)
2- Distributed deployment: the SM is installed on one box and the SG on another box
3- Standalone high-availability deployment: two standalone devices working as a failover cluster
4- Multi-domain deployment: one SM managing many SGs over the internet (cloud-based)
5- Bridge deployment: the SG acts as an L2 switch on the local network

To be continued

This setup architecture is:

1 dedicated server providing iSCSI storage using the tgt service

3 KVM VMs (node1, node2, node3) used as cluster nodes

1 KVM VM running the admin panel "Luci"

hosts :

host.id3m.net
admin.id3m.net
node{1,2,3}.id3m.net

OS : CentOS 6.5 64bit

Set up the repos on all servers.

On all 4 servers:

yum localinstall http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm http://rpms.famillecollet.com/enterprise/remi-release-6.rpm http://ovirt.org/releases/ovirt-release.noarch.rpm

Let's start with the dedicated server to prepare the iSCSI storage:

On Host Server :
 pvcreate /dev/sda3 # Create Physical Volume
 vgcreate iscsi /dev/sda3 # Create Volume Group
 lvcreate -L 10G -n shared iscsi # Create Logical Volume
 yum -y install scsi-target-utils # install the iSCSI target service
# set the iSCSI share
 vim /etc/tgt/targets.conf
 <target iqn.2010-09.net.id3m.node.gluster>
     backing-store /dev/iscsi/shared
     incominguser admin admin
 </target>
:wq
/etc/init.d/tgtd restart ; chkconfig tgtd on # start the service and enable it on boot

Now the target is ready; next, install the iSCSI client (initiator) on all nodes so we can discover the storage.

# to be done on node1 , node2 , node3 only :
yum install iscsi-ini* -y # install the iSCSI initiator packages
 iscsiadm -m discovery -t st -p 5.9.90.20 # discover the share
 iscsiadm -m node --target iqn.2010-09.net.id3m.node.gluster -l # log in to the iSCSI target
 chkconfig iscsid on ; chkconfig iscsi on # enable auto start
 fdisk -l # check the new drive; you can also see it in /var/log/messages
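
Note: since the target was configured with incominguser admin admin, the initiators need matching CHAP credentials before discovery/login will succeed. A minimal sketch of the relevant /etc/iscsi/iscsid.conf settings (same username/password as above):

# /etc/iscsi/iscsid.conf (on node1, node2, node3)
node.session.auth.authmethod = CHAP
node.session.auth.username = admin
node.session.auth.password = admin
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = admin
discovery.sendtargets.auth.password = admin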

Now we have the storage on all nodes under /dev/sda.

The shared storage disk is /dev/sda.

Now we will install Ricci (the cluster agent) on all nodes 1, 2, 3 and the "luci" admin panel on luci.id3m.net.

# these commands are to be run on all nodes 1 , 2 , 3

yum install ricci ccs -y # install Ricci
 /etc/init.d/ricci restart; chkconfig ricci on # set auto start
 echo 1231234 | passwd --stdin ricci # set the ricci user's password

# now we will install “luci” admin panel on luci node only

yum install -y luci # install luci packages
 /etc/init.d/luci start ; chkconfig luci on # set auto start

Congratulations, everything is now installed. Let's go to the web panel to create the cluster.

login : https://luci.id3m.net:8084
username : root
password : root_passwd

Manage cluster >> Create :

Cluster Name : set your cluster name

Add the 3 nodes, all with the same password : 1231234

Check all these options :
Download Packages # the easy way to build the cluster config; it will download and install all cluster services
Reboot Nodes Before Joining Cluster # to confirm that everything is OK
Enable Shared Storage Support # because we are using the shared storage /dev/sda

######### Create Cluster #########

It will show you:

Creating node "node2.id3m.net" for cluster "cluster01": installing packages
 Creating node "node3.id3m.net" for cluster "cluster01": installing packages
 Creating node "node1.id3m.net" for cluster "cluster01": installing packages

Just wait… after 1 or 2 minutes all nodes (1, 2, 3) should reboot: The system is going down for reboot NOW!

then Luci will show :

Unable to retrieve the status for batch id 678872664 on node2.id3m.net. The node may be temporarily unreachable
 Unable to retrieve the status for batch id 938297290 on node3.id3m.net. The node may be temporarily unreachable
 Unable to retrieve the status for batch id 1889887158 on node1.id3m.net. The node may be temporarily unreachable

Don't worry, just wait a bit more…

Now it should show you that everything is OK and the cluster is running.
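
Once the nodes are back up, you can also verify the cluster state from the command line on any node (a quick check; these tools come with the cluster packages Luci installed):

clustat # show cluster name, quorum and member status
cman_tool nodes # list member nodes as seen by cman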

to be continued

CentOS 6 64bit ( rpmforge , epel & remi )

 wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
 wget http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm
 wget http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
 rpm -Uvh remi-release-6*.rpm epel-release-6*.rpm rpmforge-*.rpm
 yum -y update

Create 2 partitions of the same size on your disks
Create a RAID 0 software array
Create a VG

[root@ns235842 ~]# fdisk /dev/sdb
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
 switch off the mode (command 'c') and change display units to
 sectors (command 'u').
Command (m for help): p
Disk /dev/sdb: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe9379e9e
Device Boot Start End Blocks Id System
Command (m for help): n
Command action
 e extended
 p primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (1-19457, default 1): 2615
Last cylinder, +cylinders or +size{K,M,G} (2615-19457, default 19457):
Using default value 19457
Command (m for help): t
Selected partition 3
Hex code (type L to list codes): fd
Changed system type of partition 3 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@ns235842 ~]# partx -a /dev/sdb

Create the partition, then change its type to fd (Linux RAID autodetect).

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
pvcreate /dev/md0
vgcreate myvg /dev/md0
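
To double-check the result (optional; standard md/LVM tools):

cat /proc/mdstat # the new RAID 0 array /dev/md0 should be listed
pvs ; vgs myvg # confirm the PV and the new volume group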

done

Hello

 

In this topic I'll install OpenStack in 2 ways:

1- With Packstack, a tool that installs and configures all OpenStack services

2- Step by step: installing each OpenStack service, from Keystone to the Dashboard, with manual configuration

 

Let's start.

CentOS 6.5
OpenStack Grizzly

First we need to add a repo to our system to get the packages.

Create a .repo file in /etc/yum.repos.d/:

[epel-openstack-grizzly]
name=OpenStack Grizzly Repository for EPEL 6
baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6
enabled=1
skip_if_unavailable=1
gpgcheck=0

Then we need to verify the new repo and refresh the package metadata:

yum repolist
 yum makecache

Now install the Packstack tool:

yum install openstack-packstack

Create an SSH key for your host; we will need it later:

ssh-keygen

Generate the answer file used to auto-install all OpenStack services:

packstack --gen-answer-file answers.txt
vim answers.txt

Edit whatever you want; nothing is strictly required.
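
For example, a few entries you might tweak (illustrative; key names can vary slightly between Packstack versions):

CONFIG_KEYSTONE_ADMIN_PW=MySecretPass # admin password for the dashboard/CLI
CONFIG_NTP_SERVERS=0.pool.ntp.org # avoids the "time synchronization skipped" note in the output below
CONFIG_CINDER_VOLUMES_SIZE=20G # size of the test cinder-volumes VG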

Now start the install:

packstack --answer-file=answers.txt

Enter your root password…

Wait for the install to finish.
Welcome to Installer setup utility
Installing:
 Clean Up... [ DONE ]
 Adding pre install manifest entries... [ DONE ]
 Setting up ssh keys...root@192.168.1.10's password:
 [ DONE ]
 Adding MySQL manifest entries... [ DONE ]
 Adding QPID manifest entries... [ DONE ]
 Adding Keystone manifest entries... [ DONE ]
 Adding Glance Keystone manifest entries... [ DONE ]
 Adding Glance manifest entries... [ DONE ]
 Adding Cinder Keystone manifest entries... [ DONE ]
 Installing dependencies for Cinder... [ DONE ]
 Checking if the Cinder server has a cinder-volumes vg...[ DONE ]
 Adding Cinder manifest entries... [ DONE ]
 Adding Nova API manifest entries... [ DONE ]
 Adding Nova Keystone manifest entries... [ DONE ]
 Adding Nova Cert manifest entries... [ DONE ]
 Adding Nova Conductor manifest entries... [ DONE ]
 Adding Nova Compute manifest entries... [ DONE ]
 Adding Nova Scheduler manifest entries... [ DONE ]
 Adding Nova VNC Proxy manifest entries... [ DONE ]
 Adding Nova Common manifest entries... [ DONE ]
 Adding Openstack Network-related Nova manifest entries...[ DONE ]
 Adding Quantum API manifest entries... [ DONE ]
 Adding Quantum Keystone manifest entries... [ DONE ]
 Adding Quantum L3 manifest entries... [ DONE ]
 Adding Quantum L2 Agent manifest entries... [ DONE ]
 Adding Quantum DHCP Agent manifest entries... [ DONE ]
 Adding Quantum Metadata Agent manifest entries... [ DONE ]
 Adding OpenStack Client manifest entries... [ DONE ]
 Adding Horizon manifest entries... [ DONE ]
 Preparing servers... [ DONE ]
 Adding post install manifest entries... [ DONE ]
 Installing Dependencies... [ DONE ]
 Copying Puppet modules and manifests... [ DONE ]
 Applying Puppet manifests...
 Applying 192.168.1.10_prescript.pp
 192.168.1.10_prescript.pp : [ DONE ]
 Applying 192.168.1.10_mysql.pp
 Applying 192.168.1.10_qpid.pp
 192.168.1.10_mysql.pp : [ DONE ]
 192.168.1.10_qpid.pp : [ DONE ]
 Applying 192.168.1.10_keystone.pp
 Applying 192.168.1.10_glance.pp
 Applying 192.168.1.10_cinder.pp
 192.168.1.10_keystone.pp : [ DONE ]
 192.168.1.10_glance.pp : [ DONE ]
 192.168.1.10_cinder.pp : [ DONE ]
 Applying 192.168.1.10_api_nova.pp
 192.168.1.10_api_nova.pp : [ DONE ]
 Applying 192.168.1.10_nova.pp
 192.168.1.10_nova.pp : [ DONE ]
 Applying 192.168.1.10_quantum.pp
 192.168.1.10_quantum.pp : [ DONE ]
 Applying 192.168.1.10_osclient.pp
 Applying 192.168.1.10_horizon.pp
 192.168.1.10_osclient.pp : [ DONE ]
 192.168.1.10_horizon.pp : [ DONE ]
 Applying 192.168.1.10_postscript.pp
 192.168.1.10_postscript.pp : [ DONE ]
 [ DONE ]
 Finalizing... [ DONE ]
**** Installation completed successfully ******
Additional information:
 * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
 * To use the command line tools you need to source the file /root/keystonerc_admin created on 192.168.1.10
 * NOTE : A certificate was generated to be used for ssl, You should change the ssl certificate configured in /etc/httpd/conf.d/ssl.conf on 192.168.1.10 to use a CA signed cert.
 * To use the console, browse to https://192.168.1.10/dashboard
 * The RDO kernel that includes network namespace (netns) support has been installed on host 192.168.1.10.
 * The installation log file is available at: /var/tmp/packstack/20140304-132346-Qt_Nyv/openstack-setup.log
[root@ns235842 ~]#

Now you can browse to

https://192.168.1.10/dashboard and start playing with your cloud.
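
You can also use the command-line clients after sourcing the keystonerc file mentioned in the installer output (a quick sanity check; Grizzly-era client commands):

source /root/keystonerc_admin # load the admin credentials
keystone service-list # list the registered OpenStack services
nova list # list instances (empty at this point)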

Next will be the manual install.

Thank You.
Zaky