Friday, December 12, 2014

How to verify you are connecting to the right host using ssh

You try to connect to a host named webserver1 and get a confirmation prompt like the one below:
The authenticity of host 'webserver1.example.com (10.17.10.16)' can't be established.
RSA key fingerprint is 74:78:10:04:95:3e:c5:d9:b3:fb:50:f2:05:9b:87:bc.
Are you sure you want to continue connecting (yes/no)?

How do you verify that you are connecting to the right host, and that a man-in-the-middle attack is not redirecting you to a rogue host to capture your credentials?

On the destination server, run the following command:

#ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key
2048 74:78:10:04:95:3e:c5:d9:b3:fb:50:f2:05:9b:87:bc /etc/ssh/ssh_host_rsa_key.pub (RSA)

If the fingerprint in the command output matches the fingerprint in the connection prompt, you are connecting to the right host.
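If you cannot run commands on the destination directly, you can also fetch the key over the network and fingerprint it locally. A sketch using ssh-keyscan (note that ssh-keyscan is itself subject to man-in-the-middle interception, so only trust this from a network path you already trust, such as a console session or VPN):

#ssh-keyscan -t rsa webserver1.example.com > /tmp/webserver1.pub
#ssh-keygen -l -f /tmp/webserver1.pub
2048 74:78:10:04:95:3e:c5:d9:b3:fb:50:f2:05:9b:87:bc webserver1.example.com (RSA)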

Tuesday, November 25, 2014

User access to a database in MySQL



Database creation


  • Create a database named wordpress
    mysql> create database wordpress;

User creation

  • Create a user named wpadmin who can log in from a remote host only
    mysql> create user 'wpadmin'@'%' identified by 'XXXX';
  • The % symbol means that the user can connect remotely from any host. If you want to allow the user to connect only from a particular host, use the host name instead of %.
    mysql> create user 'wpadmin'@'webserver1.example.com' identified by 'XXXX';
  • The above query creates a user named wpadmin who can connect only from the host webserver1.example.com

Granting user access to database

  • Grant the select, insert, update, delete, create, drop, index, alter, and references privileges to user wpadmin on database wordpress from any remote host
     mysql> grant select,insert,update,delete,create,drop,index,alter,references on wordpress.* to 'wpadmin'@'%';
  • The above query allows user wpadmin to perform select, insert, update, delete, create, drop, index, alter, and references operations on the wordpress database remotely
  • Flush privileges to bring the changes into effect
    mysql> flush privileges;

List the user, host, and password of all users on the system

  • List the user, host, and password of all users on the system
    mysql> select user,host,password from mysql.user;

List privileges granted to a particular MySQL user

  • List privileges granted to user wpadmin
    mysql> show grants for 'wpadmin'@'%';
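
If you later need to tighten or remove access, the reverse operations follow the same pattern; a quick sketch using the same example names:

  • Revoke some privileges, or drop the user entirely
    mysql> revoke insert,update,delete on wordpress.* from 'wpadmin'@'%';
    mysql> drop user 'wpadmin'@'%';
    mysql> flush privileges;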

EAPS configuration on Extreme Switches


Why EAPS?

  • The networking industry has relied on the Spanning Tree Protocol (STP) in large Layer 2 networks to provide a certain level of redundancy.
  • However, STP has proven inadequate to provide the level of resiliency required for real-time and mission critical applications.
  • Ethernet Automatic Protection Switching (EAPS) is Extreme Networks’ solution for fault-tolerant Layer 2 ring topologies.
  • EAPS is responsible for loop-free operation and sub-second ring recovery.
  • It provides end users with the kind of continuous operation usually available only in voice networks, and does so with far less complexity than STP.

Configuration

  • For our example, let's take the first domain, e1-domain, where the 192.168.1.1 switch is the master node and all other switches are transit nodes

Creating and configuring EAPS Domains

  • Master node configuration
    • Create EAPS domain
      switch192-168-1-1 # create eaps e1-domain
    • Configure switch as Master node
      switch192-168-1-1 # configure eaps e1-domain mode master
    • Configure port 1 as primary port
      switch192-168-1-1 # configure eaps e1-domain primary port 1
    • Configure port 2 as secondary port
      switch192-168-1-1 # configure eaps e1-domain secondary port 2
  • Transit node configuration
    • Create EAPS domain
      switch192-168-1-2 # create eaps e1-domain
    • Configure switch as Transit node
      switch192-168-1-2 # configure eaps e1-domain mode transit
    • Configure port 1 as primary port
      switch192-168-1-2 # configure eaps e1-domain primary port 1
    • Configure port 2 as secondary port
      switch192-168-1-2 # configure eaps e1-domain secondary port 2

Creating and configuring EAPS Control VLANs

  • Create the control VLAN
    switch192-168-1-1 # create vlan control-1
  • Tag the control VLAN
    switch192-168-1-1 # configure vlan control-1 tag 1
  • Add ports to the control VLAN
    switch192-168-1-1 # configure vlan control-1 add ports 1,2 tagged
  • Add the control VLAN to the EAPS domain
    switch192-168-1-1 # configure eaps e1-domain add control vlan control-1
  • Run the above commands on all switches that are part of the same ring and domain

Creating and configuring EAPS Shared Ports

  • On the master node sharing the common link
    • Create a shared port
      switch192-168-1-1 # create eaps shared-port 1
    • On the master node, configure the shared port in partner mode
      switch192-168-1-1 # configure eaps shared-port 1 mode partner
    • Assign a link ID to the common link
      switch192-168-1-1 # configure eaps shared-port 1 link-id 200
  • On the transit node sharing the common link
    • Create a shared port
      switch192-168-1-2 # create eaps shared-port 1
    • On the transit node, configure the shared port in controller mode
      switch192-168-1-2 # configure eaps shared-port 1 mode controller
    • Assign a link ID to the common link
      switch192-168-1-2 # configure eaps shared-port 1 link-id 200

Enabling EAPS Protocol and EAPS Domain

  • Enable EAPS on all master and transit nodes (for the given domain only)
    switch192-168-1-1 # enable eaps e1-domain
  • Enable EAPS globally on the switch
    switch192-168-1-1 # enable eaps

Creating and configuring EAPS Protected VLANs

  • Create the protected VLAN
    switch192-168-1-1 # create vlan voice-100
  • Tag the protected VLAN
    switch192-168-1-1 # configure vlan voice-100 tag 100
  • Add the protected VLAN to the EAPS domain
    switch192-168-1-1 # configure eaps e1-domain add protected vlan voice-100
  • Add ports to the protected VLAN
    switch192-168-1-1 # configure vlan voice-100 add ports 1,2 tagged

Verifying EAPS Configuration and Operating State

  • Show EAPS status
    switch192-168-1-1 # show eaps
  • Show EAPS status for a particular domain
    switch192-168-1-1 # show eaps e1-domain

Importing a raw KVM VM image to Amazon AWS


Note: I am assuming that you have already configured the EC2 tools, system path, and environment variables

1. First, create a VM on your KVM host of an appropriate size (it does not have to match the size you will specify on AWS). The smaller the image, the faster the upload and the less you will be charged.
In my case I created an 8 GB image with two partitions (one /boot and another /) on the same volume.
AWS recommends having / and /boot on one volume; otherwise the import won't work.
2. During installation, make sure you are getting an IP address dynamically (DHCP).
3. Once installation is complete, disable and stop iptables
#chkconfig iptables off
#/etc/init.d/iptables stop
4. Create a file named authorized_keys inside the /root/.ssh directory and paste your public key into it (the .ssh directory should be mode 700 and authorized_keys mode 600)
5. Try to log in from another host
#ssh -i <key> root@<IP_ADDRESS>
If you can log in without a password, you are good; proceed to the next step.
6. Update your OS
#yum update -y
7. Once the update is done, reboot your system
8. Install all the necessary packages/software you need, make any other changes, and shut down your system
9. Now upload your VM to AWS using the ec2-import-instance command
#ec2-import-instance test.img -f RAW -p Linux -t m3.xlarge -a x86_64 -b vmbucket -s 50 -o <ACCESS_KEY> -w <SECRET_KEY> --region us-west-2
Note: you can import Linux only to certain types of EC2 instances:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites.html
Using the -s option you can specify the size (in GiB) of the EBS volume you want created for the EC2 instance
10. You can monitor the progress using
#ec2-describe-conversion-tasks import-i-ffrs53ha -O <ACCESS_KEY> -W <SECRET_KEY> --region us-west-2
11. Once the conversion is complete, it will create an EBS volume of the size you specified and an instance of the type you specified (it takes some time to create the volume even after ec2-describe-conversion-tasks reports completion, so please be patient)
12. Start your instance and you are good to go
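
Assuming the conversion produced an instance, you can list it and start it with the same EC2 tools (the instance ID below is a placeholder; use the one reported by ec2-describe-conversion-tasks):
#ec2-describe-instances -O <ACCESS_KEY> -W <SECRET_KEY> --region us-west-2
#ec2-start-instances i-xxxxxxxx -O <ACCESS_KEY> -W <SECRET_KEY> --region us-west-2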

Restoring a slave from the master server on MySQL 5.6


Currently, I am working on moving one giant, complicated webapp from our on-premises network to the Amazon AWS cloud. There are many reasons for moving a webapp from on-premises to Amazon's cloud; to mention a couple: redundancy, scalability, and high availability. For me the only drawbacks of moving to the cloud are security and the fact that you don't have complete control of the hardware and architecture.

Anyway, we are currently using MySQL Server 5.1, so while moving to the AWS cloud I decided to go with MySQL Server 5.6 because of its many enhancements, especially GTID-based replication and the MySQL utilities.
After getting the master server running successfully, replicating the slave from the master turned out not to be an easy task. It would have helped if I had first gone into detail on how MySQL replication works with GTIDs.
I was getting the following error on my slave server:
2014-01-26 22:40:02 16119 [ERROR] Error reading packet from server: Found old binary log without GTIDs while looking for the oldest binary log that contains any GTID that is not in the given gtid set ( server_errno=1236)
2014-01-26 22:40:02 16119 [ERROR] Slave I/O: Got fatal error 1236 from master when reading data from binary log: 'Found old binary log without GTIDs while looking for the oldest binary log that contains any GTID that is not in the given gtid set', Error_code: 1236
You need to know how the GTID replication process works to resolve this.
How to set up a new slave
With the introduction of GTIDs in MySQL 5.6, binary log file names and positions are no longer needed for replication. Instead, we need to know which GTIDs the master has executed and set them on the slave. MySQL keeps two global variables with GTID sets in them:
gtid_executed: a representation of the set of all transactions logged in the binary log
gtid_purged: a representation of the set of all transactions that have been purged from the binary log
So the process to replicate a slave from the master is:
1. Take a backup from the master and store the value of gtid_executed
2. Restore the backup on the slave and set gtid_purged to the value of gtid_executed from the master
The new mysqldump can do both of those tasks for us.
Let's go through an example of how to take a backup from the master and restore it on the slave to set up a new replication server.
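A sketch of that flow, assuming mysqldump from MySQL 5.6 with gtid_mode enabled (it writes the SET @@GLOBAL.GTID_PURGED statement into the dump for you):
#mysqldump --all-databases --triggers --routines --events > backup.sql
The dump will contain a line like SET @@GLOBAL.GTID_PURGED='9a511b7b-7059-11e2-9a24-08002762b8af:1-14'; load it on the slave with:
#mysql < backup.sql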
Restore a slave
Let's imagine that our slave server has fallen behind the master. This is the error state we are going to see:
Slave_IO_Running: No
Slave_SQL_Running: Yes
First, we get the GTID_EXECUTED from the master:
master > show global variables like 'GTID_EXECUTED';
+---------------+-------------------------------------------+
| Variable_name | Value                                     |
+---------------+-------------------------------------------+
| gtid_executed | 9a511b7b-7059-11e2-9a24-08002762b8af:1-14 |
+---------------+-------------------------------------------+
And we set it on the slave:
slave1 > set global GTID_PURGED="9a511b7b-7059-11e2-9a24-08002762b8af:1-14";
ERROR 1840 (HY000): GTID_PURGED can only be set when GTID_EXECUTED is empty.
GTID_EXECUTED should be empty before changing GTID_PURGED manually, but we can't change it with SET because it is a read-only variable. The only way to clear it is with reset master (yes, on a slave server):
slave1 > reset master;
slave1 > show global variables like 'GTID_EXECUTED';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| gtid_executed |       |
+---------------+-------+
slave1 > set global GTID_PURGED="9a511b7b-7059-11e2-9a24-08002762b8af:1-14";
slave1 > start slave io_thread;
slave1 > show slave status\G
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
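For completeness: on a freshly restored slave, the replication connection itself is pointed at the master with MASTER_AUTO_POSITION instead of a binlog file/position pair. A sketch (host and credentials are placeholders):
slave1 > change master to master_host='master.example.com', master_user='repl', master_password='XXXX', master_auto_position=1;
slave1 > start slave;
slave1 > show slave status\G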

Configure an SSL cert on an Amazon Web Services (AWS) ELB


Well, normally you create a CSR (certificate signing request) using openssl, and the certificate you get back from the certificate authority is in x509 format; AWS ELB, however, only supports PEM-encoded keys and certificates. So in this post I am going to talk about how to convert your key and cert to PEM encoding and configure the ELB with them.

1. Convert your cert to PEM-encoded format
# openssl x509 -in example.com.cert -out cert.example.com.der -outform DER
# openssl x509 -in cert.example.com.der -inform DER -out cert.example.com.pem -outform PEM
Your PEM-encoded cert will be cert.example.com.pem. You can view its content using
# cat cert.example.com.pem
2. Convert your key to PEM-encoded format
# openssl rsa -in example.com.key -out key.example.com.der -outform DER
# openssl rsa -in key.example.com.der -inform DER -out key.example.com.pem -outform PEM
Your PEM-encoded key will be key.example.com.pem. You can view its content using
# cat key.example.com.pem
3. Also convert the root chain certificate to PEM-encoded format
# openssl x509 -in chain.example.com.cert -out chain.example.com.der -outform DER
# openssl x509 -in chain.example.com.der -inform DER -out chain.example.com.pem -outform PEM
Your PEM-encoded chain will be chain.example.com.pem. You can view its content using
# cat chain.example.com.pem
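Before pasting anything into the AWS console, it is worth sanity-checking that the key and certificate actually belong together; the RSA modulus of both must match:
# openssl x509 -noout -modulus -in cert.example.com.pem | openssl md5
# openssl rsa -noout -modulus -in key.example.com.pem | openssl md5
If the two hashes are identical, the key matches the certificate.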
4. Now login to AWS console and go to Listeners tab on ELB page.
5. Select HTTPS as the load balancer protocol and click Change under SSL Certificate
6. Copy and paste the content of the PEM-encoded cert file
7. Follow the same steps for the private key and the certificate chain
8. You can leave the instance protocol as HTTP (80) so that you don't have to configure the key and cert on all of your web servers; the ELB takes care of SSL termination

User management using puppet


When you are hosting applications on a cloud like AWS, you don't want to use your company's LDAP, which contains private company information, for security reasons, and you also don't want it running on hardware you don't own.

So I had to come up with a new way of managing users in the cloud, and I decided to go with Puppet.
We are going to manage passwordless users (SSH key-based login) through Puppet.
First we will define a virtual user type in Puppet.
Suppose my module name is accounts.
My init.pp file should look something like the following.
# init.pp file
define accounts::virtual ($uid, $realname, $pass, $sshkeytype, $sshkey) {

  include accounts::params

  # Pull in values from accounts::params
  $homepath = $accounts::params::homepath
  $shell    = $accounts::params::shell

  # Create the user
  user { $title:
    ensure     => present,
    uid        => $uid,
    gid        => $title,
    shell      => $shell,
    home       => "${homepath}/${title}",
    comment    => $realname,
    password   => $pass,
    managehome => true,
    require    => Group[$title],
  }

  # Create a matching group
  group { $title:
    gid => $uid,
  }

  # Ensure the home directory exists with the right permissions
  file { "${homepath}/${title}":
    ensure  => directory,
    owner   => $title,
    group   => $title,
    mode    => '0700',
    require => [ User[$title], Group[$title] ],
  }

  # Ensure the .ssh directory exists with the right permissions
  file { "${homepath}/${title}/.ssh":
    ensure  => directory,
    owner   => $title,
    group   => $title,
    mode    => '0700',
    require => File["${homepath}/${title}"],
  }

  # Ensure the .bashrc file exists with the right permissions
  file { "${homepath}/${title}/.bashrc":
    ensure  => present,
    mode    => '0644',
    owner   => $title,
    group   => $title,
    source  => 'puppet:///modules/accounts/.bashrc',
    require => File["${homepath}/${title}"],
  }

  # Ensure the .bash_profile file exists with the right permissions
  file { "${homepath}/${title}/.bash_profile":
    ensure  => present,
    mode    => '0644',
    owner   => $title,
    group   => $title,
    source  => 'puppet:///modules/accounts/.bash_profile',
    require => File["${homepath}/${title}"],
  }

  # Ensure the .bash_logout file exists with the right permissions
  file { "${homepath}/${title}/.bash_logout":
    ensure  => present,
    mode    => '0644',
    owner   => $title,
    group   => $title,
    source  => 'puppet:///modules/accounts/.bash_logout',
    require => File["${homepath}/${title}"],
  }

  # Add the user's SSH key
  if ($sshkey != '') {
    ssh_authorized_key { $title:
      ensure => present,
      name   => $title,
      user   => $title,
      type   => $sshkeytype,
      key    => $sshkey,
    }
  }
}
In the above Puppet config, the .bashrc, .bash_logout, and .bash_profile resources are optional; drop them if you don't want to manage those files with Puppet.
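Note that the define pulls $homepath and $shell from accounts::params, which is not shown in this post. A minimal sketch of what that class might look like (the values here are assumptions; adjust for your platform):
# params.pp file (illustrative values)
class accounts::params {
  $homepath = '/home'
  $shell    = '/bin/bash'
}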
Next, if you want to add users to a host named agent, create a file named agent.pp inside the manifests directory and add the required users to it; in Puppet terms, this is where we realize the user.
# agent.pp file
class accounts::agent {
  accounts::virtual { 'keshab':
    uid        => 501,
    realname   => 'Keshab Budhathoky',
    pass       => '',
    sshkeytype => 'rsa',
    sshkey     => 'AAAAB3NzaC1yc2EAAAABIwAAAQEAukGeSEZJSn5GqN17oEkU95MPa+5KInJNx018LK3eeNDWhaixBJKEp9leFYZjATEMpPODt3L5whgcNuh4sNyRAQm0kEPhjtUC8n/dJK8ZJcfTVDK3gymhvzbe4LZpFOw+6l4AM8uhSzilk8Nq9bDhvmyOTGyR1NfPLjKnP9o9LWfSowRNMlU60SvLukQhqLkcqQX2ojKds+u0jT7LLZyFRjGeju6RQNHIMCX3ZVMHRfsFYIpSJuNttZAY8MBhk93ccgwCALQ0F+icQQ+jgyL3OeQ9Q7FNI/oOzUtJRNktgOZc9IqiBg6pJcIOrEWiS2iGweAQHJSgNIy/Miq234sgdf24tw34dew==',
  }
}
Add the accounts class to the node agent.
# nodes.pp file
node 'agent.example.com' { class { 'accounts::agent': } }
Run the Puppet agent on the client
# puppet agent --test
You can use the --noop option for a dry run and --verbose for detailed output on the console.
Once the Puppet agent run is successful, you should see the user added to /etc/passwd, the home directory created at /home/keshab, and the RSA key at /home/keshab/.ssh/authorized_keys.
Once all of the above succeeds, you should be able to log in from any other host without a password, as long as you have the matching private key.

How to give a user managed with ssh_authorized_keys sudo access without a password prompt, using Puppet


The whole point of ssh_authorized_keys is to provide passwordless, key-based access to a machine or server.

We talked about user management using Puppet in a previous post:
http://www.sysadmincloud.com/2014/02/11/user-management-using-puppet
In this post we are going to walk through how to give a user sudo access without a password prompt, and how to manage that access with Puppet.
Basically, all sudo access is managed through the sudoers file located at /etc/sudoers.
To manage sudo access, simply create a module named sudo and add files and manifests directories inside it.
Create file named init.pp inside manifests directory and add the following content.
class sudo {
  file { '/etc/sudoers':
    ensure => 'file',
    mode   => '0440',
    owner  => 'root',
    group  => 'root',
    source => 'puppet:///modules/sudo/sudoers',
  }
}
Create a file named sudoers inside the files directory (the name must match the source URL above).
Defaults !visiblepw
Defaults always_set_home

Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin
root ALL=(ALL) ALL
## User Table ##
# User keshab has sudo access on ALL machines, i.e. whichever machines this module is applied to.
# The NOPASSWD option allows sudo without prompting for a password.
keshab ALL=NOPASSWD:ALL
# User chris has sudo access only on certain machines (queue and shared)
chris queue = NOPASSWD: ALL : shared = NOPASSWD: ALL
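Since a malformed sudoers file can lock everyone out of sudo, it is worth validating the file locally before Puppet distributes it; visudo has a check mode for exactly this (the path here is the module's files directory):
# visudo -c -f files/sudoers
A syntax error reported here means you should fix the file before running the agent.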
Once you are done with the Puppet config, add the module class to the host's node definition and run the Puppet agent on the client side
# puppet agent --test --verbose
Use the --noop option to simulate what the module will do without actually making changes.

Thursday, November 6, 2014

Configuring HTTP Strict Transport Security on Apache

What is HSTS?


HSTS (HTTP Strict Transport Security) is a security feature that lets a web site tell browsers it should only be communicated with over HTTPS, never plain HTTP.

If a web site accepts a connection over HTTP and redirects to HTTPS, the user may initially talk to the unencrypted version of the site before being redirected to the encrypted version.
If, for example, the user types http://www.example.com/ or even just example.com, the initial conversation happens over HTTP before the redirect to HTTPS.

This opens up the potential for a man-in-the-middle attack, where the redirect could be exploited to direct a user to a malicious site instead of the secure version of the original page.


The HTTP Strict Transport Security feature lets a web site inform the browser that it should never load the site using HTTP, and should automatically convert all attempts to access the site using HTTP to HTTPS requests instead.

Configuring HSTS on Apache server
1. Make sure the mod_headers module is loaded

LoadModule headers_module modules/mod_headers.so

2. Set the header so that every time a user visits the web site, the expiration time is set to two years (63072000 seconds) and the policy is applied to all subdomains too

<VirtualHost 67.34.67.43:443>
    Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains"
</VirtualHost>

You have to set this on the HTTPS VirtualHost only.

3. Write a rewrite rule to redirect visitors to HTTPS.

<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteCond %{HTTPS} off
  RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
</IfModule>

4. Restart Apache server

# /etc/init.d/httpd restart

5. Confirm that the change took effect.
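
A quick way to verify is to request the site over HTTPS and look for the header in the response (the hostname is a placeholder):

# curl -s -D - -o /dev/null https://www.example.com/ | grep Strict
Strict-Transport-Security: max-age=63072000; includeSubDomains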


Friday, October 17, 2014

Best practices to deploy SSL/TLS key, certificate on web server

As part of my job responsibilities, I am responsible for creating key and CSR files and ordering certificates from the CA. There is not a lot of information out there on how to securely deploy a key and certificate on a web server for HTTPS use.

One thing you really have to make sure of is that the key and certificate are owned by root and are read-only. Make sure the group ownership is right too. I even prefer to restrict the permissions of the directory where the cert and key are stored.
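
For example, on a typical RHEL/CentOS layout (the paths are illustrative; adjust to wherever your key and cert actually live):

# chown root:root /etc/pki/tls/private/example.com.key /etc/pki/tls/certs/example.com.crt
# chmod 400 /etc/pki/tls/private/example.com.key
# chmod 444 /etc/pki/tls/certs/example.com.crt
# chmod 700 /etc/pki/tls/private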

If you are the person responsible for deploying keys and certificates on web servers, especially production environment servers, then I definitely recommend reading this PDF doc on how to deploy a key and certificate.

I use https://www.ssllabs.com/ssltest to test my web servers' HTTPS configuration rating.

The test output also provides detailed information about browser support.

https://www.ssllabs.com is a great site for staying up to date with SSL/TLS information, vulnerabilities, etc.

It's on my weekly reading list.




Wednesday, October 15, 2014

Running Private Cloud on Cloud

I started working on the cloud almost one and a half years ago, and it has been a roller coaster ride. I started by moving an on-premises data center web application to the AWS cloud. It has been a great experience, and I have to admit that AWS is awesome.


Once done moving the on-premises web application to the cloud, I have been working on the setup, configuration, and automation of a private cloud for our company as well as our clients for the last 4-5 months. I would like to share my experience and work through this blog. Suggestions and feedback are appreciated.

Basically, our private cloud web applications need Apache, Tomcat, a database, and storage to run.

I have created an AMI template that has the Apache, Tomcat, and database (MySQL) services installed, with the EBS volumes needed for Apache, Tomcat, the database, and storage already created, attached, and configured.

AWS ephemeral storage is free and great for temporary storage, so I use it for /tmp. By default the ephemeral disk supports the ext3 filesystem, and /tmp needs permission 1777 for my private cloud web application. I mount the ephemeral disk as /tmp at boot time using fstab, and use a boot service to set the /tmp permission to 1777, as sketched below.
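As a sketch, the two pieces look something like this (the device name /dev/xvdb is an assumption; check how your instance type exposes its ephemeral disk):

# /etc/fstab entry -- mount the first ephemeral disk as /tmp
/dev/xvdb   /tmp   ext3   defaults,noatime   0 0
# /etc/rc.local -- reset /tmp permissions after mount
chmod 1777 /tmp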

This is the base AMI for the private cloud. I am using Vagrant to fire up new instances from the AMI template, and shell and Puppet provisioners to make each instance ready for the web application.

I am using the shell provisioner to update the OS and install the necessary repositories, including of course the Puppet repo and package, for instance provisioning.

I am doing all the Apache, Tomcat, database, and storage configuration using Puppet. I am also doing all the necessary setup and configuration for the web application with Puppet: creating all the directories, rsync configuration, database optimization, Apache VirtualHost configuration and tuning, setting the database root password, and creating databases, users, and permissions.

I am using flywaydb for the web application's database version control. Once an instance is provisioned and up and running, I initialize the database with flywaydb and run the database migration, which migrates all the tables and data needed for the web application to run.
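With the Flyway command-line client that looks roughly like this (the JDBC URL, credentials, and database name are assumptions):
# flyway -url=jdbc:mysql://localhost:3306/webapp -user=webapp -password=XXXX baseline
# flyway -url=jdbc:mysql://localhost:3306/webapp -user=webapp -password=XXXX migrate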

When the migration is done, I deploy the web application and restart Tomcat. Once the Tomcat restart is done, the private cloud is ready for use.

I still have lots of things to get done in terms of automation and cloud.

I use Packer to convert the AMI template to different formats so that it can be used with KVM, VMware, VirtualBox, Docker, etc.