When you try to connect to a host such as webserver1 for the first time, you get a confirmation prompt like the one below:
The authenticity of host 'webserver1.example.com (10.17.10.16)' can't be established.
RSA key fingerprint is 74:78:10:04:95:3e:c5:d9:b3:fb:50:f2:05:9b:87:bc.
Are you sure you want to continue connecting (yes/no)?
How do you verify that you are connecting to the right host, and that a man-in-the-middle attack is not redirecting you to a rogue host to capture your credentials?
On the destination server, run the following command:
#ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key
2048 74:78:10:04:95:3e:c5:d9:b3:fb:50:f2:05:9b:87:bc /etc/ssh/ssh_host_rsa_key.pub (RSA)
If the fingerprint in the output matches the fingerprint in the prompt, you are connecting to the right host.
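The comparison itself can be sketched in shell; the fingerprint values below are the ones from the example above, and in practice you would paste in your own:

```shell
# Fingerprint shown in the ssh prompt (from the example above)
prompt_fp="74:78:10:04:95:3e:c5:d9:b3:fb:50:f2:05:9b:87:bc"
# Second field of the ssh-keygen -l output on the server
server_out="2048 74:78:10:04:95:3e:c5:d9:b3:fb:50:f2:05:9b:87:bc /etc/ssh/ssh_host_rsa_key.pub (RSA)"
server_fp=$(echo "$server_out" | awk '{print $2}')
if [ "$prompt_fp" = "$server_fp" ]; then
    echo "fingerprints match - safe to answer yes"
else
    echo "MISMATCH - possible man-in-the-middle, answer no"
fi
```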
Keshab's Blog
Friday, December 12, 2014
Tuesday, November 25, 2014
User access to database on mysql
Database creation
- Create a database named wordpress
mysql> create database wordpress;
User creation
- Create a user named wpadmin who can log in from any remote host
mysql> create user 'wpadmin'@'%' identified by 'XXXX';
- The % symbol means the user can connect remotely from any host. If you want to allow connections from a particular host only, use that hostname instead of %.
mysql> create user 'wpadmin'@'webserver1.example.com' identified by 'XXXX';
- The above query creates a user named wpadmin who can connect only from host webserver1.example.com
Granting user access to database
- Grant the select, insert, update, delete, create, drop, index, alter, and references privileges to user wpadmin on database wordpress from any remote host
mysql> grant select,insert,update,delete,create,drop,index,alter,references on wordpress.* to 'wpadmin'@'%';
- The above query allows user wpadmin to run select, insert, update, delete, create, drop, index, alter, and references operations on the wordpress database remotely
- Flush privileges to bring the changes into effect
mysql> flush privileges;
List user,host and password of all users on system
- List user,host and password of all users on system
mysql> select user,host,password from mysql.user;
List privilege granted to particular mysql user
- List privilege granted to user wpadmin
mysql> show grants for 'wpadmin'@'%';
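The statements above can also be collected into a single script and fed to the server in one shot. A minimal sketch, assuming the same database, user, and placeholder password as above (the script name is arbitrary):

```shell
# Collect the statements from this post into one script
cat > wordpress-grants.sql <<'EOF'
CREATE DATABASE wordpress;
CREATE USER 'wpadmin'@'%' IDENTIFIED BY 'XXXX';
GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,INDEX,ALTER,REFERENCES
    ON wordpress.* TO 'wpadmin'@'%';
FLUSH PRIVILEGES;
EOF
# Then run it against the server, e.g.:
#   mysql -u root -p < wordpress-grants.sql
```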
EAPS configuration on Extreme Switches
Why EAPS?
- The networking industry has relied on the Spanning Tree Protocol (STP) in large Layer 2 networks to provide a certain level of redundancy.
- However, STP has proven inadequate to provide the level of resiliency required for real-time and mission critical applications.
- Ethernet Automatic Protection Switching (EAPS) is Extreme Networks’ solution for fault-tolerant Layer 2 ring topologies.
- EAPS is responsible for a loop-free operation and a sub-second ring recovery.
- This revolutionary technology provides end users with a continuous operation usually only available in voice networks and does so with radical simplicity.
Configuration
- For our example, let's take the first domain, e1-domain, with the 192.168.1.1 switch as the master node and all other switches as transit nodes
Creating and configuring EAPS Domains
- Master node configuration
- Create EAPS domain
switch192-168-1-1 # create eaps e1-domain
- Configure switch as Master node
switch192-168-1-1 # configure eaps e1-domain mode master
- Configure port 1 as primary port
switch192-168-1-1 # configure eaps e1-domain primary port 1
- Configure port 2 as secondary port
switch192-168-1-1 # configure eaps e1-domain secondary port 2
- Transit node configuration
- Create EAPS domain
switch192-168-1-2 # create eaps e1-domain
- Configure switch as Transit node
switch192-168-1-2 # configure eaps e1-domain mode transit
- Configure port 1 as primary port
switch192-168-1-2 # configure eaps e1-domain primary port 1
- Configure port 2 as secondary port
switch192-168-1-2 # configure eaps e1-domain secondary port 2
Creating and configuring EAPS Control VLANs
- Create control VLAN
switch192-168-1-1 # create vlan control-1
- Tag control VLAN
switch192-168-1-1 # configure vlan control-1 tag 1
- Add ports to control VLAN
switch192-168-1-1 # configure vlan control-1 add ports 1,2 tagged
- Add control VLAN to EAPS domain
switch192-168-1-1 # configure eaps e1-domain add control vlan control-1
- Run the above commands on all switches that are part of the same ring/domain
Creating and configuring EAPS Shared Ports
- On master node sharing common link
- create a shared port
switch192-168-1-1 #create eaps shared-port 1
- on Master node, configure shared port as partner mode
switch192-168-1-1 #configure eaps shared-port 1 mode partner
- Give link ID to common link
switch192-168-1-1 #configure eaps shared-port 1 link-id 200
- On transit node sharing common link
- create a shared port
switch192-168-1-2 #create eaps shared-port 1
- on transit node, configure shared port as controller mode
switch192-168-1-2 #configure eaps shared-port 1 mode controller
- Give link ID to common link
switch192-168-1-2 #configure eaps shared-port 1 link-id 200
Enabling EAPS Protocol and EAPS Domain
- Enable the EAPS domain on all master and transit nodes
switch192-168-1-1 #enable eaps e1-domain
- Enable EAPS globally on switch
switch192-168-1-1 #enable eaps
Creating and configuring EAPS Protected VLANs
- Create protected VLAN
switch192-168-1-1 # create vlan voice-100
- Tag protected VLAN
switch192-168-1-1 # configure vlan voice-100 tag 100
- Add protected VLAN to EAPS domain
switch192-168-1-1 # configure eaps e1-domain add protected vlan voice-100
- Add ports to protected VLAN
switch192-168-1-1 # configure vlan voice-100 add ports 1,2 tagged
Verifying EAPS Configuration and Operating State
- Show eaps status
switch192-168-1-1 #show eaps
- Show eaps status for particular domain
switch192-168-1-1#show eaps e1-domain
Importing raw VM kvm image to Amazon AWS
Note: I am assuming that you have already configured ec2 tools, system path and environment variables
1. First, create a VM on your KVM host of an appropriate size (it does not have to match the size you will use on AWS; the smaller the image, the faster the upload and the less you will be charged).
In my case I created an 8 GB image with two partitions (/boot and /) on the same volume.
AWS requires / and /boot to be on one volume; otherwise the import won't work.
2. During installation, make sure the VM gets its IP address dynamically (DHCP).
3. Once installation is complete, disable and stop iptables
#chkconfig iptables off
#/etc/init.d/iptables stop
4. Create a file named authorized_keys inside the /root/.ssh directory and paste your public key into it.
5. Try to login from another host
#ssh -i <key> root@<IP_ADDRESS>
If you can log in without a password, you are good to proceed to the next step.
6. update your OS
#yum update -y
7. Once the update is done, reboot your system.
8. Install all necessary packages/software you need, make necessary changes and shutdown your system
9. Now upload your VM to AWS using ec2-import-instance command
#ec2-import-instance test.img -f RAW -p Linux -t m3.xlarge -a x86_64 -b vmbucket -s 50 -o <ACCESS_KEY> -w <SECRET_KEY> --region us-west-2
Note: Linux can be imported only to certain EC2 instance types:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites.html
Using the -s option you can specify the size (in GiB) of the EBS volume to create for the EC2 instance.
10. You can monitor the progress using
#ec2-describe-conversion-tasks import-i-ffrs53ha -O <ACCESS_KEY> -W <SECRET_KEY> --region us-west-2
11. Once the conversion is complete, it will create an EBS volume of the size and instance type you specified (it takes some time to create the volume even after ec2-describe-conversion-tasks reports completion, so please be patient).
12. Start your instance and you are good to go.
Restoring slave from master server on mysql 5.6
Currently, I am working on moving one giant, complicated webapp from our on-premises network to the Amazon AWS cloud. There are many reasons for the move; to mention a couple: redundancy, scalability, and high availability. For me the only drawbacks of moving to the cloud are security and not having complete control of the hardware and architecture.
Anyway, we are currently using MySQL server 5.1, so while moving to the AWS cloud I decided to go with MySQL server 5.6 because of its many enhancements, especially GTID replication and the MySQL utilities.
After getting the master server running successfully, replicating the slave from the master turned out not to be an easy task. Understanding in detail how MySQL replication works with GTIDs would have helped.
I was getting following error on my slave server.
2014-01-26 22:40:02 16119 [ERROR] Error reading packet from server: Found old binary log without GTIDs while looking for the oldest binary log that contains any GTID that is not in the given gtid set (server_errno=1236)
2014-01-26 22:40:02 16119 [ERROR] Slave I/O: Got fatal error 1236 from master when reading data from binary log: 'Found old binary log without GTIDs while looking for the oldest binary log that contains any GTID that is not in the given gtid set', Error_code: 1236
You need to know how GTID replication process works to resolve this.
How to set up a new slave
With the introduction of GTIDs in MySQL 5.6, binary log file names and positions are no longer needed if you are using GTID-based replication. Instead we need to know which GTIDs the master has executed and set them on the slave. MySQL keeps two global variables with GTID sets in them:
gtid_executed: a representation of the set of all transactions logged in the binary log
gtid_purged: a representation of the set of all transactions purged from the binary log
So the process to set up the slave from the master is:
1. Take a backup from the master and store the value of gtid_executed
2. Restore the backup on the slave and set gtid_purged to the master's gtid_executed value
The new mysqldump can do both of those tasks for us.
Let's go through an example of how to take a backup from the master and restore it on the slave to set up a new replication server.
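As a sketch (host names and file names here are placeholders, and this needs a running GTID-enabled master): mysqldump on a 5.6 master with GTIDs enabled writes the master's gtid_executed into the dump as a SET @@GLOBAL.GTID_PURGED statement, so restoring the dump on a fresh slave covers steps 1 and 2 at once.

```shell
# Take a full backup from the master (placeholder host and credentials)
mysqldump --all-databases --triggers --routines --events \
    -h master.example.com -u root -p > master-dump.sql
# The dump contains a line like:
#   SET @@GLOBAL.GTID_PURGED='9a511b7b-7059-11e2-9a24-08002762b8af:1-14';
# Restoring it on the slave therefore sets gtid_purged automatically:
mysql -h slave1.example.com -u root -p < master-dump.sql
```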
Restore a slave
Let's imagine that our slave server has fallen behind the master. This is the status we are going to get:
Slave_IO_Running: No
Slave_SQL_Running: Yes
First, we get the GTID_EXECUTED from the master:
master> show global variables like 'GTID_EXECUTED';
+---------------+-------------------------------------------+
| Variable_name | Value                                     |
+---------------+-------------------------------------------+
| gtid_executed | 9a511b7b-7059-11e2-9a24-08002762b8af:1-14 |
+---------------+-------------------------------------------+
And we set it on the slave:
slave1> set global GTID_PURGED="9a511b7b-7059-11e2-9a24-08002762b8af:1-14";
ERROR 1840 (HY000): GTID_PURGED can only be set when GTID_EXECUTED is empty.
GTID_EXECUTED must be empty before we can change GTID_PURGED manually, but we can't clear it with SET because it is a read-only variable. The only way to clear it is with reset master (yes, on a slave server):
slave1> reset master;
slave1> show global variables like 'GTID_EXECUTED';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| gtid_executed |       |
+---------------+-------+
slave1> set global GTID_PURGED="9a511b7b-7059-11e2-9a24-08002762b8af:1-14";
slave1> start slave io_thread;
slave1> show slave status\G
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Configure SSL cert on Amazon Web Services (AWS) ELB
Normally you create a CSR (certificate signing request) using openssl in x509 format and get your certificate back from the certificate signing authority in x509 format, but AWS ELB only supports PEM-encoded keys and certs. So in this post I am going to talk about how to convert a key and cert to PEM-encoded format and configure an ELB with them.
1. Convert your cert to PEM-encoded format
# openssl x509 -in example.com.cert -out cert.example.com.der -outform DER
# openssl x509 -in cert.example.com.der -inform DER -out cert.example.com.pem -outform PEM
Your PEM-encoded cert will be cert.example.com.pem. You can view its content using:
# cat cert.example.com.pem
2. Convert your key to PEM-encoded format
# openssl rsa -in example.com.key -out key.example.com.der -outform DER
# openssl rsa -in key.example.com.der -inform DER -out key.example.com.pem -outform PEM
Your PEM-encoded key will be key.example.com.pem. You can view its content using:
# cat key.example.com.pem
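Before uploading, it is worth checking that the key and cert actually belong together by comparing their RSA moduli. The sketch below generates a throwaway self-signed pair purely for illustration; in practice substitute your real key.example.com.pem and cert.example.com.pem:

```shell
# Throwaway self-signed pair, for illustration only
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=example.com" -keyout key.pem -out cert.pem 2>/dev/null
# The key and cert match when their moduli are identical
cert_mod=$(openssl x509 -noout -modulus -in cert.pem)
key_mod=$(openssl rsa -noout -modulus -in key.pem)
if [ "$cert_mod" = "$key_mod" ]; then
    echo "key and cert match"
else
    echo "key and cert DO NOT match"
fi
```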
3. Also convert the root/chain certificate to PEM-encoded format
# openssl x509 -in chain.example.com.cert -out chain.example.com.der -outform DER
# openssl x509 -in chain.example.com.der -inform DER -out chain.example.com.pem -outform PEM
Your PEM-encoded chain cert will be chain.example.com.pem. You can view its content using:
# cat chain.example.com.pem
4. Now login to AWS console and go to Listeners tab on ELB page.
5. Select HTTPS as the load balancer protocol and click Change next to SSL Certificate
6. Copy and paste content of pem encoded cert file
7. Follow same step for key and certificate chain cert
8. You can leave the instance protocol as HTTP (80) so that you don't have to configure the key and cert on all of your web servers; the ELB takes care of SSL termination.
User management using puppet
When you host an application on a cloud like AWS, for security reasons you may not want to use your company's LDAP, which contains private company information, and you may not want it running on hardware you don't own.
So I had to come up with a new way of managing users in the cloud, and I decided to go with Puppet.
We are going to manage passwordless users (authenticated by SSH key and cert) through Puppet.
First we will define a virtual user using puppet.
Suppose my module name is accounts.
My init.pp file should look something like the following.
# init.pp file
define accounts::virtual ($uid, $realname, $pass, $sshkeytype, $sshkey) {

  include accounts::params

  # Pull in values from accounts::params
  $homepath = $accounts::params::homepath
  $shell    = $accounts::params::shell

  # Create the user
  user { $title:
    ensure     => 'present',
    uid        => $uid,
    gid        => $title,
    shell      => $shell,
    home       => "${homepath}/${title}",
    comment    => $realname,
    password   => $pass,
    managehome => true,
    require    => Group[$title],
  }

  # Create a matching group
  group { $title:
    gid => $uid,
  }

  # Ensure the home directory exists with the right permissions
  file { "${homepath}/${title}":
    ensure  => directory,
    owner   => $title,
    group   => $title,
    mode    => '0700',
    require => [ User[$title], Group[$title] ],
  }

  # Ensure the .ssh directory exists with the right permissions
  file { "${homepath}/${title}/.ssh":
    ensure  => directory,
    owner   => $title,
    group   => $title,
    mode    => '0700',
    require => File["${homepath}/${title}"],
  }

  # Ensure the .bashrc file exists with the right permissions
  file { "${homepath}/${title}/.bashrc":
    ensure  => present,
    mode    => '0644',
    owner   => $title,
    group   => $title,
    source  => 'puppet:///modules/accounts/.bashrc',
    require => File["${homepath}/${title}"],
  }

  # Ensure the .bash_profile file exists with the right permissions
  file { "${homepath}/${title}/.bash_profile":
    ensure  => present,
    mode    => '0644',
    owner   => $title,
    group   => $title,
    source  => 'puppet:///modules/accounts/.bash_profile',
    require => File["${homepath}/${title}"],
  }

  # Ensure the .bash_logout file exists with the right permissions
  file { "${homepath}/${title}/.bash_logout":
    ensure  => present,
    mode    => '0644',
    owner   => $title,
    group   => $title,
    source  => 'puppet:///modules/accounts/.bash_logout',
    require => File["${homepath}/${title}"],
  }

  # Add the user's SSH key
  if ($sshkey != '') {
    ssh_authorized_key { $title:
      ensure => present,
      name   => $title,
      user   => $title,
      type   => $sshkeytype,
      key    => $sshkey,
    }
  }
}
In the above Puppet config, the .bashrc, .bash_logout, and .bash_profile resources are optional; drop them if you don't want to manage those files with Puppet.
Next, if you want to add users to a host named agent, create a file named agent.pp inside the manifests directory and declare the required users there (in Puppet terms, realizing the users).
# agent.pp file
class accounts::agent {
  accounts::virtual { 'keshab':
    uid        => 501,
    realname   => 'Keshab Budhathoky',
    pass       => '',
    sshkeytype => 'rsa',
    sshkey     => 'AAAAB3NzaC1yc2EAAAABIwAAAQEAukGeSEZJSn5GqN17oEkU95MPa+5KInJNx018LK3eeNDWhaixBJKEp9leFYZjATEMpPODt3L5whgcNuh4sNyRAQm0kEPhjtUC8n/dJK8ZJcfTVDK3gymhvzbe4LZpFOw+6l4AM8uhSzilk8Nq9bDhvmyOTGyR1NfPLjKnP9o9LWfSowRNMlU60SvLukQhqLkcqQX2ojKds+u0jT7LLZyFRjGeju6RQNHIMCX3ZVMHRfsFYIpSJuNttZAY8MBhk93ccgwCALQ0F+icQQ+jgyL3OeQ9Q7FNI/oOzUtJRNktgOZc9IqiBg6pJcIOrEWiS2iGweAQHJSgNIy/Miq234sgdf24tw34dew==',
  }
}
Add accounts class to the node agent.
# nodes.pp file
node "agent.example.com" { class { 'accounts::agent': } }
Run puppet agent on client
# puppet agent --test
You can use the --noop option for a dry run and --verbose for detailed output on the console.
Once the Puppet agent run is successful, you should see the user added in /etc/passwd, the home directory created at /home/keshab, and the RSA key at /home/keshab/.ssh/authorized_keys.
Once all of the above succeeds, you should be able to log in from any other host without a password, as long as you have the matching private key.
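A few quick sanity checks on the agent host after the run (shown as a sketch; these assume the example username keshab from above and must be run on the managed host):

```shell
getent passwd keshab                    # user entry exists
ls -ld /home/keshab /home/keshab/.ssh   # directories present, mode 0700
cat /home/keshab/.ssh/authorized_keys   # the key Puppet installed
```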