This error is pretty confusing at first glance:
err: Could not retrieve catalog from remote server: Error 400 on SERVER: Puppet::Parser::AST::Resource failed with error ArgumentError: Could not find declared class ABC at /etc/puppet/manifests/nodes.pp:14 on node XYZ
Basically, Puppet is complaining that it cannot find the module/class ABC. There are two things to check here:
1. Make sure you have defined the modulepath parameter per the documentation:
http://docs.puppetlabs.com/puppet/2.7/reference/modules_fundamentals.html
2. Make sure you have properly defined the class name within ABC/manifests/init.pp (this has happened to me a couple of times, and it turned out there was a typo in init.pp).
For example, an init.pp like the one below will produce the "Could not find declared class" error, because the class name does not match the module name:
# cat /etc/puppet/modules/ABC/manifests/init.pp
class ABCtypo {        # typo: the class should be named "ABC" to match the module
  exec { "blah ....":
  }
}
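A corrected init.pp, where the class name matches the module name, would look roughly like the sketch below (the exec resource is only a placeholder carried over from the example above; the path attribute is an assumption for illustration):
# cat /etc/puppet/modules/ABC/manifests/init.pp
class ABC {
  exec { "blah ....":
    path => "/bin:/usr/bin",
  }
}
With this in place, the class declaration in nodes.pp will resolve, provided the module sits under a directory listed in modulepath (e.g. modulepath = /etc/puppet/modules in puppet.conf).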
April 11, 2012 (Wednesday)
April 10, 2012 (Tuesday)
How to add a puppet client to a puppet master
Before a new puppet client is allowed to fetch manifests from the puppet server, the client's certificate has to be signed, and the command below will do the job:
[root@puppetclient ~]# puppet agent --server puppetmaster --test --waitforcert 30
The above command runs puppet in agent mode and connects to the server puppetmaster (note: the server name given here has to match the remote server's hostname, otherwise the agent will fail with "err: Could not retrieve catalog from remote server: hostname was not match with the server certificate"). The option "--test" runs the agent once in test mode, and "--waitforcert 30" makes the client wait 30 seconds for the server to sign its certificate. If 30 seconds pass and the client certificate is still not signed, the agent stops and exits.
On the server, the command below lists the certs pending approval:
root@puppetmaster:~# puppetca --list
puppetclient
(CC:2B:2B:9D:4A:EF:3F:15:EF:60:C7:73:C9:18:FF:D1)
root@puppetmaster:~# puppetca --sign puppetclient
notice: Signed certificate request for puppetclient
notice: Removing file Puppet::SSL::CertificateRequest puppetclient at '/var/lib/puppet/ssl/ca/requests/puppetclient'
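Once the certificate is signed, the agent can run without passing --server every time by setting the server in the client's puppet.conf (a minimal sketch; the section may be [agent] or [main] depending on your setup):
[root@puppetclient ~]# cat /etc/puppet/puppet.conf
[agent]
    server = puppetmaster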
AWS Storage Gateway: can it sit behind NAT?
Recently I was testing the AWS Storage Gateway. AWS Storage Gateway is a product combining an AWS console frontend and an ESXi-based Linux VM (the storage gateway VM). The AWS console is responsible for handling the user's instructions for the storage gateway and passing them to the storage VM (e.g. create an iSCSI target on the storage VM, take or restore a snapshot, etc.), while the ESXi-based storage VM is the host that actually carries out the instructions and handles the storage.
From observation, two TCP ports are listening on the storage gateway VM: 80 and 3260. Port 80 is a Java instance responsible for serving API calls (the user submits a request via the AWS console or AWS API, and AWS passes it to the API handler on port 80 of the storage VM). Port 3260 is the iSCSI target, which handles all iSCSI requests.
So I was asked whether it is possible to run this storage gateway VM behind NAT, i.e. putting the VM on a private network. With that objective, there are two possible scenarios:
- putting the VM behind NAT without any port mapping for ports 80 and 3260;
- putting the VM behind NAT with port mapping enabled (i.e. exposing and mapping ports 80 and 3260 on the WAN side to ports 80 and 3260 on the VM's private IP address).
Unfortunately, neither scenario works. In the first scenario, although the storage VM could be activated without problems, AWS is simply unable to communicate with the storage gateway VM, so user instructions are never passed to it at all: no iSCSI target could be created and no snapshot could be created or restored, as all instructions stayed pending and timed out. In the second scenario, we could activate the storage gateway VM, create volumes, and create and restore snapshots, but the iSCSI target did not work at all due to an iSCSI implementation restriction: the iSCSI initiator (the iSCSI guest) could discover the target via port 3260, but it simply could not log in to the resources.
To explain why it won't work behind NAT, we have to go through the process of connecting to (mapping) an iSCSI target.
Establishing an iSCSI connection is a two-step process. The first step is for the iSCSI initiator to scan and discover the remote resources on the iSCSI target. During the test we could perform this step successfully, as we did see the target during discovery. The issue shows up in the second step, when we try to log in to the iSCSI resource. The resource is presented as a combination of the on-host IP and the IQN, e.g. "10.1.1.1, iqn-name". The IP address here is the address on the host itself, i.e. the NATted private address. When the initiator (the guest VM) tries to map the remote resource, it will, due to the implementation, connect to the IP address presented, i.e. the private address. Since that address is private, an initiator on the public network cannot reach it, and connecting to the target simply times out.
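To make the two steps concrete, with the standard open-iscsi client the discovery and login look roughly like this (the IP address and IQN below are placeholders, not the gateway's real values):
# iscsiadm -m discovery -t sendtargets -p 10.1.1.1:3260
# iscsiadm -m node -T iqn.1997-05.com.amazon:example-volume -p 10.1.1.1:3260 --login
The discovery step returns the target's own (private) address, and the login step then connects to that same private address, which is exactly why it times out from outside the NAT.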
The only workaround we could think of for now is to create a VPN tunnel between the initiator and the storage gateway VM behind the NAT. That way the storage VM's iSCSI targets can be reached as if they were on the same LAN segment. However, this approach will definitely add extra overhead to the iSCSI I/O performance.
By the way, this storage appliance is designed to be accessed from on-premise devices (i.e. initiator and gateway naturally sit on the same network segment), which means the iSCSI traffic should never really need to cross the public network. Used that way, the appliance works just fine.
April 9, 2012 (Monday)
err: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
I am seeing the error below when trying to connect to a puppet master:
err: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
The reason for the above error is that the agent node previously talked to a different master, so its cached SSL data fails to validate against the new master's certificate. To solve the problem, clear the agent's data with the command below and retry:
find /var/lib/puppet -type f | xargs rm -rf
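If you prefer not to wipe everything under /var/lib/puppet, removing just the agent's SSL directory is usually enough (assuming the default ssldir; you can confirm it with "puppet agent --configprint ssldir"):
# rm -rf /var/lib/puppet/ssl
After that, run the agent again and sign its new certificate request on the master.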
Generate puppet server certificate
I am getting the error "err: Could not call sign: Could not find certificate request for puppetmaster" when I try to start up the puppet server. I have to generate an SSL cert for the puppet server before going on:
root@puppetmaster:/etc/puppet# puppet cert generate puppetmaster
notice: puppetmaster has a waiting certificate request
notice: Signed certificate request for puppetmaster
notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/ca/requests/puppetmaster.pem'
notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/certificate_requests/puppetmaster.pem'
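To confirm the certificate is now in place, the signed certs known to the CA can be listed (fingerprint output omitted here; on older Puppet versions the legacy form "puppet cert --list --all" does the same):
root@puppetmaster:/etc/puppet# puppet cert list --all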
April 8, 2012 (Sunday)
Ubuntu, E: Unable to locate package
I am trying to install packages on a newly installed Ubuntu box with apt, but somehow it fails to locate the package:
# apt-get install gcc
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package gcc
# aptitude search gcc
#
I am pretty sure the box can connect to the internet, so it is quite weird that it fails to locate the package.
In fact the issue is pretty straightforward: the local apt package database does not contain the package entries yet and just needs an update:
# apt-get update
Ign http://us.archive.ubuntu.com precise InRelease
Ign http://us.archive.ubuntu.com precise-updates InRelease
Ign http://us.archive.ubuntu.com precise-backports InRelease
Ign http://security.ubuntu.com precise-security InRelease
Get:1 http://us.archive.ubuntu.com precise Release.gpg [198 B]
Get:2 http://us.archive.ubuntu.com precise-updates Release.gpg [198 B]
Get:3 http://security.ubuntu.com precise-security Release.gpg [198 B]
...
...
Get:87 http://us.archive.ubuntu.com precise-backports/restricted Translation-en [14 B]
Get:88 http://us.archive.ubuntu.com precise-backports/universe Translation-en [8,555 B]
Fetched 24.8 MB in 32s (769 kB/s)
Reading package lists... Done
Now the issue is resolved. :-)
# apt-get install gcc
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
binutils cpp cpp-4.6 gcc-4.6 libc-dev-bin libc6-dev libgomp1 libmpc2 libmpfr4 libquadmath0 linux-libc-dev manpages-dev
Suggested packages:
...
...
April 6, 2012 (Friday)
Openstack Keystone (diablo): Got: ImportError('No module named MySQLdb',)
If you are seeing this on an Ubuntu/Debian box when starting keystone after migrating the DB from sqlite to MySQL, simply installing the associated Python MySQL library fixes the issue.
root@keystone:~/openstack-keystone-79a9fde# ERROR: Unable to load keystone-legacy-auth from configuration file /etc/keystone/keystone.conf.
Got: ImportError('No module named MySQLdb',)
root@keystone:~/openstack-keystone-79a9fde# apt-get install python-mysqldb
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
python-support
Suggested packages:
python-egenix-mxdatetime python-mysqldb-dbg
The following NEW packages will be installed:
python-mysqldb python-support
0 upgraded, 2 newly installed, 0 to remove and 58 not upgraded.
Need to get 109 kB of archives.
After this operation, 578 kB of additional disk space will be used.
Do you want to continue [Y/n]? y
Get:1 http://us.archive.ubuntu.com/ubuntu/ oneiric/main python-support all 1.0.13ubuntu1 [26.6 kB]
Get:2 http://us.archive.ubuntu.com/ubuntu/ oneiric/main python-mysqldb amd64 1.2.3-0ubuntu1 [82.5 kB]
Fetched 109 kB in 0s (184 kB/s)
Selecting previously deselected package python-support.
(Reading database ... 55750 files and directories currently installed.)
Unpacking python-support (from .../python-support_1.0.13ubuntu1_all.deb) ...
Selecting previously deselected package python-mysqldb.
Unpacking python-mysqldb (from .../python-mysqldb_1.2.3-0ubuntu1_amd64.deb) ...
Processing triggers for man-db ...
Setting up python-support (1.0.13ubuntu1) ...
Setting up python-mysqldb (1.2.3-0ubuntu1) ...
Processing triggers for python-support ...
root@keystone:~/openstack-keystone-79a9fde# keystone &
[1] 13882
root@keystone:~/openstack-keystone-79a9fde# Starting the RAX-KEY extension
Starting the Legacy Authentication component
Service API listening on 0.0.0.0:5000
Admin API listening on 0.0.0.0:35357
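To confirm the module is importable before (re)starting keystone, a quick sanity check can be run with the system Python interpreter (it should print the module version rather than an ImportError):
root@keystone:~/openstack-keystone-79a9fde# python -c 'import MySQLdb; print MySQLdb.__version__'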
April 5, 2012 (Thursday)
Disk benchmarking by bonnie++ in Linux
Bonnie++ may already come with your install; otherwise you can get it with "yum install bonnie++" (Fedora/CentOS) or "apt-get install bonnie++" (Ubuntu/Debian), or install it from source (link here).
Once bonnie++ is installed, you can start the test like this:
root@host:/tmp# bonnie++ -m test-box -u root -x 3 -d /tmp/ -s 1024 -r 512 | bon_csv2html > result.html
Using uid:0, gid:0.
Writing a byte at a time...Can't process: format_version,bonnie_version,name,file_size,io_chunk_size,putc,putc_cpu,put_block,put_block_cpu,rewrite,rewrite_cpu,getc,getc_cpu,get_block,get_block_cpu,seeks,seeks_cpu,num_files,max_size,min_size,num_dirs,file_chunk_size,seq_create,seq_create_cpu,seq_stat,seq_stat_cpu,seq_del,seq_del_cpu,ran_create,ran_create_cpu,ran_stat,ran_stat_cpu,ran_del,ran_del_cpu,putc_latency,put_block_latency,rewrite_latency,getc_latency,get_block_latency,seeks_latency,seq_create_latency,seq_stat_latency,seq_del_latency,ran_create_latency,ran_stat_latency,ran_del_latency
done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
-m test-box : -m controls the machine name presented in the report.
-u root : run the test with root privileges. If you want to use another user, just make sure that user has write permission to the directory given with -d.
-d : the directory to run the test in, which means the file system has to be mounted before the run.
-x 3 : run the test 3 times so that we can pick a fair value.
| bon_csv2html > result.html : bonnie++ only generates results in CSV format, which is not great for presentation. bon_csv2html does the dirty work of converting the CSV to HTML, and the redirection saves the output to a static file for later retrieval. If you prefer a text report instead of HTML, use bon_csv2txt.
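For example, a run that produces a plain-text report instead could look like the line below (the target directory and size values are only illustrative; bonnie++ generally wants the -s file size to be at least double the -r RAM value so that caching does not skew the results):
root@host:/data# bonnie++ -m test-box -u root -x 3 -d /data -s 2048 -r 1024 | bon_csv2txt > result.txt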
Cyberduck authentication failure against Openstack Swift with swauth
During testing of OpenStack Swift, I found that there aren't many GUI clients out there that support OpenStack Swift, and the best option I could go with is Cyberduck.
After downloading and installing it on my Windows test VM, everything appeared to be good and smooth, except that it kept saying "Login failed."
I am pretty sure my username (account:username) and API key (basically the user's password) are correct, yet I am still not able to get in. I did some tests with s3curl.pl and it authenticates successfully without any issue.
I tried googling, but there isn't any article explaining this; eventually I found an article on the Cyberduck trac page. It looks like I have to modify a Cyberduck setting to make it work with Swift's swauth authentication module.
As the article suggested, I added the line below to user.config and restarted Cyberduck. Now authentication works.
<setting name="cf.authentication.context" value="/auth/v1.0" />
April 4, 2012 (Wednesday)
AWS storage gateway: WORKING STORAGE NOT CONFIGURED
Continuing the test of the AWS Storage Gateway, I found that there is an implicit requirement on the storage VM: it has to be assigned a publicly accessible IP address, or at least an IP address that can be reached from the AWS network.
The logic behind this is that when someone manages the storage VM via the AWS web console, the instruction has to be passed over to the VM (possibly via port 80 of the VM, though I haven't confirmed that yet) over the public network. Whenever AWS fails to reach the VM, it cannot proceed with the instruction.
I tested this idea against an internal VM I was playing with yesterday. The VM sits on a private network (e.g. 192.168.x.x) with outgoing NAT enabled but no incoming NAT. I could proceed with the VM activation, but no volumes could be added from the AWS console. The newly added volumes kept showing "WORKING STORAGE NOT CONFIGURED" on the AWS console, which basically means they were not being created at all; normally, creating a new volume should not take long.
(Screenshot: AWS console showing the volume stuck at "WORKING STORAGE NOT CONFIGURED".)
Apart from the volume creation failure, I also tried adding a new virtual disk to the storage VM to see if AWS could detect it; the answer is no. So what I am pretty sure of is that AWS has to be able to talk to the VM, and the storage VM simply cannot be placed on an internal network segment that is not reachable from the public side.
April 3, 2012 (Tuesday)
"REST API Design Rulebook" by Mark Masse; O'Reilly Media
I am not a web application developer, so this book is not really a reference book for me. The reason I read it is that the use of REST APIs is getting ubiquitous, whether in web application development or even in system administration. So I wanted to check out what a REST API is and, more importantly, its best practices. This book gave me a lot of insight into REST APIs, and it is well written and easy to understand for a REST newbie like me.
The book contains 7 chapters, approximately 90 pages, so it is not a heavy book. I read it during my commute, and it took me about 3 or 4 days to finish. The meat of the book is chapters 2 to 6, five chapters in total. They cover a lot of things that are easily overlooked, such as best practices for designing API URIs, how REST interacts with HTTP, and how to handle resource representations in a better way, etc. All of this is insightful yet easily ignored. Apart from that, I really like the arrangement of the paragraphs and the reference materials; they are well organized and easily accessible. Overall, this book is really good for anyone with some computing background who wants to know more about REST APIs.
April 2, 2012 (Monday)
Openstack Swift with swauth: Account creation failed: 400 Bad Request, User creation failed: 400 Bad Request
I am getting another issue when creating a user account via swauth:
root@proxy:~/s3-curl# swauth-add-user -A https://127.0.0.1:8080/auth/v1 -K swauthkey -a system testuser testpassword
Account creation failed: 400 Bad Request
User creation failed: 400 Bad Request
So, again, I tried to google what could be done with this error, and unfortunately not much came up until I noticed that it is the admin URL causing the issue again. So I changed the admin URL from /auth/v1 to /auth/ to see how it goes:
root@proxy:~/s3-curl# swauth-add-user -A https://127.0.0.1:8080/auth/ -K swauthkey -a system testuser testpassword
root@proxy:~/s3-curl# swauth-list -A https://127.0.0.1:8080/auth/ -K swauthkey system
{"services": {"storage": {"default": "localhost", "localhost": "https://127.0.0.1:8080/v1/AUTH_e4afb2e7-d77f-4c7a-a431-01490ba5e982"}}, "account_id": "AUTH_e4afb2e7-d77f-4c7a-a431-01490ba5e982", "users": [{"name": "testuser"}]}
So it looks like the issue is fixed.
April 1, 2012 (Sunday)
Openstack Swift with swauth, getting "Account creation failed: 500 Server Error" when adding account
During testing of Swift with swauth, I was trying to add an account to the swauth database with swauth-add-account, but it failed with "Account creation failed: 500 Server Error":
root@proxy:~# swauth-add-account -A https://1.2.3.4:8080/auth -K swauthkey testgp
Account creation failed: 500 Server Error
With further checking, it looks like "allow_account_management = true" has to be added under the [app:proxy-server] section of proxy-server.conf, like this:
[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true
Once the line above is added to the configuration file and the proxy is restarted, that should fix the problem.
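To restart the proxy, assuming the standard swift-init tooling that ships with Swift:
root@proxy:~# swift-init proxy-server restart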
root@proxy:~s3-curl# swauth-add-account -A https://1.2.3.4:8080/auth -K swauthkey testgp
root@proxy:~s3-curl# swauth-list -A https://1.2.3.4:8080/auth -K swauthkey
{"accounts": [{"name": "system"}, {"name": "testgp"}]}
ForwardX11 option in ssh_config
To make the ssh client always enable X11 forwarding, one can add the option "ForwardX11 yes" to /etc/ssh/ssh_config or to one's own ssh config under the home directory (~/.ssh/config).
# grep "ForwardX11 yes" /etc/ssh/ssh_config
ForwardX11 yes
One may also know that we can tell the ssh client to turn on X11 forwarding from the command line via the -X or -Y option,
e.g # ssh -X user@1.2.3.4
So, if "ForwardX11 yes" is set in ssh_config, every ssh session initiated from your machine behaves as if "-X" were enabled. However, if in some situation you want to disable this feature and have no access to /etc/ssh/ssh_config, you can run:
# ssh -o ForwardX11=no user@1.2.3.4
And then X11 forwarding is disabled for that ssh session.
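If X11 forwarding is only wanted for certain machines rather than globally, a per-host block in ~/.ssh/config is a cleaner option (the host name below is just a placeholder):
Host trusted-server
    ForwardX11 yes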
# grep "ForwardX11 yes" /etc/ssh/ssh_config
ForwardX11 yes
One may also know that we can tell the ssh client to turn on X11 forwarding from ssh cli via -X or -Y option.
e.g # ssh -X user@1.2.3.4
So, if we have "ForwardX11 yes" be set on ssh_config, all ssh sessions initiated by your machine will be have option "-X" be enabled. However, in some situation you may want to disable this feature and you have no access to /etc/ssh/ssh_config, you can
# ssh -o ForwardX11=no user@1.2.3.4
And then it would stop doing X11 forwarding for that ssh session.