I recently discovered an XSS vulnerability in a blog that uses the WooThemes Canvas theme with Yoast breadcrumbs.
In wp-content/themes/canvas/includes/theme-plugins.php, line 219, the search widget does not escape HTML entities, resulting in potential code injection. For example, search for:
>"></strong><script>alert('injection');</script>
To fix this, the line should read:
$output .= bold_or_not($opt['searchprefix'].' "'.htmlentities(html_entity_decode(get_search_query())).'"');
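For illustration only - a shell sketch (not the actual PHP) of what that htmlentities call does to the payload above, using sed to escape the same metacharacters:

```shell
#!/bin/bash
# the search string from the example above
query=">\"></strong><script>alert('injection');</script>"

# escape & first so the entities we add are not double-escaped,
# then <, > and " - roughly what PHP's htmlentities does by default
escaped=$(printf '%s' "$query" | sed -e 's/&/\&amp;/g' \
    -e 's/</\&lt;/g' -e 's/>/\&gt;/g' -e 's/"/\&quot;/g')

echo "$escaped"
# -> &gt;&quot;&gt;&lt;/strong&gt;&lt;script&gt;alert('injection');&lt;/script&gt;
```

The browser renders the escaped output as harmless text instead of executing it.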
Friday, July 9, 2010
Wednesday, June 30, 2010
making the most of bash
To the modern, Windows-accustomed, ungeeky person, those who work in shells are often thought to hang out with folks like Neo and Morpheus. The truth is: we don't hang out with them, we just `talk` across terminals or occasionally say hi on multiplayer notepad (a.k.a. IRC). And it's during these encounters that we sometimes share little nuances that make working in the console just a little easier. I'm going to let you in on a few that could encourage you to don dark glasses and start muttering phrases such as 'Know Thyself'.
This assumes you understand stuff like cd, cat, redirection, piping, ls and so on.
- screen - create a virtual console that you can detach from. Use ctrl + a, ctrl + d to detach. 'screen -x' will reattach (and allow multiple attachments). Try 'screen -d -m sleep 10' to run the command (sleep 10) in a screen in detached mode. 'screen -x' will attach.
- wall / talk / mesg / write - wall broadcasts messages to all consoles, for system notices. mesg controls access to your terminal. talk and write are for sending messages to other users.
- du -sh - Count the size of a directory. the -h outputs in human readable format and -s just shows the total, not all files.
- df -h - shows disk usage
- watch - allows you to repeatedly execute a command. e.g. to watch network connections (10s refresh): watch -n 10 netstat -nt
- which - gives full path to a command in $PATH.
- CTRL + R and ! - these are bash history functions. '!somestr' will execute the most recent command that starts with 'somestr'. e.g. if the most recent command starting with 'ech' was 'echo hello', then '!ech' would re-execute it. CTRL + R allows interactive bash history searching.
- `command output substitution` - backquotes (the one under the tilde~) substitute the output of a command. e.g: echo "Hi, the date is `date`" would stick the date in there. 'vi `which myscript`' would edit '/full/path/to/myscript' if it is in your PATH.
- jobs, fg, bg, & and CTRL + Z - job control. You can run a process in the background by adding a & to the end of it (e.g. 'sleep 60 &'). To bring it to the foreground, get the job number using `jobs` and then run 'fg %N' (where N is the job number). It is now in the foreground. A CTRL + Z will send it back to the background in stopped mode, and 'bg %N' will make it run in the background again. Note that these jobs are attached to your current terminal session, so a job running in the background will end if you exit.
- looping and conditions - this can really make life easy for a sys admin, here's some examples:
[ -f /tmp/somefile ] && echo '/tmp/somefile exists'
for item in /tmp/*; do echo "Found the following in /tmp: $item"; done
while true; do echo "Infinite loop"; sleep 10; done
Here's a more complex one, using a few:
for db in $(mysql -B -e 'show databases' | sed -e '1d'); do
    for table in $(mysql "$db" -B -e 'show tables' | sed -e '1d'); do
        if mysql "$db" -B -e "show create table $table" | grep -qi 'ENGINE=InnoDB'; then
            echo "$table in $db is InnoDB engine"
        else
            echo "$table in $db is not InnoDB"
        fi
    done
done
- bash expansion (nice for writing scripts) - bash has some nice features, like conditionals and search-and-replace, that can save piping out to awk / sed (lower on resources). Some examples:
MYVAR=${SOMEVAR//search/replace} - MYVAR will be set to $SOMEVAR with all occurrences of 'search' replaced with 'replace'.
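A few more expansions along the same lines (a quick sketch - the variable names and values are just examples):

```shell
#!/bin/bash
SOMEVAR="one fish two fish"

echo "${SOMEVAR//fish/cat}"   # replace all matches: one cat two cat
echo "${SOMEVAR/fish/cat}"    # replace first match only: one cat two fish
echo "${#SOMEVAR}"            # string length: 17
echo "${SOMEVAR%% *}"         # strip longest trailing match: one
echo "${UNSET_VAR:-default}"  # fall back to 'default' if unset

FILE=/var/log/messages.1.gz
echo "${FILE##*/}"            # like basename: messages.1.gz
echo "${FILE%.gz}"            # strip the extension: /var/log/messages.1
```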
- in-file search and replace - this can be done with perl or sed, and can match on a regex. For example, if you want to change all occurrences of INSERTs to REPLACEs within a SQL file: perl -i.bak -pe 's/INSERT INTO/REPLACE INTO/g' /path/to/some.sql
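The sed equivalent, sketched on a throwaway file (assumes GNU sed, which supports in-place editing with a backup suffix):

```shell
#!/bin/bash
# demo file - in practice, point sed at your real .sql file
printf 'INSERT INTO users VALUES (1);\n' > /tmp/demo.sql

# edit in place, keeping the original as /tmp/demo.sql.bak
sed -i.bak 's/INSERT INTO/REPLACE INTO/g' /tmp/demo.sql

cat /tmp/demo.sql   # REPLACE INTO users VALUES (1);
```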
There's plenty more, so comments / suggestions are welcome (and I'll add some more in here as I remember).
Tuesday, June 15, 2010
Friday, May 14, 2010
The Pizza Cloud
Friday night - it's been a long week, and time to head home. Friday nights are takeout night, so it's a stop to pick up some quick 'n easy Italian cuisine on the way. Quick? Doesn't seem like it. For no apparent reason, everyone decided that Pizza would be Perfect. Phones ringing like crazy, staff on the verge of cracking. Angry customers walking out. Angrier customers cancelling orders on the phone. Two-hour delays. Not good at all.
As I stood staring into the mesmerising flames of the wood oven, I figured these guys should expand. Then again, I've never seen them this busy. They need dynamic auto-scaling. Wouldn't it be great if everything was as easy as 1, 2, IT?
Amazon launched their cloud service, http://aws.amazon.com, about two or three years ago, being one of the first to offer the product. Since then, many major players have hopped on.
Moving from traditional hosting services to cloud computing can be quite cost-effective for many businesses. The stream of pizza orders being cancelled was not business lost for the night. It was business lost. Bad service = non-returning customers. One good marketing operation might find new orders coming in, as you notice your Analytics trending upwards. It's ok - your site is running on decent HA infrastructure, with a couple of servers. It's a really big pizza oven, and you can deal with the orders. You expect it, it's Friday night!
People are saying "Wow. Really valuable site. With anchovies please" - it starts to hit the viral networks. Tweeps are tweeting and it's a real Buzz. "Do I have enough wood in my pizza oven?" you think to yourself as CPU gets consumed by hungry database queries. The oven almost seems to be shrinking as available memory is gobbled by the overworked application servers.
Right now would be a good time to 'click, click' and bring on 40% more servers. It's really that easy. AWS Management Console. Launch new instances. 4 more servers online. 10 more servers online. 20 more servers online. Bring it on. It's not only the physical resources that add value. The providers are offering a great portfolio of products, such as monitoring services (CloudWatch), Auto Scaling that automatically grows infrastructure as load increases and shrinks it as it becomes unnecessary, and a multitude of web appliances built by specialists that you can launch. As an example, we have recently started investigating aiCache as a scalable frontend SSL offloader. They have AMIs available to launch at your disposal and have put together a good solution - http://aicache.com/blog/last-man-standing-cnbc-soars-on-aicache-dyn/.
We recently had a successful launch at Finovate in San Francisco, and fearing the TSPOP (Too Small Pizza Oven Problem), we decided to migrate our infrastructure to AWS. Scale-up. Scale-down. No problem.
Read more about the launch at http://www.jemstep.com/blog/2010/05/jemstep-has-successful-demo-at-finovatespring-2010/
*Finally* - the pizzas are ready. They are hot, but it took way too long. Next time I'll rather order from jemstep.com.
Friday, January 15, 2010
Farmville, now in Linux
Installing any of the mainstream Linux distributions these days is a lot easier than a few years back, unless you want to get your hands dirty with a Gentoo "unsupported" stage 2 (which used to be the norm). The installation systems have really come a long way, and it doesn't take more than an hour or two to have a mostly functional < server | desktop | laptop > up and running.
Working in a relatively small startup company, we have found that installing / re-installing is a relatively common process to go through; with advances in hardware, virtualization has boomed over the last three years and you could find yourself installing a few VMs over and over for ... whatever purpose you see fit. Here are some methods that could streamline installation of multiple Linux systems.
1) OS Installation media - PXE-me-up
PXE ('pixie'), or Pre-boot eXecution Environment, is a specification created by Intel that basically gives us things like network boot / network installs. It can be set up relatively easily by a sys admin: all you need is a DHCP server, a TFTP server and OS installation images (PXE-enabled, of course). Check out http://menteb.org/tech/gentoo-pxe as one of many how-tos.
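As a rough sketch, the DHCP side of a PXE setup boils down to pointing clients at the TFTP server (the addresses and filename below are placeholders, not from the howto):

```
# /etc/dhcpd.conf fragment
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;
    next-server 192.168.1.10;    # IP of the TFTP server
    filename "pxelinux.0";       # boot loader fetched over TFTP
}
```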
2) Preconfigured installs with Kickstart (for the RH-type distros)
Kickstart is a Red Hat product developed to automate OS installations (and used in similar distributions, such as CentOS and Fedora). The configuration is saved in a file, served via any method accessible to the PXE kernel, such as HTTP. There is an extensive set of configuration parameters that can be used to tune your install, ranging from partitioning to software sets to install, with the manual available at http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-kickstart2.html. Use a scripting language (perl / python / php) to dynamically present kickstart files for even better dynamic installs. Post-installation scripts, set via the %post section, get the host up and running the way you need it to, complete with centralized authentication configuration (keep reading).
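A minimal kickstart sketch to give a feel for it - the mirror URL, password hash and package set are placeholders, not a working config:

```
# ks.cfg (minimal sketch)
install
url --url http://mirror.internal/centos/5.4/os/i386
lang en_US.UTF-8
keyboard us
rootpw --iscrypted $1$replaceme$withrealhash
clearpart --all --initlabel
autopart
reboot

%packages
@base

%post
# site-specific setup, e.g. pull down the LDAP client config
wget -O /etc/ldap.conf http://mirror.internal/config/ldap.conf
```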
3) Maintenance: local software repositories - Mirror mirror in the server room, which is the fastest of them all?
The joys of sub-standard infrastructure. While most of you probably live in a country where bandwidth flows more freely than water through the Amazon, we do not all share these small pleasures. Here in sunny South Africa, bandwidth is slow. And expensive. With as few as 10 hosts of the same distribution (and major version, perhaps), it would probably be a good idea for the average small to medium business to have a local repository of software updates, mirrored from upstream providers. The initial sync can be painful. Right now, we have a mirror of the latest releases of CentOS (5.4) and Fedora (12), which is a little over 100GB. Once it's set up, a daily rsync will ensure that you always have the latest updates at your fingertips. Until the next release, of course.
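The nightly sync can be as simple as a cron job (the upstream mirror host and local path below are hypothetical):

```
# /etc/cron.d/mirror-sync
30 2 * * * root rsync -avz --delete rsync://mirror.example.com/centos/5.4/ /srv/mirror/centos/5.4/
```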
4) Package management
I love having the latest and greatest software versions on test systems - new features. Software crashes. Really broken dependency trees. Fun fun fun. Why? Source installations - the bane of the inexperienced system administrator. Most distributions have package management for a reason - it ensures the integrity of software. It's been tested. Many hours are spent by the distribution developers ensuring that everything fits together nicely for you, so you don't have to worry about rebuilding 40 different libraries manually. Stick to yum, up2date, apt and portage if you can. And if you do need something that's not available, it's not that difficult to set up your own repository (rpmbuild + createrepo for RH distributions are great tools to familiarise yourself with).
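Once you've built packages with rpmbuild and run createrepo over the directory, clients only need a small repo definition to pick them up (the hostname and path here are hypothetical):

```
# /etc/yum.repos.d/local.repo
[local]
name=Local packages
baseurl=http://mirror.internal/repo/local/
enabled=1
gpgcheck=0
```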
Lastly...
5) Centralized authentication
My first home network: one Gentoo "server" *cough* Celeron *cough* (which is still running today, six years later) and a Windows 2000 notebook. And centralized authentication with LDAP. And a Samba PDC. Those were the days, plenty of time to spend. This is really only useful if there are more than 2 or 3 people needing authorized access to 10+ hosts and a handful of services (web applications, mail, source).
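On the client side, LDAP lookups hook in via nsswitch - a fragment like this (assuming nss_ldap is installed and configured) tells the resolver to check local files first, then the directory:

```
# /etc/nsswitch.conf fragment
passwd: files ldap
group:  files ldap
shadow: files ldap
```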
One password to authenticate all, one password for them all. One password to authorize all and in the network bind them.