Monthly Archives: August 2011

Quick and easy text size adjustment with jQuery


Lately I have been working on a site that deals with quite a bit of text. When choosing the default text size for a web site, as a developer I try to strike a happy medium between what lays out well on the page and what is easily readable. It would be nice to give the user the ability to adjust the font size to their liking.

As it turns out, this is dead simple to accomplish, and with the jQuery UI slider it even looks pretty good. First we create an empty div that will contain our slider element and a target div that contains some test text. Then we add the jQuery code to adjust the font size as the slider is moved.

Font size:
<span id="fontSz">100%</span>&nbsp;&nbsp;
<div id="fontSlider" style="width: 60%; display: inline-block;"></div>
<div id="adjustableText">
 Some test text in here
</div>
<script type="text/javascript">
 $(document).ready(function(){
  $('#fontSlider').slider({
   range: "min",
   min: 50,    // allow shrinking the text to half size
   max: 200,   // allow growing the text to double size
   value: 100, // start at the default size
   slide: function(event, ui){
    // Scale the text and update the percentage label as the handle moves
    $('#adjustableText').css('font-size', ui.value + '%');
    $('#fontSz').html(ui.value + '%');
   }
  });
 });
</script>

That pretty much covers the code. Basically we are allowing the user to scale the text in the adjustableText div from 50% up to 200% and the font will adjust as the slider is moved. You can see this in action here.
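For completeness, the snippet assumes jQuery and jQuery UI (with the slider widget and a theme stylesheet) are already loaded on the page; the Google CDN builds below are just the versions current at the time of writing:

<link rel="stylesheet" type="text/css" href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.14/themes/smoothness/jquery-ui.css" />
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js"></script>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.14/jquery-ui.min.js"></script>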

It has been a while since I have posted anything on jQuery so this was a fun little project.

Using Postfix to send emails through Amazon SES


Of all the Amazon Web Services I use, the Simple Email Service would have to be the one I use the most. ColdFusion allows me to easily create a component that sends emails programmatically; however, it would be much nicer to use the CFMAIL tag and be done with it. As it turns out, Amazon SES lets you send a raw email, so you can run Postfix to relay the message through Amazon SES using a perl script.

I found a good tutorial on getting this configured here. I did have an issue getting perl to find the SES.pm file, but this post details how to work around that issue. The great thing about handling it this way is other applications on the server can send messages through the gateway, not just ColdFusion applications.
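For the sake of illustration, the approach in that tutorial boils down to two pieces: a pipe transport in Postfix's master.cf that hands each raw message to Amazon's ses-send-email.pl script, and a line in main.cf routing outbound mail through it. Roughly like the following; the script path, credentials file, and endpoint here are assumptions that depend on where you unpacked the SES scripts, so treat this as a sketch rather than a drop-in config:

# /etc/postfix/master.cf -- define a transport that pipes mail to the SES script
# (paths are assumptions; point them at your own SES script install)
aws-email unix - n n - - pipe
 flags=R user=mail argv=/opt/amazon/ses-send-email.pl -r -k /opt/amazon/aws-credentials -e https://email.us-east-1.amazonaws.com -f ${sender} ${recipient}

# /etc/postfix/main.cf -- route all outbound mail through the new transport
default_transport = aws-email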

Improved connector for Nginx proxy to Railo


Edit: This post is outdated. Please see this post: https://kisdigital.wordpress.com/2013/03/04/my-final-nginxrailo-connector/

I have been working with Nginx quite a bit in the last week and have had a little time to fine-tune my configuration. Here is my stock Railo configuration. I have saved the settings to their own file and include the connector in each separate server configuration that requires Railo. This also remaps the standard Railo administrator to a more secure location and optionally sets basic authentication.

 # /etc/nginx/railo_connector.conf
 # Block direct requests to the default Railo admin location
 if ($request_uri ~* ^/railo-context) {
  return 404;
 }

 # Hide the Railo Administrator and optionally lock down with a password
 location ~ ^/hardtoguesslocation/(.*)$ {
  #auth_basic $host;
  #auth_basic_user_file /path/to/htpasswd;
  if ($request_uri ~ ^/railo-context/admin) {
   return 404;
  }
  location ~ ^/hardtoguesslocation/ {
   rewrite ^/hardtoguesslocation/(.*)$ /railo-context/admin/$1 last;
  }
 }

 # Main Railo proxy handler
 location ~ \.(cfm|cfml|cfc|jsp|cfr)(.*)$ {
  proxy_pass http://127.0.0.1:8888;
  proxy_redirect off;
  proxy_set_header Host $host;
  proxy_set_header X-Forwarded-Host $host;
  proxy_set_header X-Forwarded-Server $host;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Real-IP $remote_addr;
 }
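If you do uncomment the auth_basic lines above, the password file they reference can be generated with the htpasswd utility from the httpd-tools (or apache2-utils) package; the path just has to match auth_basic_user_file and the username is whatever you like:

# -c creates the file; drop the flag when adding additional users
htpasswd -c /path/to/htpasswd youradminuser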

Also, Nginx lets you use variables in your server configuration, which makes it easy to create a “catch all” virtual host. You can quickly add a new website just by adding it to your server.xml in Railo, with no additional configuration required unless you need domain-specific rewrites and the like. Here is an example server configuration with a default virtual host and a separate domain configured:

server {
 #Catchall vhost
 listen    80; ## listen for ipv4
 server_name _;
 root /var/www/$host;
 index index.cfm;
 access_log  /var/logs/nginx/$host-access.log;
 # Do not log missing favicon.ico errors
 location = /favicon.ico { access_log off; log_not_found off; }
 # Do not serve any .hidden files
 location ~ /\. { access_log off; log_not_found off; deny all; }
 include /etc/nginx/railo_connector.conf;
# End of catch-all Server Configuration
}

server {
 #A domain with custom handling
 listen    80; ## listen for ipv4
 server_name mydomain.com www.mydomain.com;
 root /var/www/mydomain.com;
 index index.cfm;
 access_log  /var/logs/nginx/mydomain.com-access.log;
 # Do not log missing favicon.ico errors
 location = /favicon.ico { access_log off; log_not_found off; }
 # Do not serve any .hidden files
 location ~ /\. { access_log off; log_not_found off; deny all; }
 # Handle FW/1 style SES urls (i.e. http://domain.com/main/default/key/value)
 location / {
  try_files $uri $uri/ @ses;
 }
 location @ses {
  # Anything that is not a real file gets routed through index.cfm
  rewrite ^/(.*)$ /index.cfm/$1 last;
 }
 include /etc/nginx/railo_connector.conf;
# End of custom Server Configuration
}

If you want the X-Real-IP reported correctly on the Railo server you will also need to add the RemoteIP valve to your Tomcat configuration.

Overall Nginx makes a very nice front-end to Railo and as always I welcome any comments or suggestions to make it better.

Retain remote address when proxying Railo with Nginx


I have been taking a long, hard look at Nginx recently.  First I played around with it as a load balancer, and the ease of getting it set up really got my attention.  After running my cluster for a while I needed something else to play with, so I decided to remove Apache from my standard server configuration and add Nginx.

I quickly had everything set up and running, but I noticed the remote address in the CGI scope was coming back as 127.0.0.1, which is not exactly what I was looking for.  Looking at the proxy settings in my Nginx config I had set all the right proxy headers, but Tomcat was ignoring them.  A few quick searches showed this was a known issue, and that you could recover the real IP address by examining the headers and pulling the appropriate field.  That is great, however I am lazy and would prefer it happen automagically.

So I decided to do a little more searching.  As it turns out, Tomcat 6.0.24 added a valve that translates the forwarded client address headers and lets Railo use the real address without having to do any header-fu.  All you have to do is add one line to your server.xml under the <Engine> container:

<Valve className="org.apache.catalina.valves.RemoteIpValve"  />

Done.
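By default the valve translates the X-Forwarded-For header. If your proxy only sends X-Real-IP, the valve can be pointed at that header instead; a minimal sketch, assuming Nginx runs on the same box as Tomcat:

<!-- read the client address from X-Real-IP and trust only the local proxy -->
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       remoteIpHeader="X-Real-IP"
       internalProxies="127\.0\.0\.1" />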

Dumping out the CGI scope, you should now see the remote address of the user instead of the address of the proxy server. Hopefully this will save someone some time, because it sure drove me crazy for a long time last night.
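A quick way to verify from any Railo or ACF page:

<!--- should now print the visitor's address instead of 127.0.0.1 --->
<cfoutput>#cgi.remote_addr#</cfoutput>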

Installing yasm on Amazon Linux


I am currently working on a project that requires me to build ffmpeg locally on an Amazon Linux instance.  A repo search turned up nasm, but ffmpeg didn't like it at compile time.  Here is how to get yasm installed.  I am documenting this because I will probably need it again.  It is assumed you have already installed git-core.

# Grab the yasm source and build it with the standard autotools dance
git clone git://github.com/yasm/yasm.git
cd yasm
./autogen.sh
./configure --prefix=/usr
make
sudo make install
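To confirm the build landed on your path before pointing ffmpeg's configure at it:

yasm --version

If that prints a version string, ffmpeg's configure should pick it up without complaint.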

Git is a handy little tool.

Installing s3fs on RHEL/CentOS


Lately I have been doing a lot of work with AWS t1.micro instances running Amazon Linux, which seems to be based on RHEL/CentOS.  Both Railo and ACF do a good job of interacting with Amazon S3 storage, which definitely makes our jobs as developers easier, but what if you wanted to mount your S3 storage locally so you have access to your files at the system level and can actually work with them?  Luckily there is an open-source s3fs project that allows you to do just that.

At the time of this writing, the current file release is s3fs-1.59.tar.gz.  The unfortunate thing is, s3fs requires Fuse 2.8.4, while the newest version available in the package repos is Fuse 2.8.3.  The first step is to download the newer version of Fuse onto the server.

wget "http://downloads.sourceforge.net/project/fuse/fuse-2.X/2.8.4/fuse-2.8.4.tar.gz?r=&ts=1299709935&use_mirror=cdnetworks-us-1"

Once the download is completed, extract it:

tar -xzvf fuse-2.8.4.tar.gz
cd fuse-2.8.4

If you are still on a stock install of Amazon Linux, at this point we will need some tools to get everything configured and compiled.

sudo yum groupinstall "Development Tools"

This installs the build tools we will need in a moment.  However, we will also need a few more packages to get s3fs to compile.  We might as well get them now:

sudo yum install curl-devel libxml2-devel openssl-devel mailcap

We should still be in the fuse-2.8.4 directory, so now it is time to configure and compile Fuse.

./configure --prefix=/usr
make
sudo make install
sudo ldconfig
export PKG_CONFIG_PATH=/usr/lib/pkgconfig
pkg-config --modversion fuse

If everything went as planned, pkg-config should return 2.8.4.

Next we need to download and install s3fs.  Get and extract the archive, then configure, compile, and install it:

cd
wget http://s3fs.googlecode.com/files/s3fs-1.59.tar.gz
tar -xzvf s3fs-1.59.tar.gz
cd s3fs-1.59
./configure --prefix=/usr
make
sudo make install

The installation should now be in working order.  The next step is to decide how you would like to create your password file for s3fs.  You can either create a site-wide password file at /etc/passwd-s3fs or one just for your user account at ~/.passwd-s3fs.  The files are required to be secure, so if you go with the system-wide file be sure to chmod 640 /etc/passwd-s3fs, or if you use your user account, chmod 600 ~/.passwd-s3fs.  The format for the files is the standard [AccessKey]:[SecretKey].
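For example, the system-wide version can be created in one shot; the key values here are obviously placeholders:

# one line per key pair, in AccessKey:SecretKey format
sudo sh -c 'echo "YOURACCESSKEY:YOURSECRETKEY" > /etc/passwd-s3fs'
sudo chmod 640 /etc/passwd-s3fs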

Finally, let's map the drive to a local directory.  In my home directory I created a folder named s3storage to act as the mount point.  We create the mount with:

s3fs [bucketname] ~/s3storage -o default_acl=public-read
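When you are done with it, the mount releases like any other FUSE filesystem:

fusermount -u ~/s3storage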

I have only set this up on one machine so I do not have the install down completely yet, but I was able to get it up and running successfully.  All the steps above are more or less from memory, so I apologize for any hazy steps.  I will correct as needed.