High(er) Performance Single Server Hosting

written by lars on Friday August 6, 2010

This post talks a bit about the hosting environment I built for my personal site. Please feel free to ask me questions about the configuration below and I will do my best to answer them. Now, let's light this candle.

In the past, lnoldan.com has been a testbed I've used for getting more familiar with technologies from my day job at Six Feet Up. I recently re-deployed the site on Plone 4 Beta 3 with RelStorage, using some of the technologies we deploy daily, to create the best hosting environment possible for this site. The attached diagram shows the pieces I am using and how they are connected to one another to host the site.

[Diagram of the hosting stack for lnoldan.com: nginx, varnish, a second nginx, HAProxy, the Zope instances, RelStorage/PostgreSQL, and memcached]

At the head of the hosting chain we have the nginx web server. This is configured primarily to manage virtual hosting between lnoldan.com and my soon-to-be deprecated w9zeb.org WordPress site. It also manages SSL for encrypted site access and authentication.

If the web request is for w9zeb.org, it is passed to a FastCGI process, which in turn connects to the MySQL database on the back end. Nginx isn't able to execute PHP by itself, which is the reason for the FastCGI process. I hope to move the content from that WordPress blog into Plone in the near future.
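
To give a rough idea of the front end, here is a sketch of what that first nginx configuration can look like. The ports, paths, and server names below are illustrative placeholders rather than my exact settings, and the SSL server block is left out for brevity.

    # The two front-end virtual hosts, inside the usual http { } block of nginx.conf

    # lnoldan.com: hand everything to varnish on a local port
    server {
        listen 80;
        server_name lnoldan.com www.lnoldan.com;
        location / {
            proxy_pass http://127.0.0.1:6081;    # varnish
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }

    # w9zeb.org: WordPress via an external php-cgi (FastCGI) process,
    # which connects to MySQL on its own
    server {
        listen 80;
        server_name w9zeb.org www.w9zeb.org;
        root /usr/local/www/w9zeb;
        index index.php;
        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;         # php-cgi listener
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }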

If the web request is for lnoldan.com, it is passed to my varnish caching server. If varnish has the request in cache, it serves the request directly, which is very fast. If varnish does not have the request cached, it goes to its back end, which in this case is another running nginx process. This instance of nginx is where I have the VirtualHostMonster rewriting configured for Plone. The advantage here is that the URLs being processed by varnish are identical to the URLs the end user sees in his or her browser address bar.
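
For illustration, the VirtualHostMonster rewrite in that second nginx instance looks roughly like the sketch below. The Plone site id, host names, and ports are placeholders; varnish simply points at this nginx instance as its back end.

    # Second nginx instance: varnish's back end. Clean URLs are rewritten
    # into VirtualHostMonster form and handed to HAProxy.
    server {
        listen 127.0.0.1:8080;
        server_name lnoldan.com;
        location / {
            rewrite ^/(.*)$ /VirtualHostBase/http/lnoldan.com:80/lnoldan/VirtualHostRoot/$1 break;
            proxy_pass http://127.0.0.1:8100;    # HAProxy
        }
    }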

The VirtualHostMonster proxy pass sends traffic to the HAProxy load balancer, which sets a cookie to help determine which back end any given session should be directed to on future requests. HAProxy balances traffic across four Zope 2.12 instances. These instances are configured to use RelStorage with PostgreSQL as the back-end datastore for the ZODB. I have also configured RelStorage to use memcached.
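
The HAProxy side of this, with cookie-based affinity across the four Zope instances, can be sketched as follows. Only the relevant listen section is shown, and the listen address, instance ports, and cookie values are made up for the example.

    listen zope_cluster 127.0.0.1:8100
        mode http
        balance roundrobin
        # insert a cookie so a returning session lands on the same instance
        cookie SERVERID insert indirect nocache
        server zope1 127.0.0.1:8081 cookie z1 check
        server zope2 127.0.0.1:8082 cookie z2 check
        server zope3 127.0.0.1:8083 cookie z3 check
        server zope4 127.0.0.1:8084 cookie z4 check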

The method currently used by RelStorage to interact with memcached isn't quite ideal, but it's functional. When a request comes in, the Zope instances first query memcached. If the data is there, it is served from the cache. If memcached does not have the data, RelStorage sends the query to PostgreSQL. Once the response is received from PostgreSQL, RelStorage stores it in memcached so it is available for future requests. At the same time, the request is processed by Zope and sent upstream toward the end user.
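
In the ZODB section of zope.conf (or via the rel-storage option of plone.recipe.zope2instance in the buildout), the RelStorage plus memcached wiring ends up looking roughly like this. The DSN, database credentials, and memcached address are placeholders.

    %import relstorage
    <zodb_db main>
        mount-point /
        <relstorage>
            # ask memcached first, fall back to PostgreSQL on a miss
            cache-servers 127.0.0.1:11211
            cache-module-name memcache
            <postgresql>
                dsn dbname='lnoldan' user='plone' host='127.0.0.1' password='secret'
            </postgresql>
        </relstorage>
    </zodb_db>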

Hopefully this makes sense and provides ideas to help solve your hosting needs as well. The specifications of this Virtual Private Server are as follows: I am running FreeBSD 8.0-RELEASE (32-bit) with 1.5 GB of memory, 2 virtual CPUs, and 45 GB of disk space. I am using supervisord with memmon to manage the four running Zope instances. Memmon is configured to allow no more than 150 MB of memory per instance before restarting it. All other services are configured to run via the FreeBSD rc system.
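
The supervisord and memmon arrangement is roughly the following, showing only the relevant parts of supervisord.conf. The paths and program names are placeholders, and memmon here is the event listener that ships with the superlance package.

    [program:zope1]
    command = /usr/local/www/lnoldan/bin/instance1 console
    ; zope2 through zope4 follow the same pattern

    [eventlistener:memmon]
    ; restart any Zope instance that grows past 150 MB
    command = /usr/local/www/lnoldan/bin/memmon -p zope1=150MB -p zope2=150MB -p zope3=150MB -p zope4=150MB
    events = TICK_60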

If you would like to see any of the configuration files in this hosting stack, including the buildout for the site, you can email me at lars@sixfeetup.com.

Cheers!

 
Posted by on Aug 06, 2010 08:52 AM
Can you expand on this: "This instance of nginx is where I have the VirtualHostMonster configured for Plone. The advantage here is that the URLs being processed by varnish are identical to the URLs the end user sees in his or her browser address bar." Why is this an advantage? Why did you not do the rewrites in Varnish or the nginx in front of Varnish? My main concern with having another nginx would be that it requires another open file descriptor for each request. With high concurrency, you risk running out of descriptors, at least unless you up the operating system's standard limits. Perhaps less of an issue on FreeBSD than Linux, though. Martin
Posted by on Aug 06, 2010 10:33 AM
Hi Martin, The reason we don't do the rewrites in Varnish is the complexity of the Varnish configuration. You have to approximately double the length of the Varnish configuration for each virtual domain you want to host, which becomes unmanageable. Regarding the second instance of nginx that puts the VHM behind Varnish: in our experience VHM purging works very poorly, if at all. Purging the standard URLs rather than the VHM addresses dramatically simplifies things and makes them far more reliable. As for hitting the file descriptor limits: on a high-concurrency system I would expect the vast majority of requests to be served by the front nginx and Varnish only. Due to the performance limitations in Zope, at 2 threads per instance it is highly unlikely the default max file descriptor limit of 12,328 would be reached. Thanks! Lars
Posted by on Aug 06, 2010 12:35 PM
Hi Lars, Well, we hit that limit. ;-) I'm not sure why you're seeing purging not working with VHM. I assume you're using CacheFu? At least in plone.app.caching/plone.cachepurging, it should work. At the end of the day, Zope needs to construct a PURGE request with the same path as the inbound request Varnish saw. Zope actually has enough information to do this (presuming no other path transform in-between), right there in the request. CacheFu I think does some convoluted string transform, but even there it should work. Martin
Posted by on Aug 11, 2010 06:02 PM
Do you really need the session affinity from HAProxy? It seems like you could just use nginx's built-in load balancer and avoid the extra intermediate step, unless you need session affinity (which could also be had by mounting the session storage from ZEO). Although I admit, I don't see the purpose of the extra nginx behind Varnish either.