TheHardwareDilemma-ES

Page history last edited by macagua 13 years, 2 months ago

The Hardware Dilemma

So you're not on the tech bandwagon and you want to run the system on real hardware instead of running it in a cloud / virtual machine / blah. Don't be embarrassed - there are plenty of reasons to do this* and if you've done it well, you might even brag about it! Whether you are deploying in the cloud or in the server room, you have to decide how to distribute the software across your deployment to get the best return on your investment when it comes to hardware utilization.

 

Ideally, we could all sit in our comfy desk chairs, think up all the software we want to use and all the beautiful high-availability plans, and then go pick up all the hardware we need to fulfill those dreams, yada yada yada, at the grocery store. But in reality, many of us are constrained by our equipment budgets and have to start with the hardware and fit the software to it. For this discussion, I'll assume that's the case, but feel free to adapt this simple technique to the rest of your planning needs.

 

The easiest way to optimize resources is to draw a simple diagram, putting your potential/desired software stack on the diagram like so:

[Diagram: the software stack (e.g. Squid, Apache, Zope) laid out across the available boxes]

And just like that, you can see why Squid could be a leak in your system if you only have one box! If you have two boxes lying around, you may want to consider putting Zope and Apache on different boxes to get better control over CPU usage. Grab a piece of paper and play with it as long as you need in the comfort of your own home!

Keep in mind that if your ZEO server and your Zope instances are on the same box, you will reduce your disk activity because you no longer need a local database cache. This is not necessarily possible for every installation, especially the largest ones, but small and medium-sized installations should take advantage of this scheme.
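For reference, the ZEO client cache mentioned above is configured in the `<zeoclient>` section of zope.conf. A rough sketch of the relevant knobs - the server address and cache size here are illustrative assumptions, not recommendations:

```
<zodb_db main>
  mount-point /
  <zeoclient>
    # Where the ZEO server lives; "localhost" when colocated on the same box
    server localhost:8100
    storage 1
    name zeostorage
    # The client cache matters most when the ZEO server is on another box;
    # colocated, you can often keep it modest and save the disk churn
    cache-size 64MB
  </zeoclient>
</zodb_db>
```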

 

Regarding requests per second

There is also the question of how many requests/second you need to support.  This gives you an idea of how many boxes, real or virtual, you need to handle the load. This is a harder question, and I personally think most benchmarks are impossible to compare/trust/believe because they aren't necessarily measuring "real" page renders. A "real" page render is not a gif or js or css file and it is NOT cached - it's a moderately complex page render. Remember, zope should only be serving up the pages in your system that actually need zope to render them. If you get your numbers from a consistent source you are better off, but everything varies so much that it's just about impossible to get a realistic figure. If you have measurement tools set up - yay - use them. For those just getting started, here's a quick and dirty guesstimation technique that has worked really well for me in the past. It definitely overestimates, but at least you shouldn't get any nasty surprises this way.

 

First let's think about what types of pages we are rendering and classify them according to complexity. If you are doing any custom development, chances are you are generally doing moderately complex page renders (very complex if you have crappy coders). Edits are very complex page renders. Searches are anywhere from moderately complex to disgustingly complex. Don't even think about counting reporting. In general, views are simple. 

 

Now think about what percentage of each you will have. In a tough app that gets a decent number of writes, you may be looking at 40% very complex, 30% moderately complex, and 30% easy renders. With a Plone 2.5 baseline (theoretically subtracting 30% for each major release after that, if you follow the trend lines) you can safely assume 1 second to render a very complex view, .5 seconds for a moderately complex one, and .25 seconds for a simple view. If you had 100 requests, then, it would take 40 + 15 + 7.5 = 62.5 seconds of server rendering time, or .625 seconds of rendering per request - about 1.6 requests/second per zope instance. Seems accurate for the 2.x series, no? (Note: these numbers are artificially high because every system is different - don't get mad if you get more or less.)
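The mix above boils down to a weighted average. A quick sketch of the arithmetic - the percentages and per-view timings are the Plone 2.5 assumptions from this article, not measured values:

```python
# Back-of-the-envelope render-time estimate from the request mix above.
RENDER_TIMES = {          # seconds per render (article's Plone 2.5 baseline)
    "very_complex": 1.0,
    "moderate": 0.5,
    "simple": 0.25,
}
MIX = {                   # fraction of total requests (article's example app)
    "very_complex": 0.40,
    "moderate": 0.30,
    "simple": 0.30,
}

def avg_render_time(mix=MIX, times=RENDER_TIMES):
    """Weighted average seconds of zope rendering per request."""
    return sum(mix[k] * times[k] for k in mix)

print(round(avg_render_time(), 3))  # 0.625 seconds per request
```

Swap in your own mix and timings once you have real measurements.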

 

Almost there. Now think about your PEAK load - how many users will be using the system simultaneously? For simplicity, assume they can view one page every 2 seconds. If you have 100 simultaneous users always rendering a fresh page view, you then have 50 requests per second. To handle this load, you need 50 * .625 ≈ 31 zope instances busy at any moment. If you oversubscribe your zopes 2:1 to the number of CPUs and each of your boxes has 4 CPUs, that's 8 instances per box, so you are looking at about 4 machines to handle those zope renders (or 7-8 if you only run one instance per CPU). That seems like a lot of hardware, but remember this is the raw rendering of zope application logic, and when you incorporate all the caching and other system components, your total system throughput is going to be many more requests/second. 
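The sizing step above can be sketched the same way. All the inputs here (user count, pages per second, render time, oversubscription ratio) are the article's example assumptions - plug in your own:

```python
# Sizing sketch: concurrent renders = arrival rate * service time
# (Little's law), then divide by instances per box.
import math

def instances_needed(users, pages_per_user_per_sec, sec_per_render):
    """How many zope instances are busy at once under peak load."""
    rate = users * pages_per_user_per_sec          # requests/second
    return math.ceil(rate * sec_per_render)

def boxes_needed(instances, cpus_per_box, zopes_per_cpu=2):
    """Machines required at a given oversubscription ratio."""
    return math.ceil(instances / (cpus_per_box * zopes_per_cpu))

zopes = instances_needed(100, 0.5, 0.625)   # 100 users, 1 page every 2 s
print(zopes)                                # 32 (the article rounds to 31)
print(boxes_needed(zopes, cpus_per_box=4))  # 4 boxes at 2 zopes per CPU
```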

 

Remember, zope is not apache, and it's doing some amazing logic behind the scenes. Be patient with her and keep in mind that each release gets better and better with respect to speed.

 

* There is a nice long discussion on virtualizing Plone in google wave. For the life of me I don't know how to give you a URL for it, but it's public - just search for "Virtualizing Plone" to get the skinny. I'm not sure if I should dig out the pros and cons of a virtual environment here - anyone interested in that?

 

Services, Caching, and Maximizing RAM

In the past I have preached a lot about routing all service and async requests to a non-user-servicing zope. There is a lot of merit to this, and I saw the difference in response time immediately when we made the change. HOWEVER, it also meant that the service zopes had to keep a cache for every client. In our case, certain zopes service certain clients (thanks HAProxy!) in order to optimize response time by hitting an already-filled cache. Makes sense. But then the service zopes had to have a cache for everyone - one that was often 4 times the size of any other zope's. That became a waste of memory better utilized by the client-based zopes, plus a bunch of disk requests just to fill a cache. Additionally, services responded much slower because they were waiting on disk instead of using an already-existing cache. In the end, we switched back to having services on the same zopes as user requests.

 

If you have a lot of services or async requests, consider the advantages and disadvantages of having a zope just for handling services. Long-lived requests will slow down your end users. That makes us respond with: OK, we'll put them on their own zope. But then the requests become even longer because of cache priming, and your RAM gets chowed by service zopes. In the end, the best solution is to dump those long-lived requests by any means necessary. In the case that you can't, measure carefully and consider all options.
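One generic way to "dump" long-lived work out of the request path is to hand it to a background worker queue. This is a minimal, framework-agnostic sketch - not Plone-specific; in a real Zope deployment you'd reach for a proper task queue or clock server instead:

```python
# Request handlers enqueue slow jobs and return immediately;
# a worker thread does the long-lived work off the request path.
import queue
import threading

jobs = queue.Queue()
results = []

def worker():
    while True:
        job = jobs.get()
        if job is None:          # sentinel: shut the worker down
            break
        results.append(job())    # the slow work happens off-request
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()

# The "request handler" just enqueues and moves on.
jobs.put(lambda: sum(range(1000)))
jobs.put(None)
t.join()
print(results)  # [499500]
```

The point is the shape, not the mechanism: the user-facing response no longer waits on the slow work, and the worker's cache/RAM cost is contained in one place.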
