On 09/14/2011 09:51 AM, Jeff Boyce wrote:
I will be getting a new server for my company in the next few months and am
trying to get a better understanding of some of the basic theoretical
approaches to using Centos KVM on a server. I have found plenty of things
to read that discuss how to install KVM on the host and how to install a
guest and setup the network bridge, and all the other rudimentary tasks.
But all of these how-tos make the assumption that I have a philosophical
understanding of how KVM is incorporated into the design of how I use my
server, or network of virtual servers as the case may be. So in general I
am looking for some opinions or guidance from KVM experts or system
administrators to help direct me along my path. Pointers to other HowTo's
or Blogs are appreciated, but I have run out of Google search terms over the
last month and haven't really found enough information to address my
specific concerns. I suspect my post might get a little long, so before I
give more details of my objectives and specific questions, please let me
know if there is a better forum for me to post my question(s).
The basic questions I am trying to understand are: (1) what functions in my
network should my host be responsible for, (2) what functions should
logically be separated to different VMs, and (3) how should I organize
disks, RAID, LVM, partitions, etc. to make the best use of how my system will
function? Now, I know these questions are wide open without any context as to
the purpose(s) of my new server or my existing background and knowledge, so
that is the next part.
I am an ecologist by education, but I manage all the computer systems for my
company with a dozen staff. I installed my current Linux server about 7
years ago (RHEL3) as primarily a Samba file server. Since then the
functions of this server have expanded to include VPN access for staff and
FTP access for staff and clients. Along the way I have gained knowledge and
implemented various other functions that are primarily associated with
managing the system, such as tape backups, UPS shutdown configuration, Dell's
OMSA hardware monitoring, and network time keeping. I am certainly not a
Linux expert and my philosophy is to learn as I go, and document it so that
I don't have to relearn it again. My current server is a Dell (PE2600) with
1 GB RAM and 6 drives in a RAID 5 configuration, without LVM. I have been
blessed with a very stable system with only a few minor hiccups in 7 years.
My new server will be a Dell (T610) with 12 GB RAM, 4 drives in a RAID 5
configuration, and an iDRAC6 Enterprise Card.
The primary function of my new server hardware will be as the Samba file
server for the company. It may also provide all, or a subset of, the
functions my existing server provides. I am considering adding a new
gateway box (ClearOS) to my network and could possibly move some functions
(FTP, VPN, etc.) to it if appropriate. There are also some new functions
that my server will probably be responsible for in the near future (domain
controller, groupware, open calendar, client backup system [BackupPC]). I
am specifically planning on setting up at least one guest VM as a space to
test and set up configurations for new server functions before making
them available to all the staff.
So, to narrow down my first two questions: how should these functions be
organized between the host system and any guest VMs? Should the host be
responsible just for hardware maintenance and monitoring (OMSA, APC
shutdown), or should it include the primary function of the hardware (Samba
file server)? Should remote access type functions (FTP& VPN) be segregated
off the host and onto a guest? Or should these be put on the gateway box?
I have not worked with LVM yet, and I am trying to understand how I should
set up my storage space and allocate it to the host and any guests. I want
to use LVM, because I see the many benefits it brings for flexible
management of storage space. For my testing guest VM I would probably use
an image file, but if the Samba file server function is in a guest VM I
think I would rather have that as a raw LV partition (I think?). The more I
read the more confused I get about understanding the hierarchy of the
storage (disks → RAID → [PV → VG → LV] → partition → image file) and how I
should organize and manage the file system for my functions. At this point I
don't understand it well enough to ask a more specific question.
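For what it's worth, the hierarchy becomes clearer once you see the commands that build each layer. This is only a sketch: every name in it (sdb/sdc, md0, vg_data, lv_samba, lv_test) is hypothetical, chosen for illustration, and the commands would need adjusting for your actual hardware.

```shell
# All names here (sdb/sdc, vg_data, lv_*) are hypothetical -- substitute your own.
# The layering is: disks -> RAID array -> PV -> VG -> LV -> filesystem or guest disk.

# 1. Combine two whole disks into a software RAID-1 array
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# 2. Mark the array as an LVM physical volume (PV)
pvcreate /dev/md0

# 3. Pool one or more PVs into a volume group (VG)
vgcreate vg_data /dev/md0

# 4. Carve logical volumes (LVs) out of the pool
lvcreate -L 100G -n lv_samba vg_data   # could back a Samba guest as a raw disk
lvcreate -L 20G  -n lv_test  vg_data   # could hold image files for test guests

# 5. An LV can carry a filesystem for the host...
mkfs.ext4 /dev/vg_data/lv_test
# ...or be handed to a guest untouched, in which case the guest sees it as a
# blank disk and partitions it itself.
```

The "image file" layer from the hierarchy only appears in the second case: a filesystem on an LV (or on a plain partition) can hold ordinary files that guests use as virtual disks.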
Thanks to anyone who has had the patience to read this much. More
thanks to anyone that provides a constructive response.
Chris and Eric bring up interesting options, which deserve consideration
and understanding. I don't necessarily disagree with anything they've
said, but I've taken a different tack with virtualization of SMB
servers. Disclaimer: while I've been using VMware Server2 in the past,
I'll be migrating to KVM/CentOS soon, and what I'd like to write about
here is independent of platform.
Let me begin by agreeing with Chris that the host machine should do only
what's needed to be done by the host. The only service (per se) that I
run on a host is ntp. VMs typically have difficulties keeping track of
time, so I run an ntp server on the host, and all VMs use the host time
server for synchronization.
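As a concrete illustration of that arrangement, the config fragments below are hypothetical: 192.168.1.1 stands in for whatever the host's LAN/bridge address actually is, and the pool server names are just the usual CentOS defaults.

```shell
# --- On the host: /etc/ntp.conf (hypothetical addresses) ---
# The host syncs against public pool servers...
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
# ...and lets machines on the LAN subnet query it without modifying it.
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

# --- In each guest: /etc/ntp.conf ---
# The host (192.168.1.1 here) is the guest's only time source.
server 192.168.1.1 iburst
```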
Generally speaking, virtualization brings a certain freedom to how
various pieces of software are brought together to act in concert. Think of
it this way. If there were no limits to the number of servers you could
have, how would you do things then?
One might organize servers by network topology, in which case there
would be a network server (gateway, firewall, vpn, dns, dhcp), a WAN
server (web, ftp, mail), and a LAN server (file services, domain
controller). This fits nicely with the IPCop model of a firewall.
One might then subdivide these servers further into roles, in which case
the WAN server could be replaced by separate web, mail, and ftp servers.
The LAN server could be split into a separate data server and domain
controller.
Now, take all of your ideal logical servers (and the networking which
ties them all together), and make them VMs on your host. I've done this,
and these are the VMs I presently have (the list is still evolving):
.) net (IPCop distro, provides network services, WAN/DMZ/LAN)
.) web (DMZ/STOR)
.) ftp (DMZ/STOR)
.) mail (DMZ/STOR)
.) domain control (LAN/STOR)
.) storage (LAN/STOR)
One aspect that we haven't touched on is network topology. I have 2 nics
in the host, one for WAN and one for LAN. These are both bridged to the
appropriate subnet. I also have host-only subnets for DMZ and STORage.
The DMZ is used with IPCop port forwarding giving access to services
from the internet. The STOR subnet is sort of a backplane, used by
servers to access the storage VM, which provides access to user data via
SMB, NFS, AFP, and SQL. All user data is accessed via this storage VM,
which has access to raw (non-virtual) storage.
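On CentOS, a bridged setup like this comes down to a few ifcfg files; the interface names and addresses below are hypothetical, but the shape is standard.

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0  (physical LAN nic, hypothetical name)
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0        # enslave the nic to the bridge; no IP on the nic itself

# /etc/sysconfig/network-scripts/ifcfg-br0   (the bridge guests attach to)
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.1
NETMASK=255.255.255.0

# A host-only subnet (like the DMZ or STOR backplane above) is the same idea,
# except the bridge has no physical nic enslaved to it, so traffic never
# leaves the host.
```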
Which brings us to storage. With the size, speed and affordability of
hard drives these days, I believe that raid-5 and LVM are not well
suited for an SMB server.
Raid-5 requires specialized hardware to do efficiently, and I don't like
the idea of storage being tied to a single vendor if not absolutely
necessary. You can't simply take the drives from a hardware raid array
and put them on another manufacturer's controller. With software raid
you can, but software raid-5 can be burdensome on the cpu. So I prefer
raid-1 (or raid-10 if more capacity is needed), which I now use exclusively.
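A sketch of both levels, with hypothetical member disks (sdb through sdg), and the portability point made concrete:

```shell
# Hypothetical member disks -- substitute your own.

# raid-1: a plain mirror, e.g. for the system disks
mdadm --create /dev/md0 --level=1  --raid-devices=2 /dev/sdb /dev/sdc

# raid-10: striped mirrors, e.g. for the data disks
# (better throughput than raid-1, survives one failure per mirror pair)
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdd /dev/sde /dev/sdf /dev/sdg

# The portability argument: md metadata lives on the member disks themselves,
# so another Linux box can reassemble the arrays without the original controller.
mdadm --assemble --scan
```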
LVM provides flexible storage in a single host environment, but with
virtualization, I fail to see the point/benefit of it. Raid-10 allows a
filesystem to grow beyond the limits of a single disk. Other than that,
what does LVM really do for you? Would you rather have a VM reside on
the host in its own LV, or simply in its own directory? I'll take the
directory, thank you. In a virtualized environment, I see LVM as
complexity with no purpose and have ceased using it, both in the host
and (especially) in the VMs.
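For comparison, here are the two ways of backing a guest disk discussed in this thread; all paths and names are hypothetical.

```shell
# (a) Image file in a plain directory: trivially copied, moved, and backed up.
qemu-img create -f qcow2 /var/lib/libvirt/images/testvm.qcow2 20G

# (b) Raw logical volume: less overhead in the I/O path, but tied to LVM tooling.
lvcreate -L 20G -n lv_testvm vg_data
# The guest definition then references /dev/vg_data/lv_testvm as its disk
# device instead of a file path.
```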
In the configuration of VMs described above, I use 2 small drives (even 80G
would be more than needed) for the host and system software (raid-1), and all
other (as big as you need) drives for data storage (raid-10). You can
think of the data storage server as a virtualized NAS machine, and you
wouldn't be far off. This way, space requirements are easily managed.
Allowing 8G for each machine (host plus each VM) is more than ample. If
you don't have Xwindows on a machine, 4G would be sufficient. Your user
data space can be as big as you need. If/when you outgrow the capacity
of internal drives, you can move the storage VM external to a separate
host that has adequate capacity.
Which brings us to another aspect to this configuration: it scales well.
As the demands on the server approach its capacity (after appropriate
tuning is done), the resource intensive parts can be implemented on
additional hosts. If/when you reach that point will depend on the number
of users and the types of demand put to it, as well as how well
your servers are tuned.
This all may be a bit more complicated than you intend to get, but you
can take it for what it's worth. If you'd like to discuss the
possibilities further regarding your situation, please feel free to
subscribe to the list at https://lists.tagcose.com/mailman/listinfo/list
for help with such a server. I hope to 'see' you there. :)