I need to support 100 users on a Terminal Server running Office 2000, and I need to know how to scale the processor for this load. Does anyone know a safe baseline per user?
Yep, allow at least 256MB of RAM for the server plus 32-64MB of RAM per user (I recommend 1GB of RAM total).
Use separate disks for Win2K, swap, apps and data on a fast SCSI array (or arrays). Do not use IDE - it's too slow. Allocate 2 x RAM for a swap file on its own partition.
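For what it's worth, here is a minimal sizing sketch in Python built only on the figures above (256MB base, 32-64MB per user, swap at 2 x RAM). It is a linear upper bound; as noted later in the thread, shared application code means the real requirement is lower, hence the 1GB recommendation.

```python
# Rough RAM/swap sizing from the rule of thumb above.
# This is a linear upper-bound estimate, not a measured figure.

def size_terminal_server(users, base_mb=256, per_user_mb=32):
    """Base OS allowance plus a per-user RAM allowance."""
    ram_mb = base_mb + users * per_user_mb
    swap_mb = 2 * ram_mb          # swap file sized at 2 x RAM, own partition
    return ram_mb, swap_mb

if __name__ == "__main__":
    for per_user in (32, 64):
        ram, swap = size_terminal_server(100, per_user_mb=per_user)
        print(f"{per_user}MB/user -> RAM {ram}MB, swap {swap}MB")
```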
Go for a FAST processor (I recommend 2 x 866MHz minimum) and make sure that you have access to switched Ethernet at 100Mbit/s using a "quality" (name brand) network card.
Do not make the machine a DC and you should be flying.
This same configuration supports 100 users at one of my sites, and users are blown away by the performance improvement... for the first day, and then they forget how slow their old system was!
I'm using RAID 5 on 9.1GB Ultra2 SCSI drives. This will only be an app server. I will have a SAN set up for my data. There is a separate server for authentication.
Diminishing returns applies here: because the application's code is cached in RAM and shared between instances, RAM per user does not scale linearly. However, each user runs their instance in their own memory space, so any RAM added to the server provides more "headroom" for the users to play in.
Certainly it will run with a lot less RAM, but the penalty in swap time will be enormous. I have suggested a proven setting (1GB) as a realistic compromise.
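One way to picture the non-linear scaling, with purely illustrative figures (the shared/private working-set split below is an assumption for the example, not a measurement):

```python
# Illustrative only: contrast a naive linear estimate with one that counts
# shared application code pages once. The 16MB shared / 8MB private split
# is assumed for the sake of the example.

def naive_linear(users, base_mb=256, per_user_mb=32):
    return base_mb + users * per_user_mb

def shared_code_model(users, base_mb=256, shared_app_mb=16, private_mb=8):
    # Application code is paged in once and shared; only per-user data
    # (documents, settings, heap) is charged to each session.
    return base_mb + shared_app_mb + users * private_mb

print(naive_linear(100))        # 3456 MB - linear upper bound
print(shared_code_model(100))   # 1072 MB - close to the 1GB compromise
```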
The CPUs sound OK though, and an app server plus a SAN equals a happy administrator. Good luck!
My suggestion would be to use at least 2 Terminal Servers for 100 users, and get dual proc boxes instead of a single quad box.
RAM = 128MB (at least) for the OS, and at least 32MB per user. Use Perfmon to find actual user loads, since applications consume differing amounts of resources (see the sketch below).
RAID 5 is not necessary on a T/S - I'd use RAID 1.
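A quick sketch of that sizing, splitting the 100 users across two boxes. The 32MB per-user default is just the floor suggested above; ideally you would substitute whatever per-session working set Perfmon actually reports for your application mix.

```python
# Size each of the two Terminal Servers using the rule of thumb above:
# at least 128MB for the OS plus at least 32MB per user. Replace
# measured_per_user_mb with the figure Perfmon reports for your users.

def per_box_ram(total_users, servers=2, os_mb=128, measured_per_user_mb=32):
    users_per_box = -(-total_users // servers)   # ceiling division
    return os_mb + users_per_box * measured_per_user_mb

print(per_box_ram(100))                          # 1728 MB per box at 32MB/user
print(per_box_ram(100, measured_per_user_mb=48)) # 2528 MB if Perfmon shows 48MB
```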