
Looking for ProCurve design guide


GM2005 (ISP) - Sep 28, 2005
Hi

I am trying to help an architecture business with latency issues caused by their power-user environment. There are 90+ MicroStation XM users. It is a resource-hungry product: it links up to 20 reference files to the one being worked on and constantly refreshes them from the origin server. They have major latency issues.

Does anyone know of a good best-practice or case-study guide, preferably for ProCurve but covering server deployment as well? I'm not angling to sell them anything; I need to understand what others have done in these environments. If the answer is uncomfortable for them (10-gig), then so be it.
 
Are you sure this is a network latency issue and not a server latency issue? With that many file locks going on...


"We must fall back upon the old axiom that when all other contingencies fail, whatever remains, however improbable, must be the truth." - Sherlock Holmes

 
Looks like it to me. The problem is I want to make sure they get the last word on how to improve things. They have had a lot of advice and are asking why there is no improvement. A server guy came in this morning and declared there were no issues, but I'm unconvinced. What I need is a real-world example of how to approach this environment from both the LAN and the server angles. Currently a 45 MB file is taking 43 seconds to open on a desktop running (insanely) dual 64-bit processors and 4 GB of RAM.
 
One test you could do to separate a network issue from a server issue is to run that copy test after hours using a crossover cable directly from the PC to the server. If the copy is fast over the direct link, then you possibly have a cabling/switching issue.
Try that and let us know how it goes; a minimal timing harness is sketched below.
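As a sketch of that timing harness (Python, with hypothetical UNC and local paths; substitute real ones). Run it once through the normal switched path and once over the crossover cable, then compare:

```python
import os
import shutil
import time

# Hypothetical paths -- point SRC at a real file on the server share
# and DST at a local folder on the workstation under test.
SRC = r"\\fileserver\projects\sample.dgn"
DST = r"C:\temp\sample.dgn"

start = time.perf_counter()
shutil.copyfile(SRC, DST)          # same bulk copy Explorer performs
elapsed = time.perf_counter() - start

size_mb = os.path.getsize(DST) / (1024 * 1024)
print(f"Copied {size_mb:.1f} MB in {elapsed:.1f} s "
      f"({size_mb / elapsed:.2f} MB/s, {size_mb * 8 / elapsed:.1f} Mbit/s)")
```

If the crossover run is dramatically faster than the switched run, the infrastructure between the two machines is suspect.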
 
You might also want to get Bentley involved in this. Given the number of seats of MicroStation involved, it's almost certain that the user has a current maintenance agreement (Bentley calls this SELECT).

I have seen issues like what you're describing, and it usually comes down to something like invalid xrefs in the model. Note that I have a few models a couple of hundred MB in size, with a few dozen xrefs, running on a fairly poorly designed network (working on it...) on machines with half the capacity you're talking about, and I do not see many problems like this.
 
"Currently a 45 MB file is taking 43 seconds to open on a desktop running (insanely) dual 64-bit processors and 4 GB of RAM."

As far as network latency goes, my wristwatch will download as fast as one of my servers. The computing muscle of a client workstation is largely meaningless.

A simple test would be to copy a 45 MB file from that server to that workstation and see how long it takes. If that copies in a reasonable time, you don't have a network latency issue.
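For scale: 45 MB in 43 seconds is an effective rate of about 1 MB/s, or roughly 8 Mbit/s, far below the 80-90 Mbit/s a clean Fast Ethernet link sustains on a bulk copy. That is what makes the copy test diagnostic: if the raw copy is also that slow, the network or server is implicated; if the copy is fast while MicroStation's file open is not, the time is going into the application's reference handling rather than the wire.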


"We must fall back upon the old axiom that when all other contingencies fail, whatever remains, however improbable, must be the truth." - Sherlock Holmes

 
I am in the process of getting Bentley on board. What I want, though, is one of those best-practices documents like Microsoft produces for Exchange etc. I personally believe it is a server access issue, as I believe MicroStation links a lot of objects together when working on a project.

They have a single Mac server holding all the files which, given there are 80 users, doesn't seem adequate to me.
 
You mentioned your client is an AE firm, so yes, I think it's fair to say that most of their models are going to be fairly complex (structural, electrical, piping, heck, they probably even have a layer for the landscaping) and that there will be a decent number of references. Still, I think LawnBoy has the right of it when he says that you need to benchmark. Remove the application from the mix and see how fast you can push raw data; a raw-socket sketch follows.
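One way to push raw data with the application and file system out of the picture is a bare TCP stream test (an iperf-style sketch in Python; the port number and payload size are arbitrary assumptions). Run the server side on the file server and the client side on a workstation:

```python
import socket
import sys
import time

PORT = 50007            # arbitrary test port (assumption)
PAYLOAD_MB = 100        # how much data to stream
CHUNK = 64 * 1024

def server():
    """Accept one connection and stream PAYLOAD_MB of zeros."""
    with socket.socket() as s:
        s.bind(("", PORT))
        s.listen(1)
        conn, _ = s.accept()
        with conn:
            data = b"\0" * CHUNK
            for _ in range(PAYLOAD_MB * 1024 * 1024 // CHUNK):
                conn.sendall(data)

def client(host):
    """Receive until the server closes, then report throughput."""
    received = 0
    start = time.perf_counter()
    with socket.create_connection((host, PORT)) as s:
        while chunk := s.recv(CHUNK):
            received += len(chunk)
    elapsed = time.perf_counter() - start
    mb = received / (1024 * 1024)
    print(f"{mb:.0f} MB in {elapsed:.1f} s = {mb * 8 / elapsed:.0f} Mbit/s")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```

If this raw stream runs at wire speed while the MicroStation file opens crawl, the problem lives above the network layer.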

Fast Ethernet to the desktop with a gigabit backbone will probably be enough for their needs. "One server" may be enough, depending on what kind of box we're talking about. But being reticent about replacing this infrastructure is crazy. Realize that the 90+ seats of MicroStation they're running cost them in the neighborhood of half a million bucks, and the workstations (from your description) ran at least another $150k on top of that.

In raw numbers, it comes down to this: "You spent the better part of three quarters of a million dollars ensuring that your architects and engineers could get their jobs done. It's going to cost another 5% to maximize that investment."

To address your actual question, though, see:


The above two links have case studies, white papers, datasheets, design guides, etc. Basically, everything you are looking for. Be prepared to invest a lot of time, though, because we're talking about tens if not hundreds of thousands of pages of information.
 
Thanks jkupski. That is exactly what I wanted. Also, on the xrefs: a colleague put that point to their design manager today, and he admitted they may have problems there...



 
GM2005, just wondering if you have an update on this one? I'm kind of curious to see where this is all heading.
 
Hi
I made some suggestions:

Server clustering - short term
Engagement of Bentley: based on the earlier point about the number of licenses, I suggested they should at least be able to engage Bentley for guidance (they agreed) - short term
Further VLAN segmentation - short term

Additionally, they accepted your points from the 10th regarding xrefs etc. (thanks for that one).

I recommended that, mid-term, they plan for multiple uplinks from the access switches, with multiple paths for aggregated uplinks and fault tolerance. I pointed out that 10-gig uplinks are dropping in price, and if they are expanding they will need to look at that in the future.

I am implementing in-depth SNMP and syslog reporting of server and network performance; a polling sketch is below.
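As a sketch of the kind of polling I mean (assuming the Python pysnmp library, SNMP v2c with a read-only community, and a hypothetical switch address; the interface index would come from walking the interface table first):

```python
import time
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

SWITCH = "192.0.2.10"   # hypothetical management address
IF_INDEX = 1            # interface to watch, e.g. the server uplink

def if_in_octets(host, index):
    """Read IF-MIB::ifInOctets (numeric OID) for one interface via SNMP v2c."""
    error_ind, error_stat, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData("public"),                  # read-only community (assumption)
        UdpTransportTarget((host, 161)),
        ContextData(),
        ObjectType(ObjectIdentity(f"1.3.6.1.2.1.2.2.1.10.{index}")),
    ))
    if error_ind or error_stat:
        raise RuntimeError(str(error_ind or error_stat))
    return int(var_binds[0][1])

# Sample the counter twice and derive an average inbound rate.
# (A real poller would also handle 32-bit counter wrap.)
a = if_in_octets(SWITCH, IF_INDEX)
time.sleep(60)
b = if_in_octets(SWITCH, IF_INDEX)
print(f"avg inbound: {(b - a) * 8 / 60 / 1e6:.2f} Mbit/s")
```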

We have blocked outbound internet access for the bandwidth-sapping apps we found, and persuaded them to carry out a software audit, adopt a software policy, and initiate a strict change-management procedure.

Some interesting findings were:

It took 7 seconds to copy a 50 MB file from the file server to the worst-affected workstation's desktop through Windows Explorer during a peak hour; that works out to roughly 57 Mbit/s, which I consider acceptable by any standard given the environment.

They have no STP on their core switch! (A config sketch addressing this is below, after these findings.)

Since we floated the suggestion that they locate rogue apps, performance has improved.

They have VoIP and no layer 2 QoS.

They have other shared apps/data on the file server.

User perception is a big factor: a lot of the users use Macs outside of work, while the organisation is a Windows house, and they are parochial about it.
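On the STP and uplink points, the ProCurve-side fixes are only a few lines of config. This is a sketch from memory with hypothetical port numbers; exact syntax varies by model and firmware, so verify it against the switch's management guide:

```
; On the core switch: enable spanning tree (RSTP/MSTP depending on model)
spanning-tree

; Aggregate two uplink ports into an LACP trunk for bandwidth and failover
; (ports 25-26 are hypothetical)
trunk 25-26 trk1 lacp
```

Layer 2 QoS for the VoIP VLAN (802.1p priority) is a similarly short change, but the command set differs enough across ProCurve models that I won't guess at it here.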


 
That sounds like one HUGE can of worms you have there. Looks like you have a decent plan of action, though--here's hoping that the client follows your suggestions!

 