From: Jeff Yana <jyana@(email suppressed)>
Subject: Looking For Any Feedback on Top Tier Storage Products
   Date: Sun, 18 Nov 2007 16:43:21 -0500
Msg# 1641
View Complete Thread (6 articles) | All Threads
Last Next
Dear List-

I am looking to get feedback from anyone with first-hand field experience
using currently shipping products from the following vendors: Isilon,
BlueArc, Sun, SGI, Agami and OnStor.

I am currently evaluating products from the first two vendors, but would
like feedback from those currently working in a POST-SALES support role.
Ideally you will have had the opportunity to configure, maintain and
monitor the product, and to interface with the vendor's TAC group, so that
you can comment on the quality of their support as well.

If you use any other products not listed above but find them to be
exceptional, please feel free to comment on those as well.

Any comments would be most appreciated.

Thank you.

Jeff Yana 


   From: Greg Ercolano <erco@(email suppressed)>
Subject: Re: Looking For Any Feedback on Top Tier Storage Products
   Date: Sun, 18 Nov 2007 17:42:50 -0500
Msg# 1643
Jeff Yana wrote:
> [posted to rush.general]
> 
> 
> Dear List-
> 
> I am looking to get feedback from anyone with first hand, field experience
> using currently shipping products from the following vendors: Isilon, 
> Blue Arc..

	Jeff, I know you omitted NetApp from your list because you're already
	familiar with their hardware, and have weighed in on it before:
	http://seriss.com/cgi-bin/rush/newsgroup-threaded.cgi?-view+743
	http://seriss.com/cgi-bin/rush/newsgroup-threaded.cgi?-view+746

	I know NetApps have saved many companies in the past,
	though now I hear BlueArc and Isilon mentioned a lot more these days.

	I think Saker has stories to tell about Isilon that you missed
	at one of the last Sysadmin Bash meetings at the Cat & Fiddle.
	And I think Rob Minsk is quite familiar now with BlueArc.
	Anyone else, feel free to chime in.

> Sun, SGI, Agami and OnStor.

	SGI? Are they still around? ;)

	I wouldn't usually advise investing in hardware companies that
	have filed for bankruptcy.

	That said, back in 2001 I helped set up a render farm config at ICT
	which involved a used rack-mount 2-proc 225MHz R10K SGI Origin 200
	from GET.COM (for $6750), and two 450G ADTX raid5 systems (for $6000 each)
	from RFX.COM hanging off it.

	At the time I knew of several CG companies using ADTX raids with the
	SGI, and had gone through failure modes with them, and they worked
	quite well. And of course RFX support had always been great, so we liked
	that aspect. Within a few weeks one of the two raids had a bad drive;
	the raid went into "degraded" mode but continued file serving.
	I hot-swapped in a new drive while a 16-proc Onyx was rendering
	a heavy job on the raid, and the new drive configured itself and
	jumped online.

	For switching, the network admin at ICT wanted to go with an
	Allied Telesyn switch with dual fiber uplinks to the Origin.

	I think Rob Groome later inherited that whole setup after I finished
	my contract work for them, so if he's still tuned into the group,
	he can maybe weigh in on how all that carried for the years after,
	or if it all had to be thrown out of a window onto Lincoln Blvd.
	It'd be fun to know if that setup is still in place.

-- 
Greg Ercolano, erco@(email suppressed)
Rush Render Queue, http://seriss.com/rush/
Tel: (Tel# suppressed)
Fax: (Tel# suppressed)
Cel: (Tel# suppressed)

   From: Jeff Yana <jyana@(email suppressed)>
Subject: Re: Looking For Any Feedback on Top Tier Storage Products
   Date: Sun, 18 Nov 2007 18:56:26 -0500
Msg# 1644
 
> Jeff, I know you omitted NetApp from your list because you're already
> familiar with their hardware, and have weighed in on it before:
> http://seriss.com/cgi-bin/rush/newsgroup-threaded.cgi?-view+743
> http://seriss.com/cgi-bin/rush/newsgroup-threaded.cgi?-view+746
> 
> I know NetApps have saved many companies in the past,
> though now I hear BlueArc and Isilon mentioned a lot more these days.

Yes, I intentionally left Netapp out for this and other reasons. It's true
that Netapp is battle-tested and a first-rate product; I believe their
software to be "best of breed". I like their products, truly. Unfortunately,
their product designs have not really kept pace with industry trends.
Nowhere is this better demonstrated than in their late arrival into the
clustered storage arena (they had to buy a company called Spinnaker before
they could ship their first clustered storage product). While they now have
a clustered product (ONTAP GX), they are not really pushing it, and word on
the street is that it is not really ready for production. Having said that,
I thought I heard somewhere that WETA (or was it ILM?) was in the process of
rolling out a massive GX deployment, so I could be wrong about this.

 
> I think Saker has stories to tell about Isilon that you missed
> at one of the last Sysadmin Bash meetings at the Cat & Fiddle.
> And I think Rob Minsk is quite familiar now with BlueArc.
> Anyone else, feel free to chime in.
>

Great, would love to hear about it.

>> Sun, SGI, Agami and OnStor.
> 
> SGI? Are they still around? ;)

Oh yes. In fact, I think SGI is betting their turnaround on the highly
competitive storage market. As you know, they are really getting behind
Linux in a big way. Most companies today selling proprietary storage and
networking products are tapping into the open source community in some
fashion, so it is natural that SGI does the same. Their products are still
better tuned for the academic and government space, but I think they are
looking more and more at the enterprise and SMB markets as well.
> 
> I wouldn't usually advise investing in hardware companies that
> have filed for bankruptcy.

Yes, but they are doing fine now. They were re-listed on Nasdaq last year,
and their stock price, while not at an all-time high, is healthy. More and
more they are making news, for good reasons, not bad. So yes, I think they
are still worth a look.
 
> That said, back in 2001 I helped setup a render farm config at ICT
> which involved a used rack mount R10K 2 proc 225MHz SGI Origin 200
> from GET.COM for ($6750), and two 450G ADTX raid5 systems (for $6000 each)
> from RFX.COM hanging off it.
> 
> At the time I knew of several CG companies using ADTX raids with the
> SGI, and had gone through failure modes with them, and they worked
> quite well. And of course RFX support had always been great, so we liked
> that aspect. Within a few weeks one of the two raids had a bad drive,
> the raid went into "degraded" mode but continued file serving.
> I hot-swapped in a new drive while a 16 proc Onyx was rendering
> a heavy job on the raid, and the new drive configured itself and
> jumped online.
> 
> For switching, the network admin at ICT wanted to go with an
> Allied Telesyn switch with dual fiber uplinks to the Origin.
> 
> I think Rob Groome later inherited that whole setup after I finished
> my contract work for them, so if he's still tuned into the group,
> he can maybe weigh in on how all that carried for the years after,
> or if it all had to be thrown out of a window onto Lincoln Blvd.
> It'd be fun to know if that setup is still in place.

Their next-gen storage products are found in the InfiniteStorage series. I
believe these are all Linux-based NAS solutions. I was looking at this a few
years ago for one client, and passed on it mostly because of price.
Fortunately, today I work with clients where price is not so much an issue
anymore.


   From: Saker Klippsten <saker@ZOICSTUDIOS.COM>
Subject: Re: Looking For Any Feedback on Top Tier Storage Products
   Date: Mon, 19 Nov 2007 19:05:15 -0500
Msg# 1645
Hey Jeff

Switch-wise:

I don't think anything can beat a Force10 switch. I don't have to think
twice about recommending them at all. Their support is top notch.
It's got a Cisco-like syntax and config, so if you are familiar with Cisco
this should be easy to learn. We have an E1200
( http://www.force10networks.com ) at the heart of our datacenter; everything
plugs into this or the S50s via 10GigE.

We had Extreme switches and I can't say one good thing about them. Their
products suck and their support service sucks.
The technology used to power them is very, very outdated. It's more
software-based, and that's the primary reason why it locks up
and can't handle the "extreme bandwidth" requirements of our field. The
Force10 has not had one hiccup since we launched it. They are rock solid.
The E1200 has a 5TB backplane.

Storage-wise:

We have over 100TB of Isilon. Been down the SAN route and I am not
looking back.

Our setup: over 1000 procs and 160 or so workstations.

Isilon has been solid for us. Though, like anything, it's not without its
hiccups; we paid that price 5 years ago when we helped alpha the product,
but today it's running great.

-Ease of use and management of the cluster: I don't think there is a contender out there.
-Snapshots are nice and easy to manage.
-Quota systems: Hard, Soft and Alert.
-Replication software is fast! ( SyncIQ )
-Aspera will run natively on the Isilons; we use this to manage and replicate
our data to a Vancouver disaster-recovery cluster in case this log cabin burns
down :)  It's a software WAN accelerator ( really amazing ) (it will increase
your WAN link speed 100x).
-Adding storage is like popping in a removable drive. Connect a few cables
and power on.  Having redundancy is what I like the most.

While I am sure the single-stream performance is not as good as the newer
BlueArcs (though I am sure it's right behind them), you get redundancy not
only for your data but also for the systems serving the data. You have the
option to independently grow your performance versus the size of your
cluster by adding Accelerator nodes, just storage, or a combo of both.

-Isilon can monitor your cluster remotely if you opt to, and alert you of
 anything they think might fail; for a disk that is starting to fail, they
 can predict this and soft-fail it ahead of time.

-I know everyone bags on SATA drives, but I have had more Fibre drives fail
 in one year on my DDN and Flame arrays than on our 40-node Isilon cluster,
 which has about 480 drives in it: 4 failed in the last year.

-As you know, it uses InfiniBand as its backend communication protocol, and
 InfiniBand is slowly making its way into many other products we use here on
 the high-end compositing side of things. While I do not know for sure, it
 would seem likely that Isilon could enable front-end use of these InfiniBand
 ports for direct access, or plug into another InfiniBand switch to enable
 high-speed access to the cluster.  I know they have 10GigE support, or will
 soon, in the form of an accelerator node or the like. :)
 This would put off having to utilize the GigE ports on each node and just
 have two 10GigE fiber ports feeding the network switch.

I am partial to Isilon just as some might be partial to Netapp or BlueArc.
But I am a sucker for ease of use and management functionality, ohh, and the
Blue Lights.

Price-wise, Isilon has done better against the rest in the past, though I
know you might be able to get a good deal with BlueArc right now as they are
competing very aggressively for market share.

What it all comes down to is your environment: get an Isilon in and test it
out; get a BlueArc or a Netapp and test it. I know it's tough sometimes to
get demos in, but these days they are all eager to get another sale in.
Isilon will put a demo in, no problem. With BlueArc and Netapp it might take
a bunch of meetings and moving it up the chain of command.

I would love to hear feedback on all the storage out there as well. Some
people don't have the time to demo or just don't want to deal with it.  We
used to have lots of time to test and play with storage :)  Though I might
have some time coming up if this strike continues. Grrrrrr


-S

   From: Saker Klippsten <saker@ZOICSTUDIOS.COM>
Subject: Re: Looking For Any Feedback on Top Tier Storage Products
   Date: Mon, 19 Nov 2007 19:16:28 -0500
Msg# 1646
Forgot to mention Isilon support.
There are different levels of support.  Don't bother with the
onsite-within-4-hours crap; that's what we had before and never needed it.
Next-day should be fine and save you some cash too.
They monitor our clusters 24-7 and we get e-mails and pages as well.  Most
of our issues get resolved swiftly.  There have been 1 or 2 that lagged on
because it was hard to replicate the issue on their side, or capture it
when it was happening on our side. Turns out it was related to a CIFS bug.
That said, we are all NFS now, even on Windows, and the cluster performs
faster. CIFS has lots of CPU overhead and causes locking issues, which I am
happy to say I no longer have to deal with.

-S

   From: Jeff Yana <jyana@(email suppressed)>
Subject: Re: Looking For Any Feedback on Top Tier Storage Products
   Date: Wed, 21 Nov 2007 04:23:56 -0500
Msg# 1649


Hi Saker-

Thanks for the lengthy reply. Yeah, I heard that earlier versions of the
Extreme OS were "buggy", but I thought the problems were confined to their
implementation of BGP. Doesn't sound like that applies to you. Maybe these
issues have been fixed with their move over to Linux. Hopefully, for their
sake and their customers', they have. All in all their products are quite
reputable, but Force 10 is definitely coming on strong these days.

I agree with you regarding Force 10. I am currently leaning toward the
S50s. It is simply a great stackable. It's more expensive than the Extreme,
but cheaper than Cisco (assuming you have negotiated deep discounts off the
street price). I likely will be ordering 5 of them to get started, though
final pricing isn't locked in quite yet.

It sounds like you are pretty happy with the Isilon stuff. I am currently
doing an eval of it on-site at a client's using four (4) of the 3000 nodes +
one (1) accelerator. It's holding up quite well, but then again they have
far fewer nodes than does Zoic. I am mildly disappointed that the total
throughput of a single cluster node can't seem to get beyond megs a second
(sustained). It can peak there with little effort, but sustained throughput
tends to hover around 35 megs a second. Having said that, I like the
clustered file-system approach. Its ability to intelligently load-balance
network and disk IO is a huge selling point. I think that the move to SAS
drives will help future performance. I am also curious if their decision to
use TCP/IP for the data channel (over InfiniBand) plays a role. I was a
little shocked to hear that they opted for it as their high-speed transport.
Sure, it keeps costs down, but what about performance? I am wondering
whether they will ever come close to saturating the 10GigE front-end they
are testing now. It surely will be interesting to see.
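A sustained figure like that ~35MB/s can be sanity-checked with a simple
sequential-write loop. A minimal sketch (the target path, file size and
block size here are examples, not details from the eval):

```python
# Rough sustained sequential-write throughput check against a filesystem.
# Point it at the mount under test (e.g. an NFS mount of the cluster);
# the default path and sizes below are placeholders.
import os
import time

def write_throughput(path, total_mb=256, block_kb=512):
    """Write total_mb of zeros in block_kb chunks; return MB/s sustained."""
    block = b"\0" * (block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hit the server
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed

if __name__ == "__main__":
    print(f"{write_throughput('/tmp/tptest.dat'):.1f} MB/s sustained")
```

Run it with a file well larger than the client's RAM; otherwise client-side
caching inflates the number right up until the final fsync.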

Yeah, I know what you mean about SATA drives. MTBF ratings are a ruse if you
ask me. Spindle speed and count matter the most, AFAIK. I believe the latest
gen of SATA drives offer command queuing, once the single greatest advantage
of SCSI.  Sure, the SCSI interface and protocol is higher-bandwidth and more
robust, but with enough spindles you can close the gap, and add capacity.

One final gripe with Isilon is that it uses Samba. I expected Isilon to
have their own CIFS software stack for this caliber of product. This is not
a deal-breaker, and as good as Samba is, it cannot touch the proprietary
products offered by NetApp and others. Let's face it, while powerful, Samba
is also buggy, but what do you expect, it's open source.

All in all, I would agree with you: ease of management is a good selling
point. I would like to see more powerful reporting tools, however, and more
flexibility with NFS export creation. I hope to test the snapshot feature
this week, and maybe a few other features....

I hear what you say about doing the evals. They are time-consuming and
require a lot of time to plan and implement. If you have large data sets to
move, it is an even bigger pain in the arse. But these tests are necessary,
and I cannot understand those who do not bother with them. Next week, after
the holiday, I will start round two, with a BlueArc eval. As you said, it
did not take much arm-twisting; they are all eager to move product these
days.

I am curious to know whether you use Aspera as an alternative to a hardware
WAN accelerator. I would like to hear more about this, as I am looking at
various hardware-based WAN accelerator products at this time for multi-site
replication.

I will keep you posted.

Regards.

Jeff





