haserres.blogg.se

Emcopy documentation
  1. EMCOPY DOCUMENTATION HOW TO
  2. EMCOPY DOCUMENTATION FULL

EMCOPY DOCUMENTATION HOW TO

Our company migrated from Celerra to Isilon about two years ago. From time to time we receive requests from application teams to clone a production environment to a QA or dev instance on the same Isilon cluster. Back in the Celerra/VNX days we used nas_copy, which let us perform file-system-to-file-system copies, but after migrating to Isilon we had to figure out how to accomplish the same thing using Isilon utilities.

Up to this point we had to rely on host-based tools such as emcopy or rsync, which is not very convenient considering that you have to have a "proxy" server available to perform these copies. I was also quite confident that using the cluster's internal tools would be more efficient and faster. After looking around the Isilon Google Groups, I found my solution.

EMCOPY DOCUMENTATION FULL

Isilon cluster: 6 x 108NL nodes, each node with 2 x 10G NICs and 2 x 1G NICs (LACP). I decided to test a few different scenarios to see which one would give me the best performance. Here is the InsightIQ information on the directory I am using in all three scenarios: it contains data from a learning system, so a lot of tiny little files.

Scenario 1 – Using SyncIQ with the loopback address 127.0.0.1

I created my SyncIQ job and specified 127.0.0.1 as the target cluster IP address. Here are the policy details:

Target content aware initial sync (diff_sync): no

I went ahead and started the job, but I was really curious which interface it would use to copy the data. I let the job run for about 15 minutes, and this is what I saw in InsightIQ (performance reporting section). Very interesting: SyncIQ decided to use the 1G interfaces. I was also happy to see that the workload was distributed among all 6 nodes of the cluster. Even though the SyncIQ settings were left at their defaults (workers, file operation rules), look what it did to my cluster's CPU utilization: a pretty big spike. I started the job at 3:15pm and it completed at 6:30pm, a total of 3 hours 15 minutes; not bad at all for a full copy.

Scenario 2 – Using SyncIQ with a SmartConnect zone name

In this test I wanted to see whether performance would be any different if I used a SmartConnect zone name of my cluster that utilizes the 10G NICs. Before running this test I deleted the old policy and removed the data from the /ifs/data/qa directory using the "treedelete" command (see the bottom of this post for instructions). Here is my SyncIQ job that uses the local SmartConnect zone name.

I started the job and let it run for 15 minutes; this is what I saw in InsightIQ this time. This is what I expected to see: SyncIQ was using the 10G network interfaces. A quick look at CPU utilization showed the same picture as before; SyncIQ is a very CPU-intensive process. I started the job around 11:30pm and it completed at 2:30am, so 3 hours for the full copy.

Scenario 3 – Using rsync through a proxy server

In this scenario I wanted to test and document the performance of using rsync on a host that is acting as a "proxy" server.
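For reference, a local SyncIQ copy policy like the ones discussed here can be created from the OneFS CLI. The exact flags vary between OneFS releases, so treat this as a hedged sketch rather than the exact commands from the tests; the policy names, paths, and zone name below are placeholders, not taken from the post. These are cluster-side CLI fragments, so check `isi sync policies create --help` on your own release.

```shell
# Hedged sketch (OneFS 7.x/8.x-style syntax): create a SyncIQ policy that
# copies /ifs/data/prod to /ifs/data/qa on the SAME cluster, using the
# loopback address as the target host (the loopback scenario).
isi sync policies create qa-clone sync /ifs/data/prod 127.0.0.1 /ifs/data/qa

# Same idea, but targeting a SmartConnect zone name that resolves to the
# 10G interfaces ("sczone.example.com" is a placeholder).
isi sync policies create qa-clone-10g sync /ifs/data/prod sczone.example.com /ifs/data/qa

# Kick off the initial (full) sync and review the reports afterwards.
isi sync jobs start qa-clone
isi sync reports list
```

The key design point is that the target host controls which front-end interfaces SyncIQ connects to: a loopback or 1G-resolving name lands the traffic on the slower NICs, while a SmartConnect zone bound to the 10G interfaces steers the copy onto them.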


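The "treedelete" cleanup mentioned before the SmartConnect test is normally run through the OneFS job engine, which deletes a directory tree in parallel across the cluster. A hedged sketch, assuming an OneFS 8.x-style CLI (this is a cluster-side fragment; verify the flags on your release):

```shell
# Hedged sketch: queue a TreeDelete job to remove /ifs/data/qa in parallel
# across all nodes -- much faster than `rm -rf` for millions of tiny files.
isi job jobs start TreeDelete --paths=/ifs/data/qa

# Watch the running jobs until the delete completes.
isi job jobs list
```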


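For comparison, the rsync-over-a-proxy approach boils down to mounting both Isilon paths on a separate host and copying between the mounts. A minimal sketch, assuming both NFS exports are already mounted on the proxy; the mount points are examples, not taken from the post:

```shell
# Hedged sketch: on the proxy host, /mnt/prod and /mnt/qa are assumed to be
# NFS mounts of the source and destination Isilon paths (example paths).
# -a preserves permissions/ownership/timestamps, -H preserves hard links,
# --numeric-ids avoids uid/gid remapping between hosts.
rsync -aH --numeric-ids /mnt/prod/ /mnt/qa/
```

Note the trailing slashes: `rsync` copies the *contents* of `/mnt/prod/` into `/mnt/qa/` rather than creating a nested `prod` directory. The inconvenience the post describes is visible here: every byte must travel cluster → proxy → cluster over the proxy's NICs, instead of moving inside the cluster as SyncIQ does.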