Hi,
we have to (…) migrate our file servers to SharePoint (and whatever isn't allowed there goes to an expensive new network file server).
Right now, it looks like we're going to work like this (roughly speaking):
– each department receives a 1 TB external hard disk (for its share(s) on the network file servers);
– inventory using TreeSize Pro to identify long file/path names, oversized files, non-SharePoint-compatible extensions, etc.;
– back up the share to the external hard disk (full copy first, later only the changes);
– split the backup into two twin folder structures: (a) SharePoint-compatible files and (b) everything else
(using a script, in Java, with an extension list and a maximum file size as criteria; a rough sketch follows below this list);
– upload (a) to the SharePoint environment and (b) to the new file server. How exactly: not clear yet. No test account yet…
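For reference, the split script currently does roughly the following (a simplified sketch: the extension list, the 2 GB cap and the command-line arguments below are placeholders, not our real criteria):

    import java.io.IOException;
    import java.nio.file.*;
    import java.nio.file.attribute.BasicFileAttributes;
    import java.util.Set;

    public class SplitBackup {

        // Placeholders: use the extension list and size limit your
        // SharePoint environment actually enforces.
        static final Set<String> BLOCKED_EXTENSIONS = Set.of("exe", "bat", "tmp", "pst");
        static final long MAX_FILE_SIZE = 2L * 1024 * 1024 * 1024; // 2 GB

        public static void main(String[] args) throws IOException {
            Path source = Paths.get(args[0]);      // the backup on the external disk
            Path sharepoint = Paths.get(args[1]);  // twin tree (a): SharePoint-compatible
            Path fileserver = Paths.get(args[2]);  // twin tree (b): everything else

            Files.walkFileTree(source, new SimpleFileVisitor<Path>() {
                @Override
                public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
                        throws IOException {
                    Path relative = source.relativize(file);
                    Path target = (isBlocked(file, attrs) ? fileserver : sharepoint)
                            .resolve(relative);
                    Files.createDirectories(target.getParent());
                    // move, not copy: on the same volume this is a rename,
                    // so no file data is rewritten
                    Files.move(file, target, StandardCopyOption.REPLACE_EXISTING);
                    return FileVisitResult.CONTINUE;
                }
            });
        }

        static boolean isBlocked(Path file, BasicFileAttributes attrs) {
            String name = file.getFileName().toString().toLowerCase();
            int dot = name.lastIndexOf('.');
            String ext = dot >= 0 ? name.substring(dot + 1) : "";
            return BLOCKED_EXTENSIONS.contains(ext) || attrs.size() > MAX_FILE_SIZE;
        }
    }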
We're dealing with up to hundreds of GB and hundreds of thousands of files, depending on the share in question…
At first sight, it looks like this will take roughly several days.
Question: how can we speed this up as much as possible?
More specifically, I’m thinking about the split…
I understand that there are different factors:
(– script language: probably not much of a difference, since the job is I/O-bound rather than CPU-bound?)
(– script procedure: using the much faster 'move' command instead of 'copy'; see the test sketch after this list; …)
(– port speed: not a factor for the split itself; it only determines the speed of the backup: http://www.pcmag.com/article2/0,2817,2358135,00.asp)
– the speed of the hardware itself…
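One thing worth knowing on the move-vs-copy point: a move is only a cheap rename when source and target are on the same volume; across volumes it silently degrades to copy + delete. So the split should write its two twin trees onto the same disk as the backup. A throwaway test to see the difference for yourself (the path argument is a placeholder):

    import java.nio.file.*;

    public class MoveVsCopy {
        public static void main(String[] args) throws Exception {
            Path src = Paths.get(args[0]);  // some large test file
            Path copied = src.resolveSibling("copied.tmp");
            Path moved = src.resolveSibling("moved.tmp");

            long t0 = System.nanoTime();
            Files.copy(src, copied, StandardCopyOption.REPLACE_EXISTING); // rewrites every byte
            long t1 = System.nanoTime();

            Files.move(src, moved);  // same volume: just a rename of the directory entry
            long t2 = System.nanoTime();

            System.out.printf("copy: %d ms, move: %d ms%n",
                    (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
        }
    }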
Would it make a huge difference to perform this on an SSD vs. a 'regular' external hard drive?
Or… since SSDs are still so expensive… any other suggestions?
Maybe it’s a stupid question… but anything that can speed up the process would definitely help :)…
PS: Because the split moves files out of the backup, I'll probably work with a double backup… so if something goes wrong, we still have a complete and original copy.
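Before anything gets deleted, I'll probably also run a quick consistency check: walk the untouched second backup and flag every file that didn't end up in exactly one of the two twin trees (again just a sketch; the arguments are placeholders):

    import java.io.IOException;
    import java.nio.file.*;
    import java.nio.file.attribute.BasicFileAttributes;

    public class VerifySplit {
        public static void main(String[] args) throws IOException {
            Path original = Paths.get(args[0]);    // the untouched second backup
            Path sharepoint = Paths.get(args[1]);  // twin tree (a)
            Path fileserver = Paths.get(args[2]);  // twin tree (b)

            Files.walkFileTree(original, new SimpleFileVisitor<Path>() {
                @Override
                public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
                        throws IOException {
                    Path rel = original.relativize(file);
                    boolean inA = sameSize(sharepoint.resolve(rel), attrs.size());
                    boolean inB = sameSize(fileserver.resolve(rel), attrs.size());
                    // a file should be in exactly one twin tree;
                    // flag it if it is in both or in neither
                    if (inA == inB) {
                        System.out.println("CHECK: " + rel);
                    }
                    return FileVisitResult.CONTINUE;
                }
            });
        }

        static boolean sameSize(Path p, long expected) throws IOException {
            return Files.exists(p) && Files.size(p) == expected;
        }
    }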