Drive Bender is designed to take a number of hard drives (or volumes) and present them to the system as a single seamless volume. This volume can be exposed as a mounted drive letter or a network share path. On the surface this may sound complex, and users may wonder how we (Division-M) deal with that complexity... well, I'm hoping to answer some of those questions here.
First of all, let's go over the architecture. The following diagram details the key Drive Bender components and how they relate to each other.
The Drive Bender components (red) are made up of a file system driver and a system service. This design allows Drive Bender to scale well without sacrificing performance (i.e. the driver does not need to know about the pool as such, resulting in less kernel mode processing, which leads to a more responsive and reliable system).
One of the key aspects of this design is that Drive Bender performs no low level drive manipulation; all pooled drive access is done using the same Windows APIs that were called by the top level application. The results of these Windows API calls are passed straight back up to the calling application, so in essence, Drive Bender is providing a conduit to a specific drive in the pool.
An example of a typical I/O operation
Take a file write, for example. This involves opening a file (whether it already exists or not), then sending "packets" of data to that file.
The top level application calls the open file API, and this call makes its way to the Drive Bender service, where the service determines whether the file exists. It does this by checking each drive in the pool; if the file is found, a handle for that file is returned (again using the very same API the calling application used). If the file is not found, and the incoming call allows it, a new file is created and a handle for that file is returned (it should be noted that Drive Bender uses some clever caching techniques to ensure that this file search is only performed once in a given period).
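The open-file step above can be sketched in a few lines. This is a minimal, hypothetical model of the behaviour described (search every pool drive, return a handle, create on miss, cache the search result for a period) — the class name, in-memory "drives", drive-selection rule, and cache timeout are all illustrative assumptions, not Drive Bender's actual implementation:

```python
import time

class PoolOpenSketch:
    """Hypothetical sketch of the pooled open-file logic described above."""

    def __init__(self, drives, cache_ttl=5.0):
        self.drives = drives        # e.g. {"D:": {"a.txt": b"..."}, "E:": {}}
        self.cache_ttl = cache_ttl  # how long a search result stays cached
        self._cache = {}            # path -> (owning drive, lookup time)

    def open_file(self, path, create_if_missing=True):
        # Serve from the cache if the previous search is still fresh, so the
        # per-drive search is only performed once in a given period.
        hit = self._cache.get(path)
        if hit and time.monotonic() - hit[1] < self.cache_ttl:
            return (hit[0], path)

        # Check each drive in the pool for an existing copy of the file.
        for drive in self.drives:
            if path in self.drives[drive]:
                self._cache[path] = (drive, time.monotonic())
                return (drive, path)  # the "handle": which drive owns the file

        # Not found: if the incoming call allows, create the file on a pool
        # drive (here: the emptiest one, an arbitrary illustrative policy).
        if create_if_missing:
            target = min(self.drives, key=lambda d: len(self.drives[d]))
            self.drives[target][path] = b""
            self._cache[path] = (target, time.monotonic())
            return (target, path)
        raise FileNotFoundError(path)
```

For example, opening `a.txt` against a pool where only `D:` holds it returns a handle backed by `D:`, while opening a brand-new file lands it on one of the pooled drives.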
The top level application then receives the result of the open file call and calls the write file API using the returned handle. This call is received by the Drive Bender service, which, again using the same API as the calling application, writes the buffered data to the file.
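The write step can be sketched the same way: the service only needs to map the handle it handed out back to the file on the owning pool drive, then perform an ordinary write on that file. The function and mapping below are illustrative assumptions, not Drive Bender internals:

```python
def write_through(handle_map, handle, data):
    """Sketch of the write marshaling described above: look up which pooled
    file the handle refers to, then write with the ordinary file API."""
    real_path = handle_map[handle]   # handle -> file on a specific pool drive
    with open(real_path, "ab") as f: # the same kind of API the caller used
        f.write(data)
    return len(data)                 # bytes written, passed back to the caller
```

The point the sketch makes is that nothing about the data is transformed in flight; the service's only job at this stage is routing the call to the right drive.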
Again, the important thing to note here is that the Drive Bender service is using pretty much the same API as the calling application; the only difference is that it marshals which drive the calling application is talking to.