bergman on 3 Jan 2014 09:37:49 -0800

 Re: [PLUG] calculating cost of a process

In the message dated: Fri, 03 Jan 2014 11:49:08 -0500,
The pithy ruminations from Daniel Aharon on
<[PLUG] calculating cost of a process> were:
=> So I'm tasked with determining the monetary cost of each execution run
=> of a process on a Win2008R2 box, so management can go to investors and
=> say "Our method costs XX cents per file".

What's the goal here? For example, does management want to present:

a low number (even if this actually represents a loss, i.e.,
to make the method appear very efficient in order to gain
more business, which would include charges not directly
related to the cost of executing each process)

an accurate number

a high number (for example, to secure more direct funding from
investors to buy a faster server)

Each answer requires different factors in the equation.

Try to get information about the purpose of the answer and the intended
audience...there's a very good chance that management doesn't really want
a technically accurate number to present to investors, but rather a figure
that treats the actual costs as only one part of the answer.

=>
=> My guess at one way to do this is:
=>
=> (cost of server)+(fractional cost of colo resources)
=> 	time the process takes
=>

Don't forget the fractional cost of software licenses, backup media,
service contracts, etc. for the server, and charges for persistent storage
(disks/tape that grow over time) and colo charges for data transfers
(if that's a significant part of the method)....and then there's the
biggest cost, as shown below:

(cost of server)+(fractional cost of colo resources)+D+M
	time the process takes

where "D" is the personnel cost of developing the process (if any) and
"M" is the personnel cost of managing the server. These will likely be
many times greater than the other factors: your time (and development
costs) are likely much more expensive than the hardware & colo charges.
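As a back-of-the-envelope sketch of that formula (every number below is a
made-up assumption, not a real figure from anyone's colo bill):

```python
# Hypothetical per-run cost sketch; all dollar figures and counts are
# invented assumptions purely for illustration.
SERVER_COST = 5000.00       # purchase price of the server
COLO_FRACTION = 1200.00     # this server's share of colo charges, per year
DEV_COST = 15000.00         # "D": one-time personnel cost to develop the process
ADMIN_COST = 8000.00        # "M": annual personnel cost to manage the server

LIFETIME_YEARS = 3          # assumed usable life of the server
RUNS_PER_YEAR = 50_000      # assumed number of billable process executions

total_cost = (SERVER_COST
              + COLO_FRACTION * LIFETIME_YEARS
              + DEV_COST
              + ADMIN_COST * LIFETIME_YEARS)
cost_per_run = total_cost / (RUNS_PER_YEAR * LIFETIME_YEARS)
print(f"cost per run: ${cost_per_run:.4f}")
```

Note how D and M dominate: the one-time development cost alone is three
times the hardware in this toy example.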

You might want to arrive at $answer another way.

Either you or management should determine the income required to cover
these costs over the usable lifetime of the server:

(cost of server hardware)+(fractional cost of colo resources)+
(hardware & software maintenance contracts for the server)+
(data transfer costs)+
(cost of persistent data storage)+
(licensing)+
(consumables)+
(personnel costs)+
(profit)

Then, based on usage, determine the cumulative execution time for the
income-generating processes run on the server.  Finally, do some simple
math to arrive at a completely arbitrary charge-back rate of dollars
per CPU-minute to fund the server over its expected life.
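The "simple math" looks something like this (again, every line item and the
CPU-minute count are invented assumptions):

```python
# Hypothetical charge-back sketch; every number is an assumption.
# Annual costs, following the line items listed above:
annual_costs = {
    "server hardware (amortized)": 1700.00,
    "colo fraction": 1200.00,
    "maintenance contracts": 900.00,
    "data transfer": 300.00,
    "persistent storage": 600.00,
    "licensing": 450.00,
    "consumables": 150.00,
    "personnel": 8000.00,
    "profit": 2000.00,
}
required_income = sum(annual_costs.values())

# Assumed cumulative CPU time of income-generating runs, per year.
cpu_minutes_per_year = 120_000

rate = required_income / cpu_minutes_per_year
print(f"charge-back rate: ${rate:.4f} per CPU-minute")
```

Divide annual required income by annual billable CPU-minutes and you have
your arbitrary-but-defensible rate.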

Or, just skip all of that and find some CPU/minute costs for comparable
machines (Amazon cloud, Rackspace cloud, etc.) and base your costs on the
rates charged by your competition.

=> I know how I'd time a process on Linux (using "time"), but does anyone
=> have a suggestion for a Windows equivalent?
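PowerShell's Measure-Command cmdlet will give you elapsed time for a script
block. Alternatively, if Python happens to be on the box, a portable sketch
(the command shown is a stand-in, not your real process) is:

```python
import subprocess
import sys
import time

# Stand-in command; replace with the actual process being costed.
cmd = [sys.executable, "-c", "print('hello')"]

start = time.perf_counter()
result = subprocess.run(cmd, capture_output=True, text=True)
elapsed = time.perf_counter() - start

print(f"wall-clock: {elapsed:.3f}s, exit status {result.returncode}")
```

This only measures wall-clock time, not the user/sys CPU split that Linux
"time" reports, but for charge-back purposes elapsed time may be what you
want anyway.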