First of all, thanks for reading this. Your input is invaluable and much appreciated. I'll keep this as simple and concise as possible.
[b:1rdo4rit]Here is the basis for my query:[/b:1rdo4rit]
– Focus on distributed applications: lightweight or heavyweight agents that act on system activity or server triggers, or simply a server daemon/service.
– The client/agent or server runs on a customer's machine.
– The client/agent or server must reside politely on that machine. By politely I mean low memory, CPU, and bandwidth utilization, i.e. a negligible or at least consistent, predictable impact on the machine's runtime performance.
[b:1rdo4rit]Here is a common pattern found when focusing on automated testing with the above focus:[/b:1rdo4rit]
– Functional automated tests run against the client/server on every regression run.
– A code change causes a performance hit, but the automated tests still pass.
[b:1rdo4rit]What is needed:[/b:1rdo4rit]
– A way to correlate and easily visualize results alongside system utilization metrics during the test period.
– A way to fail a test if some system utilization metric is above some predetermined limit.
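To make the second point concrete, here is a minimal, self-contained sketch of the "fail if a utilization metric exceeds a limit" idea. Everything in it is illustrative (the `MetricWatchdog` class, `run_with_limit`, and the stubbed metric source are names I made up, not an existing tool); in practice `read_metric` would wrap a real collector such as psutil or `/proc` counters:

```python
# Sketch: sample a system metric on a background thread while a test runs,
# then fail the test if the peak sample exceeds a predetermined limit.
# "read_metric" is a stand-in for whatever collector you use; here it is
# stubbed with canned values so the example runs anywhere.
import threading
import time


class MetricWatchdog:
    """Samples a metric on a background thread and records the peak value."""

    def __init__(self, read_metric, interval=0.01):
        self.read_metric = read_metric
        self.interval = interval
        self.peak = float("-inf")
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        while not self._stop.is_set():
            self.peak = max(self.peak, self.read_metric())
            time.sleep(self.interval)

    def __enter__(self):
        self._thread.start()
        return self

    def __exit__(self, *exc):
        self._stop.set()
        self._thread.join()


def run_with_limit(test_fn, read_metric, limit):
    """Run test_fn while sampling; return (passed, peak_metric_value)."""
    with MetricWatchdog(read_metric) as wd:
        test_fn()  # the functional test itself
    return wd.peak <= limit, wd.peak


# Stubbed "CPU %" source for the demo: a short burst up to 90, then idle at 10.
samples = iter([10, 40, 90, 30])
passed, peak = run_with_limit(
    test_fn=lambda: time.sleep(0.3),          # stand-in for a real test body
    read_metric=lambda: next(samples, 10),    # canned values, then steady 10
    limit=80,
)
print(passed, peak)  # peak of 90 exceeds the 80% limit, so passed is False
```

The same watchdog could log timestamped samples instead of just the peak, which gives you the raw series to chart alongside the functional results.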
[b:1rdo4rit]As an engineer I want to have a test suite run overnight, every night, and see what my changes have done to the product from both a functional and performance aspect.[/b:1rdo4rit]
Is there something already out there?
Have you had a similar need? What was your solution if any?
Should I build a custom solution and open source it? Could others use something like this?
I have seen systems that do this to some extent, but with no usable UI or simple test interface, so you need expert users to interpret the raw data before you can even call a result pass or fail. And where nothing like this existed, developers constantly asked for it.
You may say, “Why not monitor the client application's resource usage itself?” That works fine for simple/small applications, but when you have multiple components and/or the product touches the kernel too…no dice.