theruss

XHGui is what I last used; from memory it's a wrapper project that stores the profiling data in MongoDB and puts a nice-ish GUI on top of XHProf itself

theruss

https://github.com/perftools/xhgui https://www.php.net/manual/en/book.xhprof.php https://github.com/phacility/xhprof
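For context, a minimal sketch of what the XHProf extension itself captures, assuming the classic xhprof extension is loaded. XHGui normally handles collection and storage (MongoDB) for you, so the save step and the `expensive_page_render()` function here are illustrative only:

```php
<?php
// Minimal XHProf sketch: profile a block of code and dump the raw data.
// Assumes the classic xhprof extension is installed; in practice XHGui's
// collector wraps these calls and persists the result to MongoDB.

xhprof_enable(XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY);

// ... the code you want to profile, e.g. a full page render ...
expensive_page_render(); // hypothetical function standing in for your request

// Returns hierarchical call data: wall time, CPU and memory per caller==>callee pair
$data = xhprof_disable();

// Just inspecting the raw output here; XHGui would store and visualise it for you
file_put_contents('/tmp/xhprof.json', json_encode($data));
```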

adrianstein

@Aaron Cooper @null the main tables are SiteTree_versions, Page_versions and Product_versions. I have a shop site with around 30k products (so pages), and every night they update pricing, a name or a description, which writes a new version record for almost every page. They also have a fairly big mega menu with lots of pages listed, which I'm sure slows it down. I was wondering if a combo of removing old records and making the menu static would help, or if there is something else? @theruss Do you have any links to XHProf that I can look into?
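On the "removing old records" idea, a rough sketch of pruning version rows while keeping the newest few versions per record. It assumes SS3-style DB::query (in SS4 the class lives under SilverStripe\ORM\DB), the standard Versioned columns (RecordID, Version), and an arbitrary retention count of 5; back up and test on a copy of the database before running anything like this:

```php
<?php
// Rough sketch: prune old version rows, keeping the newest 5 versions per record.
// KEEP_VERSIONS and the table list are assumptions. The logic relies on the fact
// that Version numbers increment per RecordID under the Versioned extension.
const KEEP_VERSIONS = 5;

// Page_versions and Product_versions share RecordID/Version with SiteTree_versions,
// so each table needs the same pruning.
foreach (['SiteTree_versions', 'Page_versions', 'Product_versions'] as $table) {
    DB::query(sprintf(
        'DELETE v FROM %1$s AS v
         JOIN (
             SELECT RecordID, MAX(Version) - %2$d AS MinKeep
             FROM %1$s
             GROUP BY RecordID
         ) AS latest ON latest.RecordID = v.RecordID
         WHERE v.Version <= latest.MinKeep',
        $table,
        KEEP_VERSIONS
    ));
}
```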

Aaron Cooper

E.g. vanilla SS3.6 on PHP 5.6 requests per second vs SS3.6 on PHP 7.1 requests per second. The benchmark is the same application on different PHP versions, not combos of SS and PHP.

Aaron Cooper

Same way all the others do. It's just a simple before and after focussing on PHP

theruss

I'd imagine that test would be skewed. Like how would you tell whether a perf improvement on SS4.4 on PHP7.3 over some other combo was down to SS itself or the PHP version?

Aaron Cooper

Anyone seen any benchmarking done on any version of SS on PHP5 vs PHP7? Found lots of benchmarks for Wordpress, Drupal, Joomla and other nightmares. But no SS.

theruss

Tools such as XHProf will yield fascinating results 🙂

theruss

If you're having performance issues, your #1 port of call should always be to gain evidence, which will then direct you to where problems actually exist.

null

I don't think it's the size that matters; it's how it's accessed and how often. Maybe a good chunk of that DB is 8GB of address records that form a lookup table. With certain columns indexed, the large dataset won't be a huge issue.

With SilverStripe, I've found the largest tables are the *_versions tables, which can slow things down if you rely heavily on versioning
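On the indexing point, a hedged sketch of how you'd declare indexes on a SilverStripe DataObject so that frequently filtered columns on a big lookup table get real DB indexes at dev/build. The AddressLookup class and its fields are hypothetical, and this is SS3-style (global DataObject; SS4 needs the namespaced import):

```php
<?php
// Hypothetical lookup-table DataObject illustrating the indexing point above.
// Columns used in frequent filters/joins are declared in $indexes so that
// dev/build creates database indexes for them; class and field names are made up.
class AddressLookup extends DataObject
{
    private static $db = [
        'Postcode' => 'Varchar(10)',
        'Suburb'   => 'Varchar(100)',
        'City'     => 'Varchar(100)',
    ];

    // true => a standard single-column index
    private static $indexes = [
        'Postcode' => true,
        'Suburb'   => true,
    ];
}
```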