Message of the day:
Welcome.
Latest release: https://www.silverstripe.org/download
Community Forum: https://forum.silverstripe.org
Features: https://forum.silverstripe.org/c/feature-ideas
Archive: https://slackarchive.silverstripe.org
If you have any SilverStripe related questions, please supply the version of Framework you're using.
Did you flush? 🚽
Archive temporarily at https://archive.codingplayground.nl (redirect)
https://damienflament.github.io/phpunit-api-docs/5.7/ is great for the version Silverstripe has to use.
oh, for v9? maybe not.
the official docs don’t do a particularly good job of the mock object api
What version of PHPUnit?
Was there something you were trying to do that didn't make sense?
This blog is pretty good at explaining the relationship between stubs and injection (which Silverstripe uses)
PHP Unit introduction series
does anyone know if there are PHPUnit API docs available anywhere
Pretty sure Heyday had a module for that too, although might be SS3 based and unmaintained.
It's been done before
that looks almost the same as what I was doing, but it looks to work for SS3; it would have issues in SS4
What files are you having issues with?
Depending on the system, you may need to take different approaches.
E.g. for Asset (/assets), I would recommend using a single datasource mounted to your multi-server architecture.
This way you don't have to worry about replication or differences between servers.
yeah there is one mounted but its replication is about a second or 2, leading to the issue.
@rudiger When you say replication, are you talking about delays in updates? Or are you syncing between 2 different file systems?
E.g. Do you have A (e.g. Network Attached Storage): Filesystem A -> Mount Server A, Filesystem A -> Mount Server B
or B (two separate filesystems): Filesystem A -> Mount Server A, Filesystem B -> Mount Server B
network file store and two servers in separate data centres
The replication lag can be a few seconds
when you load the CMS it combines the required files
and has a ?m=123456546
m = timestamp of the request
Where does it put those combined files? (E.g. what directory?)
the assets/_combinedfiles directory
not sure the exact one atm but it lives in the assets folder
which I’ve got as a mounted NFS
Where is your Network storage mounted to?
(assets/ or the whole codebase?)
whole codebase would create a bottleneck
You're unfortunately not going to fix this issue with code (as the delay is going to cause problems that code won't fix).
So focus should be on reducing the delay in your infrastructure
I’ve fixed in SS3
doing the hash rather than the timestamp means it's unique per code version (which only changes per deploy) rather than per time.
Only way I can think of doing it is to have Silverstripe "regenerate" the Combined assets if the "Hashed" value of the content does not match the Hashed value in the request.
But this would add additional stress onto the request workflow (generating assets) for when syncing is too slow
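A minimal sketch of that check, assuming the hash travels in the request as the `?m=` value and the file content is hashed with sha1_file(). The function name and parameters here are illustrative, not framework API:

```php
<?php
// Hypothetical helper: decide whether a combined file must be regenerated.
// Returns true when the file is missing (e.g. NFS sync lag) or its content
// hash no longer matches the hash the request asked for.
function needsRegeneration(string $combinedFilePath, string $requestedHash): bool
{
    return !is_file($combinedFilePath)
        || sha1_file($combinedFilePath) !== $requestedHash;
}
```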
one thing you have to do is warm the cache
after each deploy
Do you not have any dynamic content in your "CombinedFiles"? (E.g. input fields in the CMS that affect the content of these files)
it’s only a problem after a code deploy where the code changes
E.g. Fields that allow you to put custom JS into
Then pre-warming your CombinedFiles is probably the best way to resolve this.
You can use the Hash method as well (like with SS3), but this will add "regeneration overhead" to your request cycle.
By pre-warming, it will create the files during deployment (rather than on-request), which means the 2-second sync delay should not be an issue
I tried that in SS3 without the hash function and it still caused issues
That would only cause issues if the files were stored outside of your network storage (E.g. different on each server) OR they were requested before it had time to update on storage.
looking at your code for multi-server we did the same thing and it fixes it without performance issues
There is one part confusing me. Under what condition would the timestamp change on the CombinedFiles?
E.g. Under what action would you get an issue?
Is it because when a user "Saves" changes to a CMS page, it forcefully re-generates these files (giving them a new timestamp)?
If so, you could potentially look at having the CMS Admin system only update these files if the content has changed (Comparing new file content with existing file content).
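That comparison could be as simple as the following sketch (writeIfChanged() is a hypothetical helper, not a CMS API):

```php
<?php
// Hypothetical helper: only rewrite the combined file when its content has
// actually changed, so the mtime (and therefore ?m=) stays stable across
// no-op saves in the CMS.
function writeIfChanged(string $path, string $content): bool
{
    if (is_file($path) && sha1_file($path) === sha1($content)) {
        return false; // identical content: keep the existing file and mtime
    }
    file_put_contents($path, $content);
    return true; // file was created or its content changed
}
```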
But unfortunately I am not as fluent in SS4 as SS3 currently (Working mostly on infrastructure now).
My recommendation would be to work on that 2-second delay in your Network storage. Anything greater than 10ms is pretty bad performance for NFS. If you resolve this issue, then all your problems should be solved.
Also looks like the latest version of SS4 actually uses SHA1 hashes rather than Modified Date/Timestamps anyway.
from memory what happens is the CMS loads the HTML, which requires JS, e.g. test.js?m=123
In that process it's combining the file test.js. The browser then requests test.js?m=123 from a different server (this bit I'm sketchy on because it's been 3 years); because this file doesn't exist there yet, it returns an HTML JS include of test.js?m=124, which combines test.js again, and it just keeps bouncing from server to server
even if the NFS delay is only a few ms, you run the risk of it incrementing the second counter. Just looking at that link you sent
mmmm, I wonder if that fixes it anyway, guessing it’s a check for if the file exists before trying to create it
guess I won’t know until I test it behind a LB
m= is not a counter.
In SS3 - It tracked specifically the "modified date" of the file (E.g. last time it was modified).
In SS4 - It appears to track the Hash of the file content (SHA1). So the only time this would change is if the content of the file changes.
Are you having issues in SS4? Or have you yet to test it?
The "multi-server" module I created for SS3 was to change the m= functionality to "Hash" (like SS4) rather than modified date.
yeah the m in my example is just meant to represent a timestamp, couldn’t be bothered typing a correct one
SS4 has a lot of references to it hashing and the doco even talks about the config needed for a distributed environment but in the front end it still has ?m=timestamp
but it’s possible it’s fixed it on the backend rather than doing an md5/sha1 on the frontend
I’ll deploy it and see what happens, might revisit this if it continues to be a problem
What do you mean by "Front end it still has ?m=timestamp"?
Are you sure it's not ?m=Hashstring?
an example js path: resources/silverstripe/campaign-admin/client/dist/js/bundle.js?m=1584438524
That's not a CombinedFile?
That is a composer install asset from "admin" module
was just an example
Is there any other Requirements_Backend in your codebase other than the default Framework one?
What framework version are you using?
sorry back. Not atm, I'm trying to get the one from SS3 working but there's still issues.
What happens if you remove the module completely?
it’s not on for what I’m working on atm, the m=1584423458 is standard behaviour
In framework 4.5.1 - it should use a sha1 hash of files rather than the timestamp.
it’s not being called when I’m loading things that didn’t work in SS3. I’ll have to do more testing on a LB environment to see whats going on.
thanks for your help though
Ahh I found out why.
For combined files you will get a Hash result.
Then you just need to tell it to load your Generator.
- class: SilverStripe\Control\MyResourceURLGenerator
- NonceStyle: mtime
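Pieced together, that Injector config would presumably look something like the following YML. SimpleResourceURLGenerator is SS4's default generator being overridden; MyResourceURLGenerator is the hypothetical custom class from the fragments above:

```yaml
SilverStripe\Core\Injector\Injector:
  SilverStripe\Control\SimpleResourceURLGenerator:
    class: SilverStripe\Control\MyResourceURLGenerator
    properties:
      NonceStyle: mtime
```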
But that is a bit "hacky".
You would probably want to replace setNonceStyle() to allow for 'sha1_file'.
Then in urlForResource() add a new case statement to the switch ($this->nonceStyle) to handle it.
Then you could set NonceStyle: sha1_file in your YML to make it cleaner.
thanks a heap for that, going to give that a try.
👍 It would be even better if you create a PR back to framework to add this functionality in to the framework by default.
oh ok? I've worked with other frameworks that use the hash rather than the timestamp, due to time being something that's not reliable for identifying if a file has changed.
why do sha1 rather than md5?
I know for passwords but we’re just looking for something reliably unique given x content
there’s a higher chance of collision but the chances of 2 files having a collision given the large amount of content would be pretty low
sha1 is the faster method for hashing compared to md5
> MD5 is 7.6% slower than SHA-1 for short strings and 1.3% for longer strings.
Through PHP they seem about the same speed. You could add 2x case options to your switch (1 for sha1_file and 1 for md5_file) so that people could choose.
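Pieced together, that extended switch might look like this standalone sketch. resourceNonce() is an illustrative stand-in for the logic inside urlForResource() (which really lives on SimpleResourceURLGenerator), not the framework's actual method:

```php
<?php
// Illustrative stand-in for the nonce logic inside urlForResource():
// compute the ?m= value for a resource according to the configured style.
function resourceNonce(string $path, string $nonceStyle): string
{
    switch ($nonceStyle) {
        case 'mtime':
            return (string) filemtime($path); // SS4 default behaviour
        case 'sha1_file':
            return sha1_file($path);          // stable across servers/clocks
        case 'md5_file':
            return md5_file($path);           // alternative hash choice
        default:
            throw new InvalidArgumentException("Invalid nonce style: $nonceStyle");
    }
}
```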
well there you go, didn’t know that
thought it would be the opposite
reading a bit more into it, it might be more about CPU architecture. I think more modern ones have improved SHA-1 performance, which makes sense.
oh nice, thanks for that. I had got it all working a couple of hours ago but better as a core feature
Yeah, have a look at Brett's repo
hey all, when dealing with SS3 in a distributed environment I had to write a plugin that created a hash of the file rather than using a timestamp, because with a timestamp one server would report a slightly different time than the other, which would cause an infinite loop of trying to load the assets. SS4 seems vastly different in the backend processing of those files, so I'm having some issues modifying it; the docs sort of mention this with some config flags, but they don't do anything. Anyone have knowledge about this? Doco is at https://docs.silverstripe.org/en/4/developer_guides/templates/requirements/