So, I'm going through the YouTube research paper - overall, I'm not sure it fully accounts for ARMY streaming behaviour. Moreover, the conclusions are probably quite dated, the paper having been published in 2016. I'll still share the conclusions that I think still apply.
This is the title of the paper. I'm not linking it here so as not to mess with the paper's mentions and citations, but the link/PDF is available if you DM me.
1. Platform:
This is a plot of R_FN, which for our purposes is the proportion of views from the research botnet that got counted in a simple experiment. YouTube appears to be orders of magnitude stricter than the other platforms.
Takeaway: assuming the same monetization, watching embeds on Weverse might be more beneficial, regardless of ARMY's streaming methods.
2. False positives
It appears that recruiting humans to simulate simple streaming behaviour (across sets of different IPs) still results in up to 10% of 'real' views being filtered.
Takeaway: regardless of how well we tailor our streaming, we should expect some views to be filtered. I have a hunch that the false-positive rate climbs further with total view volume (i.e. during a big MV premiere).
3. Impact on monetization/adviews
It appears that, at least back in 2016, more views were reported to the ad system than to the public view counter. Possibly different systems? I'd love to re-do this study under the new system.
4. Single-user case
When a single user generates views from the same IP, it appears that the algorithm /used to/ trigger past 8 views per day, filtering aggressively afterwards. The system also remembers your viewing history, since the aggressive filtering persists for 48 hours.
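To make those thresholds concrete, here's a toy simulation of the per-IP behaviour described above. The numbers (8 counted views/day, 48-hour penalty window) come from the paper's 2016 measurements; the logic is my own minimal sketch, NOT YouTube's actual algorithm:

```python
from datetime import datetime, timedelta

# Toy model of the per-IP behaviour the paper measured in 2016:
# ~8 counted views per day from one IP, then aggressive filtering
# that persists for ~48 hours. Purely illustrative.
DAILY_THRESHOLD = 8
PENALTY_HOURS = 48

def count_views(view_times):
    """Return how many of the given view timestamps would be counted."""
    counted = 0
    penalty_until = None
    per_day = {}
    for t in sorted(view_times):
        if penalty_until is not None and t < penalty_until:
            continue  # inside the aggressive-filtering window: dropped
        day = t.date()
        per_day[day] = per_day.get(day, 0) + 1
        if per_day[day] > DAILY_THRESHOLD:
            # threshold exceeded: drop this view and start the penalty
            penalty_until = t + timedelta(hours=PENALTY_HOURS)
            continue
        counted += 1
    return counted
```

Under this toy model, 12 views spaced 10 minutes apart yield only 8 counted views, and views the following day fall inside the 48-hour window and are dropped too.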
It also appears that thresholds increase slightly if you spread views across multiple videos, but filtering still kicks in past ~20 views. This is under a uniform view distribution - which is not true of ARMY streaming.
My personal conclusion: I think that ARMY's strategy of &lt;title MV&gt; - &lt;old MV&gt; - &lt;old MV&gt; - &lt;title MV&gt; could rack up enough views to start getting filtered. Guessing from Fig 6 (not guaranteed to hold), after 7 or so loops, 50% of views would be filtered.
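Rough arithmetic behind that guess: if the filtered fraction ramps roughly linearly up to 50% by loop 7 and stays there (my reading of Fig 6, NOT a number the paper states), the counted title-MV views shrink like this:

```python
# Back-of-the-envelope estimate for the loop strategy. Assumption (my
# guess from Fig 6, not stated in the paper): the filtered fraction
# grows roughly linearly, reaching 50% around loop 7 and plateauing.
# Each loop contains 2 title-MV views.
def counted_title_views(loops):
    total = 0.0
    for n in range(1, loops + 1):
        filtered = min(0.5 * n / 7, 0.5)  # guessed filtering ramp
        total += 2 * (1 - filtered)       # title-MV views that survive
    return total
```

Under these assumptions, 7 loops (14 title-MV plays) yield only about 10 counted views - diminishing returns well before the plateau.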
5. Effect of size of ARMY:
While the paper seems to suggest that having multiple IPs running the same protocol in parallel does not impact filtering thresholds, the botnets tested are very small. A more modern algorithm would probably cluster very similar view requests and analyse them together - so if many ARMY have the same sequence of watched videos, they might get filtered. This is purely my personal conjecture, going against what the paper says.
6. Public WiFi
It appears that the algorithms used to be more lenient towards public WiFi networks (i.e. campus/corporate) during times of high traffic, but not during low traffic. Intuition: many computers sit behind the same public IP (**), so the system can't tell whether one or many users are streaming.
(**): public IP - this is the signature of your router, in the most basic terms. Different devices on the same network report the same public IP. In some cases this might even be the same for all routers of a specific ISP.
Your phone's public IP changes when switching from WiFi to mobile data.
7. Parameters you can affect
Behaviour which randomises these parameters, even from a single IP, gets filtered much less. This means that changing your browser, watching from your phone, or watching embedded videos indeed increases the chance of your view counting.
Another thing I'm cautious about in this paper is that the default settings are Linux/Firefox, which is a relatively rare signature compared to Windows/Chrome, macOS/Safari, or mobile views.
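To illustrate why randomisation helps: every view carries an observable "signature", and many identical signatures arriving from one IP are trivial to cluster and filter together. A minimal sketch - the parameter names and values below are illustrative, not the fields YouTube actually logs:

```python
import random

# Each view's observable "signature" - illustrative parameters only.
BROWSERS = ["Chrome/Windows", "Safari/macOS", "Firefox/Linux", "mobile app"]
SOURCES = ["direct", "search", "embed", "playlist"]

def random_signature(rng):
    """One view's (browser, traffic source) signature, chosen at random."""
    return (rng.choice(BROWSERS), rng.choice(SOURCES))

rng = random.Random(0)  # fixed seed so the sketch is reproducible
signatures = [random_signature(rng) for _ in range(20)]
distinct = len(set(signatures))
# A scripted client repeating one signature would give distinct == 1;
# randomising spreads the 20 views over many signatures instead.
print(f"{distinct} distinct signatures out of {len(signatures)} views")
```

The design point: a filter can't easily group 20 views that all look like different browsers arriving via different routes, even from the same IP.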
8. Google accounts?
This paper does not consider being logged in or not (did this functionality exist in 2016? too lazy to check), but it's most reasonable to assume that currently, the pair (public IP, Google account) is the narrowest way to track a specific user.
Therefore, make sure that you're conforming to all view caps per Google account, not just per IP. Alternating between too many accounts might also trigger filters.
9. Private browsing/cookies
The paper indicates that, at least in 2016, having cookies turned off did not influence whether a view (under otherwise identical circumstances) was counted. Private browsing/aggressive privacy add-ons might still influence things (by preventing tracking).
Overall: the paper is interesting, but not at all conclusive for how we stream (and, again, outdated). It helps me form my own personal conclusions about streaming, which I'll post below.
TLDR: streaming harder might result in fewer counted views. Viewing like a "local" - i.e. MV, go do something else for a couple of hours, watch a few random videos, MV - looped a maximum of 4-5 times, might be best. Unique IPs/accounts/devices are still your best shot (use your irls!).
The paper has made me doubt our usual streaming strategy, but in the absence of concrete data, I would personally defer to the empirical experience of trusted fanbases. Running a true controlled study would probably be infeasible and/or unethical, sadly.
Many thanks to @SusanP67361229 who initially found the paper.
I have been advised to explain with ice cream:
You love the ice cream at XYZ. You go and buy one scoop. Then you come back for another, and another... After 8 scoops the store bans you and takes back the 🍦🍦. You can try going to many stores, but after a while, the whole chain bans you.