Did #YouTube take away the autoplay off-switch, or is it just me?


@woozle Last time I thought that it turned out they had placed some temporary info overlay right over the autoplay toggle.

@galaxis Found it -- you have to go to a video that's *not* in a playlist.

That said, they seem to be playing a lot more fast & loose these days about playing stuff without asking.

Ahh well, that leaves all the more room for alternatives to take root.

@woozle #YouTube's been fucking with their player(s) significantly lately, and in some exceedingly annoying ways.

I've taken to posting #invidious links pretty nearly exclusively on account of this, simply because the player(s) are better.

Since multimedia-in-browser is #AnnoyingAsFuck anyway, I prefer dedicated media tools. #mpv is absolutely amazeballs, also mps-youtube and yt-download (all closely related).

mpv is both a local *and* remote multi-site player -- throw URLs at it and #ItJustWorks
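A couple of command sketches of that "throw URLs at it" workflow (assuming mpv and youtube-dl are installed; the video ID is a placeholder):

```shell
# Stream a remote video directly in mpv; it delegates URL
# extraction to youtube-dl behind the scenes.
mpv 'https://www.youtube.com/watch?v=VIDEO_ID'

# Audio only -- handy for music or talks.
mpv --no-video 'https://www.youtube.com/watch?v=VIDEO_ID'

# Or play a local file with the same player.
mpv ~/Videos/some-file.mkv
```

The same invocation works for many sites beyond YouTube, since youtube-dl's extractors do the heavy lifting.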

@dredmorbius re multimedia-in-a-browser: I do want a way for viewers to leave comments on (and otherwise interact with) videos I post, mind you...

I hadn't heard of Invidious; it looks both interesting and difficult to install.

The last time I tried to use youtube-dl, it was failing -- even after an update -- but I guess that just happens every now and then, and it takes some amount of time to update the code to get around YT's latest attempts to prevent downloading. (This is literally just a guess.)

@woozle Invidious is, at its simplest, just a website:

There is _also_ related software, a GitHub project:

... which I've not looked into.

Among other things, Invidious supports sourcing comments from Reddit (which I've not looked into). That has ... positives and negatives.

There's also PeerTube and a number of other video alternatives. I fully embrace encouraging these.

@dredmorbius Invidious looks a bit more interactive than FixYT, which is maybe good?

I've been interested in PeerTube since it came out, but it doesn't look easy to set up and I don't want to risk putting a lot of time into getting established on an existing instance without some kind of trust-relationship with the owner.

That kind of service (federated media) *does*, however, go into the bucket of "pay-what-you-can (to-fund-the-revolution)" services that I'd like to consider offering on a high-reliability basis, using infrastructure that @eryn is working on. (Nextcloud is going to be the trial balloon.)

@woozle I hear you WRT time and all that.

/me gazes with burning guilt over a very extensive, and dusty, to-do list.

Figuring out where you want to go, then how to get there, helps. Even just awareness is a start.


@dredmorbius @woozle a set of well-pruned functional requirement docs covers a multitude of bases, and the main limitation on my end of projects is time to put hands on code.

With regard to (wrt) your needs, is there an actual list?
Do you want to collectively prioritize it?

I suspect many of us have fairly similar and reasonable shared infrastructure needs, functionally speaking.

We all need project management systems, secure remote backups, and reliable comms with good accessibility and security.

These are all needs that can be met on some truly inexpensive hardware, if my experiments and those of @kemonine are any indication.

Using low-power tech to empower and connect the marginalized and underserved can, in theory, be done in a way that gives us all some solid, highly available tools on a decentralized and sustainable platform.


I have more than a few mini-vps arm boxes deployed for various duties...

There's a reason so much is available at the lollipop cloud docker registry (see for a full list / browser of what's in the registry).

I run most/all of the images on that registry across a variety of domains on hardware that's no more powerful than a raspberry pi 4 with 4gb ram and 128gb sd card.

That said: 2 exceptions are the build boxes that churn out those docker images. The 2 build boxes are decidedly not small boxes. One is a 64 core arm beast with 128gb ram and 1tb of ssd. The other is a xeon workstation with 24gb ram and 2tb of spinning rust.

The hard part is working together through various competing tech. I like the work the yunohost and 'from scratch' folks are doing, but having the containers takes a LOT of pain out of the self-hosting and maintenance problem. So much so that I'm pretty much only deploying containers on simple OS bases these days.
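A sketch of what "containers on a simple OS base" looks like in practice -- the image name, port, and volume path here are illustrative, not the actual lollipop cloud registry layout:

```shell
# Run Nextcloud as a container on an otherwise bare-bones host.
# Data lives on the host at /srv/nextcloud, so the container
# itself stays disposable.
docker run -d \
  --name nextcloud \
  --restart unless-stopped \
  -p 8080:80 \
  -v /srv/nextcloud:/var/www/html \
  nextcloud:stable
```

The host OS only needs a kernel and a container runtime; everything app-specific lives inside the image.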

@dredmorbius @woozle

@eryn @dredmorbius @kemonine

I haven't put together a list, but off the top of my head:
- file synch/sharing (NextCloud)
- social audio/video hosting
- email
- messaging (e.g. XMPP)
- social text of various kinds
- project management (features equivalent to Redmine + MediaWiki)

Also, being kinda dissatisfied with how most of these are implemented, I kinda want to rewrite a lot of it...

@eryn @dredmorbius @kemonine

<could ramble further about *non*-SaaS services I'd like to provide/enable as well>
(Overambitious? Me??)


Implementation / programming language choice / deployment paradigms / etc helped shove me fully into the container camp.

You don't need virtualization for containers and you can ignore all the assumptions upstream projects make about tech stacks and more.

Never mind that you don't get odd conflicts in core programming-language dependencies, missing package-manager packages, or various other forms of pollution, like distros having incredibly different beliefs about how to overhaul default configs and the like.

Containers also don't need hardware virtualization support so something as humble as a raspberry pi 3 can easily be turned into a powerhouse.

@eryn @dredmorbius

@kemonine @dredmorbius @eryn

Aren't there penalties to containerization, though? I don't actually know, but my impression is that it:

- adds some amount of extra resource load (RAM? CPU? don't know)

- reduces customizability

- hinders maintenance (in that you don't have direct access to the app, and have to go through the container's interface)

In general, isn't it also kinda putting more control in upstream hands?

@woozle @eryn @dredmorbius @kemonine

if you're familiar with FreeBSD jails or Solaris Zones that's basically what linux is badly mimicking.

containers are implemented using cheap kernel features. they aren't security boundaries in linux, but they usually make admin easier because you don't have to worry about services stepping on each other. sane container tools just give you a shell inside the container, which pretty much acts like a lightweight vm. there is some overhead, but it's negligible, and you usually end up with better hardware utilization overall.
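For instance, the "shell inside the container" and the overhead are both easy to see with docker (container name is a placeholder):

```shell
# Drop into a shell inside a running container -- feels like
# ssh-ing into a lightweight vm.
docker exec -it mycontainer /bin/sh

# One-shot snapshot of per-container CPU and memory usage,
# to judge the overhead for yourself.
docker stats --no-stream
```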
@woozle @dredmorbius @eryn @kemonine

something like docker has the additional overhead of a daemon and some other stuff per-container, but it's only an issue if you are running a lot of containers on the same host. in my experience it isn't the dominant consumer of memory.
@woozle @eryn @dredmorbius @kemonine

additionally, you should NOT use containers that you did not build yourself on a trusted machine. containers are great tools for administrating services, they are NOT a package format and SHOULD NOT be treated as such.
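Building your own image on a trusted machine is only a few lines; this is a minimal sketch (the package and image names are illustrative):

```shell
# Write a Dockerfile you control, then build it locally instead
# of pulling an unvetted image from a public registry.
cat > Dockerfile <<'EOF'
FROM alpine:3.19
RUN apk add --no-cache darkhttpd
EXPOSE 8080
CMD ["darkhttpd", "/srv", "--port", "8080"]
EOF

docker build -t local/darkhttpd .
```

Because the Dockerfile pins the base image and lists every install step, you can audit exactly what ends up in the container.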

I'll second the low overhead of 'additional resources'. I've managed to get a dozen smashed together on a raspberry pi 3 with 2gb ram comfortably enough.

The bigger issue I've found is that if you smash up against something like nextcloud, matrix, and a bunch of other cpu-hungry services under heavy use... things fall over quickly. It's about load management, and with more recent iterations of the underlying tools you can set cpu/ram limits as long as the core kernel supports such things (see cgroups, firejail, and network namespaces for some fun if you're ever interested in the bare-metal rate limits that things like docker also leverage).
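Those cgroups-backed limits are exposed directly as docker flags; a hedged sketch, with illustrative service name and limits:

```shell
# Cap a hungry service so it can't starve its neighbours:
# at most 1.5 CPU cores and 512 MB RAM, with no extra swap
# (memory-swap equal to memory disables swapping past the cap).
docker run -d \
  --name synapse \
  --cpus 1.5 \
  --memory 512m \
  --memory-swap 512m \
  matrixdotorg/synapse:latest
```

Under the hood these flags just write the corresponding cgroup limits, so the same caps can be applied to non-docker processes too.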

The real take-away on the 'overhead' is 'don't be stupid', in a sense. You'll lose a little ram to the containers themselves, but the cpu/ram of what's running inside the containers will be much more constraining overall.

re docker's daemon: it consumes hilariously little in the way of resources in my experience. I poked at k3s (a stripped-down Kubernetes aimed at arm boards and more) and... right now nobody has a toolchain like the one docker provides that's both useful *and* low overhead. I've poked at lxc and lxd with some success, but the toolchain is much harder to work with if you're not comfortable at a command line.

As for the packaging point... I see containers as a faux packaging setup. Specifically in the context of something like Kubernetes (don't do it on arm, it's not there yet!), Docker, or something like lxd. You get a nice jails/chroot-style deployment, but it can be tuned accordingly, with less storage overhead in some cases (NOT all).

Being an 'old' in the tech world, I've done a lot with virtualization (pre-hardware virt), jails-style stuff (former freebsd user, user of chroots to fix borked machines), and containers, so I can say this much: containers on linux tend to be the least-worst for overhead/utility/separation of duties. They may not be as powerful as *bsd jails, but they get the job done nicely, and you can adapt things like portainer (web gui for docker) to enhance the non-technical user experience in a positive way. It's not perfect, but it fits neatly into the middle ground between rolling something from scratch and something like freedombox/yunohost.

I'll also agree you need to be wary of various containers on things like Docker Hub. There are a lot of oddball containers for software that's not natively built as a container on some arches (/me waves to arm support that's there but not published in official channels). *This* was my biggest pain point with the raspberry pis and other arm boards that have immense utility but are a PITA to get built out with useful self-hosted software. I can separate all the dependency BS via the containers, but finding ready-made, official containers has been a bit of a nightmare.

Attached is a screenshot of the build box I run for an OSS project. I've managed to build a TON of stuff as arm native but it required submitting patches to a bunch of upstream projects, sorting out some really strange shit node.js brings to the table on arm and more.

Nevermind vetting official sources or trusted sources of packaged containers that I could re-use without having to build things from scratch on my own.

Welcome to the shit show

@woozle @dredmorbius @eryn

@woozle On youtube-dl: yes, it does fail on YouTube periodically. Both the site *and* the software are updated frequently, and IME youtube-dl has *always* worked immediately following an update, though that might not necessarily be the case.
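The update-then-retry loop looks like this (the second command depends on how youtube-dl was installed; the video ID is a placeholder):

```shell
# Self-update youtube-dl after a YouTube change breaks extraction
# (works for the standalone-binary install method)...
youtube-dl -U

# ...or, if it was installed via pip:
pip install --upgrade youtube-dl

# Then retry the download.
youtube-dl 'https://www.youtube.com/watch?v=VIDEO_ID'
```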

mpv allows direct streaming of video _or_ audio, at your option, without requiring the download step. Though of course, downloading content affords further possibilities (building a media library, offline viewing/listening).

Both work on many sites.
