If I’m getting 4K content, I want it to be a 4K scan of the original 35mm (or better) film. I’m not paying for an AI upscale that I can probably do myself for similar quality. If there is no 35mm source (such as when it originated on 16mm or electronic television cameras), just give it to me at the best original resolution; 2K/HD or even SD are perfectly fine if that’s the original version.
I’m fine with the HD/4K conversions some older shows like Seinfeld or Friends got, because those were originally shot on film (thank you, Lucille Ball), so there’s an original source that’s relatively easy to go back to and just apply the edits. Although sometimes I would prefer if they kept the original 4:3 aspect ratio instead of changing to 16:9; sometimes the framing is a little off or they lose a subtle joke.
The one area where I’d be okay with a little AI upscaling mixed in (if it’s done well) is effects shots that were only ever intended for SD viewing. I first saw this with Family Matters of all shows, showing it to my kid. They’ve scanned the film up to HD, but some of Steve’s “experiments” look pretty jarring mixed in.
This is exactly how I feel. I want the highest possible original quality, without upscaling. There are a ton of 1080p Blu-rays out there that probably won’t get real 4K scans because of lack of demand or being niche content, and that’s totally fine. Just leave it at 1080p, which is still pretty solid for movies made more than 5 or 10 years ago. But if the movie was shot on film, there’s really no excuse for not rescanning it at 4K imo. I’m sure studios/distributors would continue to cry poverty, but screw that. I’m even okay with them leaving any original special effects untouched, especially if redoing them is what hikes up the cost. It can be distracting, but I care more about live action quality.
I go back and forth on animation; it’s the one time I feel kind of okay with upscaling. But I recently compared an upscaled 4K Blu-ray anime with a subsequently released native 4K scan Blu-ray, and the increase in quality was quite obvious.
I’m sure you know this, but as an addendum, the majority of films shot digitally have been filmed with 2K/1080p cameras. Obviously there are exceptions and also modern, effects-heavy films might have their compositing done in 4K, so I’m not saying there’s no benefit to going over 2K. However, in many cases you’re already getting the intended picture at 2K and upscaling is not bringing you any closer to the filmmakers’ ideal presentation.
I suspect if something was shot on film for distribution in theaters any special effects will look just fine in the scan. If it was shot on film for SD TV the effects probably won’t hold up, but hopefully it’s nothing critical enough to be a big deal!
Why would you need to upscale claymation lol
Hey, we invested so much money in these new AI-powered computers, and we gotta use them for something.
It’s 100% this, imho.
A lot of dumb-asses have latched onto the idea that AI can allow them to do skilled work without learning how to do it.
And they don’t want to hear people pointing out how it’s imperfect.
haha, what a bunch of reactionaries. I love how they point to a completely unreadable blur on the original, and go “Seee?! You can’t read that in the upscaled version!!” – as if you could in the original.
The upscales look great with the exception of some fine detail of the shadows on the back. Especially in cases where they don’t have the original content, I think it’s a perfectly fine way to go about things.
When you’ve gotta zoom in to 100x magnification just to find something to complain about, I’d say that’s pretty damn good. Though I do wonder if “progressive” upscaling would be able to retain more fine detail (upscale from 360p to 480p, from 480p to 720p, 720p to 1080p, in discrete steps) – or if it would just fuck the image up further.
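If anyone wants to poke at the progressive idea themselves, here’s a minimal sketch of the comparison, assuming a hypothetical 640x360 source frame saved as frame_360p.png and using plain OpenCV Lanczos interpolation as a stand-in for whatever AI upscaler you’d actually run:

```python
# Minimal sketch: one-shot vs. stepwise ("progressive") upscaling.
# NOTE: file names are hypothetical, and plain Lanczos interpolation is
# only a stand-in for an actual AI upscaler.
import cv2

# Intermediate (width, height) targets: 480p -> 720p -> 1080p
STEPS = [(854, 480), (1280, 720), (1920, 1080)]

def upscale_one_shot(frame):
    # Jump straight from the source resolution to 1080p in a single resize.
    return cv2.resize(frame, (1920, 1080), interpolation=cv2.INTER_LANCZOS4)

def upscale_progressive(frame):
    # Walk through the intermediate resolutions one step at a time.
    for size in STEPS:
        frame = cv2.resize(frame, size, interpolation=cv2.INTER_LANCZOS4)
    return frame

if __name__ == "__main__":
    src = cv2.imread("frame_360p.png")  # hypothetical 640x360 source frame
    cv2.imwrite("one_shot_1080p.png", upscale_one_shot(src))
    cv2.imwrite("progressive_1080p.png", upscale_progressive(src))
```

My hunch is the stepwise version mostly just accumulates resampling error rather than preserving detail, but it’s an easy side-by-side to eyeball.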
The point is it’s fake 4k, and in a real remaster of the film stock, the example of the text would be more readable. I understand that the source material isn’t available for all of the films.
I think it’s totally fair to either ask for AI upscaling to stop happening on purchasable media, or to ask for it to be clearly labeled as such.
Why? People keep buying it. The method of upscaling doesn’t matter to most people - only enthusiasts. It’s also allowing a lot more work to be revived from the dead - overall I think the benefits far outweigh the problems.
For those who care. Just like how audio formats are labeled, and multiple options are provided. Even though most people don’t care.
Just label it “AI enhanced” or “AI upscaled to 4k”. It’s not too difficult, and it could be a tiny little box next to the other technical details people ignore.
There would be far less of the “reactionary” reaction if people were not “discovering” the situation after release, and potentially after preordering.
Edit: also, to be clear, I don’t think AI upscaling is bad in all cases. I watch Deep Space Nine AI upscaled. And that’s another example of source material not being available.
Who is buying media nowadays besides enthusiasts?
Plenty of collectors are not enthusiasts.
could you define reactionary for me?
re·ac·tion·ary rē-ˈak-shə-ˌner-ē
: of, relating to, or favoring old-fashioned political or social ideas
AI BAD is a reactionary take. It’s a conservative social idea.
Is there a criticism of AI you wouldn’t categorize as reactionary?
Looking at the output quality and critically assessing it to be bad is not reactionary.
Saying the output is bad for no reason other than that it’s generated by AI is reactionary.
Criticism of AI taking jobs is a fair take. Even if AI only makes the workflow more efficient, that’s fewer people that need to be hired in the industry. It’s fair to recognize that AI is valuable and doing amazing things while also criticizing the downsides of using it.
AI is in its infancy. Different techniques are only going to make it better and better at what it does - it will end up taking jobs. But just broadly claiming it’s bad because you can’t read words that already weren’t readable in the source material is silly.
The reactionary take is “AI is useless, nobody should be using it” or “They should label everything that uses AI so I can avoid it!”…those takes are completely reactionary. The takes complaining about the ‘quality’ of the AI upscale, even though it looks FANTASTIC, are also reactionary. An upscale wouldn’t be available at all if AI hadn’t been developed to do it, so it’s clearly worth using. It’s produced a result that I’m sure MANY are happy with. There’s only a small handful of extraordinarily LOUD individuals making a fuss over it.
I think the biggest criticism of AI is the one that almost NOBODY actually ever complains about: Jobs. Wages are already lower than they have been in 20 years, and NOW it takes EVEN fewer people to do the same job? Wealth inequality is only going to be exacerbated by AI.
Edit: Also wasn’t me who downvoted your question asking me to define reactionary – so I upvoted to try and counter it, it’s a fair question to ask I think.
I think labeling things made by AI is a reasonable request. In this specific example, someone who’s buying 4K Wallace & Gromit is doing so out of a love of claymation and Aardman’s work in it. They want it in high definition specifically to see the details that went into a handcrafted set and characters. Getting a smoothed-over statistical average, when you paid for it expecting the highest-quality archive of an artistic work, would be more frustrating than just seeing it in lower definition.
More generally, don’t people working with these models also want AI output to be properly labeled? As I understand it, the models start to degrade when their own output is fed back into the training data. With the rapid proliferation of AI posting, I’ve heard you can’t even train large language models to the same level of quality as you could before this stuff was released to the general public.
I’m also kinda skeptical that this stuff has as many applications as are being touted. Like, I’ve seen some interesting stuff for folding proteins or doing statistical analysis of particle physics, but outside highly technical applications… kinda seems like a solution in search of a problem. It’s something investors really really like for their stock valuations, but I just don’t see it doing much for actual workers. At most, maybe it eliminates some middle-management email jobs.
I’m also kinda skeptical that this stuff has as many applications as are being touted.
These models can do a LOT of different things. If you don’t see that, that’s an education problem, not an AI problem.
And combining these capabilities in new and unique ways is only going to make things even more wild. It won’t be very long at all before my “Ummmmm, I’ll have aaaaaaaaa” order at McDonald’s doesn’t need to be taken by a human being and there’s just a single dude in the back running the whole place. That’s disruptive on an economic level never before seen. THAT is why companies the world over are so heavily invested in AI. It’s finally reached a threshold where it can replace real labor - and labor accounts for one of the largest portions of expenditure for companies. The cost of the electricity to run this stuff is FAR lower than what it takes to pay a person for the same output.
McDonald’s canned their automated ordering experiment, and that was across 100 stores and lasted several years.
I am not convinced this replaces labor. Like any advancement in hardware or software, it can expand the efficiency of labor. But you still need people to do work. People who own things for a living would really really like that not to be the case - their interest in this is not rational decision-making, but deluded optimism.