I’m a robotics researcher. My interests include cybersecurity, repeatable & reproducible research, as well as open source robotics and Rust programming.
I hope git worktree compatibility with submodules gets ironed out soon. I’d really like to have multiple branches of a superproject checked out at once, to make it simpler to compare source trees and file structures.
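For plain repositories without submodules, `git worktree` already covers the multiple-checkouts use case; a minimal sketch, with hypothetical branch and directory names:

```shell
# Create a throwaway repo with two branches (names are hypothetical).
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q main-tree
cd main-tree
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git branch feature

# Check out the `feature` branch in a second working tree alongside the
# first, so both source trees can be diffed or browsed side by side.
git worktree add ../feature-tree feature

git worktree list
```

The rough edges show up once submodules enter the picture; git’s own documentation still notes that worktree support for submodules is incomplete.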
I’m using a recent 42" LG OLED TV as a large, affordable PC monitor in order to get 4K@120Hz with 10-bit HDR, which is great for gaming or content creation that can appreciate the screen real estate. Anything in the proper PC monitor market of similar or even slightly smaller size costs far more for the same screen area and feature set.
Unfortunately such TVs rarely include anything other than HDMI for digital video input, despite the growing trend of connecting gaming PCs in the living room, e.g. over fiber optic HDMI cables. I actually went with a GPU with more than one HDMI output so I could drive both TVs in the house simultaneously.
Also, having an API as well as a remote to control my monitor is kind of nice. Enough folks are using LG TVs as monitors in this midsize range that there are even open source projects to entirely mimic conventional display behaviors:
I also kind of like using the TV as a simple KVM with fewer cables. For example with audio, I can independently control volume and mux output to either speakers or multiple Bluetooth devices from the TV, without having to fiddle around with re-pairing Bluetooth peripherals to each PC or gaming console. That’s particularly nice when swapping from playing games on the PC to watching movies on a Chromecast with a friend over two pairs of headphones, while still keeping the house quiet for the family. That kind of KVM functionality and connectivity is still a premium feature for modestly priced PC monitors. Of course others find their own use cases for hacking the TV remote APIs:
A while back, I tried looking into what it would take to modify Android to disable Bluetooth microphones for wireless headsets, allowing call audio to be streamed via regular AAC or aptX and the call microphone to be captured from the phone’s internal mic. This would prevent the bit rate for call audio and microphone from being effectively halved by the ancient HFP/HSP Bluetooth codecs, instead allowing the same call quality as a wired headset. This would help when multitasking with different audio sources, such as listening to music while hanging out on Discord, without the music being distorted by the lower bit rate of HFP/HSP. This would also benefit regular VoLTE, as ordinary call audio quality already exceeds that of legacy Bluetooth headset profiles.
That said, I didn’t manage to tease apart the mechanics of the audio policy configuration files used by the Android Open Source Project, given the sparse documentation and vague commit history.
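For context, the routing declarations in those files look roughly like this; a loose sketch of an `audio_policy_configuration.xml` fragment, with element and attribute names recalled from the AOSP docs and likely differing between Android releases:

```xml
<!-- Sketch only: names approximate, not copied from any real device. -->
<audioPolicyConfiguration>
  <modules>
    <module name="a2dp">
      <mixPorts>
        <mixPort name="a2dp output" role="source"/>
      </mixPorts>
      <devicePorts>
        <devicePort tagName="BT A2DP Out"
                    type="AUDIO_DEVICE_OUT_BLUETOOTH_A2DP"
                    role="sink"/>
      </devicePorts>
      <routes>
        <!-- Which mix ports may feed which device ports; call audio
             routing to SCO/HFP devices is declared the same way. -->
        <route type="mix" sink="BT A2DP Out" sources="a2dp output"/>
      </routes>
    </module>
  </modules>
</audioPolicyConfiguration>
```

In principle the interesting part is which routes the policy engine falls back to for call audio when an HFP sink is absent, but that behavior lives in the policy manager code rather than the XML.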
I’d certainly be fine with the awkwardness of holding up and speaking to my phone as if it were in speaker mode, while listening to the call over wireless headphones, in order to improve or even double the audio quality. I’ve always wondered what these audio policies fall back to when a Bluetooth device doesn’t have a headset profile, but it’s almost impossible to find high quality consumer grade Bluetooth headphones without a microphone nowadays.
For the call setting under Bluetooth audio devices, I really wish they would break out separate settings for using the audio device as a source versus a sink for call audio. Sort of like how you can disable the HFP/HSP Bluetooth profiles for audio devices in Linux or Windows.
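On a PulseAudio or PipeWire desktop, that per-profile toggle looks roughly like the following; the card address is a placeholder, and profile names vary between stacks and devices:

```shell
# List sound cards; a Bluetooth headset shows up as a bluez card.
pactl list cards short

# Force high-quality A2DP playback only, disabling the HFP/HSP profile
# (and with it the headset microphone). The profile identifier differs
# between PulseAudio ("a2dp_sink") and PipeWire ("a2dp-sink").
pactl set-card-profile bluez_card.00_11_22_33_44_55 a2dp_sink
```

That’s exactly the source/sink split I’d like Android to expose per device.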
Similarly reported (in more detail) by TechCrunch:
Is this like multi window support, or just floating panels within the VS Code window’s canvas?
For dual screen setups, sometimes I end up opening two instances of VS Code for the same workspace, which seems a bit overkill.
Speaking of accessibility improvements in VS Code, does anyone know of a good way to use text to speech? I use this Read Aloud TTS extension for web browsing, but would like to find an equivalent in VS Code that lets me use different TTS voice engines the same way, to listen to long markdown files or inline documentation.
E.g. an NBA or sports instance containing /c/NBA, /c/NFL, /c/NHL and all the related team communities.
You can change the color theme from the settings page under the top right drop-down. But it would be nice to have something like Reddit Enhancement Suite for the default Lemmy UI front end, for user-defined custom CSS.
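In the meantime, a userstyle extension like Stylus can already inject custom CSS over lemmy-ui; a tiny hypothetical example, where the selectors are guesses that will differ between lemmy-ui versions (inspect the live DOM for the real class names):

```css
/* Hypothetical tweaks for the default lemmy-ui front end. */
.comment-node {
  margin-bottom: 0.75rem; /* more breathing room between comments */
}
.post-title a {
  font-size: 1.1rem;      /* slightly larger post titles */
}
```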
I think once we get a few more third party clients to explore alternative UIs, folks should have more options for personal preference.
One thing I like about the current web UI already is the low noise of embedded text in the discussion threads. E.g. when I engage my screen reader, all I have to listen to when moving between comments is the post author and post date. Just enough context to follow the TTS engine moving between comments, unlike the old.reddit.com UI, which includes 5 or 6 different hyperlinked words (parent, context, permalink, etc.) that the TTS has to repeat over and over again.
The hover text for icon links should be enough UI context for screen readers, although not all icon links on the current Lemmy UI seem to include hover text metadata, like the permalink chain icon 🔗, while the collapse/minimize icon does.
Any particular reason those OEMs made that decision when releasing those boxes? Was that range blacklisted in firmware because of the legacy specification? I thought the spec just forbade the range’s public allocation, but not necessarily its internal use.