Father of two, husband, TV show lover.
Working on Enterprise Mobility | #iOS, #macOS and #Android | #MEM, #Intune
| Github | https://github.com/cschildhorn |
| Website | https://www.applefreakz.de |
DELL: Putting the IT in SHIT
I recently discovered Homerow, and let me tell you, it has almost instantly become one of my most-used Mac utilities. I've always been a heavy keyboard user and hate having to click around in GUIs to get things done. Homerow lets me do 90% of user interface interactions with the keyboard. Just a massive speed gain for me.
There’s an awesome new step in the journey to replace passwords: automatic passkey upgrades.
For a short window after a user signs in using Password AutoFill, apps and websites can “conditionally” request passkey creation for that same account. The Passwords app then creates a new passkey and notifies the user. No upsells or speed bumps.
All credential managers can support this! (There’s lots of new API for credential managers this year!)
More information (WWDC video): https://developer.apple.com/wwdc24/10125
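For the curious, here's roughly what that conditional registration request looks like in Swift. This is my own sketch based on the session above, so double-check the exact API against the docs; the relying party ID, user name, user ID, and challenge are placeholder assumptions.

```swift
import AuthenticationServices

// Rough sketch of a conditional ("automatic") passkey upgrade request.
// In a real app the challenge comes from your server and the user values
// belong to the account that just signed in via Password AutoFill.
func requestConditionalPasskeyUpgrade(userName: String, userID: Data, challenge: Data) {
    let provider = ASAuthorizationPlatformPublicKeyCredentialProvider(
        relyingPartyIdentifier: "example.com")

    let request = provider.createCredentialRegistrationRequest(
        challenge: challenge,
        name: userName,
        userID: userID)

    // .conditional asks the system to create a passkey only when it decides the
    // moment is right (e.g. right after a successful sign-in) — no extra UI, no upsell.
    request.requestStyle = .conditional

    // Set a delegate on the controller to receive the created credential or an error.
    let controller = ASAuthorizationController(authorizationRequests: [request])
    controller.performRequests()
}
```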
Interesting paper from Apple on training an LLM to understand iOS UI:
Recent advancements in multimodal large language models (MLLMs) have been noteworthy, yet, these general-domain MLLMs often fall short in their ability to comprehend and interact effectively with user interface (UI) screens. In this paper, we present Ferret-UI, a new MLLM tailored for enhanced understanding of mobile UI screens, equipped with referring, grounding, and reasoning capabilities. Given that UI screens typically exhibit a more elongated aspect ratio and contain smaller objects of interest (e.g., icons, texts) than natural images, we incorporate "any resolution" on top of Ferret to magnify details and leverage enhanced visual features. Specifically, each screen is divided into 2 sub-images based on the original aspect ratio (i.e., horizontal division for portrait screens and vertical division for landscape screens). Both sub-images are encoded separately before being sent to LLMs. We meticulously gather training samples from an extensive range of elementary UI tasks, such as icon recognition, find text, and widget listing. These samples are formatted for instruction-following with region annotations to facilitate precise referring and grounding. To augment the model's reasoning ability, we further compile a dataset for advanced tasks, including detailed description, perception/interaction conversations, and function inference. After training on the curated datasets, Ferret-UI exhibits outstanding comprehension of UI screens and the capability to execute open-ended instructions. For model evaluation, we establish a comprehensive benchmark encompassing all the aforementioned tasks. Ferret-UI excels not only beyond most open-source UI MLLMs, but also surpasses GPT-4V on all the elementary UI tasks.
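The "any resolution" trick is simpler than it sounds. Here's a tiny Swift sketch of my reading of it (portrait screenshots cut into top/bottom halves, landscape ones into left/right halves, each half encoded separately). This is only an illustration, not code from the paper.

```swift
import CoreGraphics

// Split a UI screenshot into two sub-images based on its aspect ratio,
// mirroring the division the Ferret-UI abstract describes.
func splitScreenshot(_ image: CGImage) -> [CGImage] {
    let w = image.width, h = image.height
    let halves: [CGRect]
    if h >= w {
        // Portrait: horizontal division into top and bottom halves.
        halves = [CGRect(x: 0, y: 0, width: w, height: h / 2),
                  CGRect(x: 0, y: h / 2, width: w, height: h - h / 2)]
    } else {
        // Landscape: vertical division into left and right halves.
        halves = [CGRect(x: 0, y: 0, width: w / 2, height: h),
                  CGRect(x: w / 2, y: 0, width: w - w / 2, height: h)]
    }
    return halves.compactMap { image.cropping(to: $0) }
}
```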
One of the best Cybersecurity memes I've ever watched. 🤣
Sharing a #MacAdmins utility script I wrote for grabbing app icons and converting them to PNG.
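If you just want the gist of the idea, here's a rough Swift sketch of the same approach (not the script itself): ask NSWorkspace for an app's icon and write it out as a PNG. The app path and output path are made-up examples.

```swift
import AppKit

// Grab an app's icon and save it as a PNG — illustrative paths only.
let appPath = "/Applications/Safari.app"
let outputURL = URL(fileURLWithPath: "/tmp/Safari-icon.png")

// NSWorkspace hands back the app's icon as an NSImage.
let icon = NSWorkspace.shared.icon(forFile: appPath)

// Convert the NSImage into PNG data via a bitmap representation.
if let tiff = icon.tiffRepresentation,
   let bitmap = NSBitmapImageRep(data: tiff),
   let png = bitmap.representation(using: .png, properties: [:]) {
    try? png.write(to: outputURL)
    print("Wrote \(outputURL.path)")
}
```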
First peek at my task “to-did” app, Done. What do you think? (Of the app, not my video skills 😂 🙈)