Zuzu: a JRuby-based framework for local-LLM, AI-native desktop applications

Using JRuby and llamafile, it supports building privacy-focused AI desktop apps that run in the user's local environment with no cloud dependency.

#jruby
https://ruby-news.kr/articles/github-parolkar-zuzu-jruby-framework-for-ai-native-desktop-apps-local-llm-single-jar-dmg-exe-deb-distribution-claude-code-ready-scaffolding-sqlite-backed-agentfs-as-a-sandboxed-virtual-filesystem-for-the-agent-ships-as-a-jre-bundled-for-native-installers-dmg-deb-exe-no-cloud-required-no-extra-java-installation-needed-github

Ruby-News
Next on the list is a #JRuby example...
[Beginner] The basics of a JavaFX (FXML) app with JRuby - Qiita

Motivation: these days few people may want to build GUI desktop apps in Java or Ruby, but when I forget the argument syntax of a CUI command, I sometimes think a GUI screen would be nice to have. (Some people may have built such things with Tcl/Tk.) Java has Swing, Ja...

Qiita

Automating the addition of version information to JRuby's @Deprecated annotations

Using the `since` attribute of @Deprecated, introduced in Java 9, version information recording when each API was first deprecated has been added explicitly to every deprecated API in the JRuby codebase.

#jruby
https://ruby-news.kr/articles/updating-deprecations-with-version

Ruby-News || 루비 AI 뉴스

Continuations 2026/05: Fit of Passion

The Hanami ecosystem is extending `repo-sync` and `release-machine` to core components such as `cli`, `router`, and `view`, centralising repository management and release workflows and thereby significantly reducing maintenance overhead.

#hanami #jruby #rubocop
https://ruby-news.kr/articles/continuations-202605-fit-of-passion

Ruby-News || 루비 AI 뉴스

🚀 Simplified JRuby Gradle plugin 2.3.2 has been released!

Documentation: https://jruby-gradle.ysb33r.org

Release notes: https://jruby-gradle.ysb33r.org/jruby-simple/2.3.2/changelog.html

#gradle #gradlePlugins #jruby

The Mocha test suite continues its proud tradition of acting as an extra set of regression tests for JRuby - this time finding an obscure bug in keyword argument handling! 🎉

And, as always, I'm very appreciative of the work of @headius and the JRuby team! ❤️

https://github.com/jruby/jruby/issues/8976

#jruby #ruby #kwargs #testing

Ruby-Elf and collision detection improvements

While the main use of Ruby-Elf for me lately has been quite different – for instance with the advent of elfgrep or helping verifying LFS support – the original reason that brought me to write that parser was finding symbol collisions (that’s almost four years ago… wow!).

And symbol collisions are indeed still a problem, and as I wrote recently they are not easy to bring to upstream developers' attention, as they are mostly an indication of possible aleatory problems in the future.

At any rate, the original script ran overnight, generated a huge database, and then required more time to produce a readable output, all while using an unbearable amount of RAM. Between the ability to run it on a much more powerful box and the work done to refine it, it can currently scan Yamato’s host system in … 12 minutes.

The latest set of changes, which replaced the previous “one or two hours” execution time with the current “about ten minutes” (for the harvesting part; the analysis requires two more minutes), was part of my big rewrite of the script so that it uses the same common class interfaces as the commands that are installed to be used with the gem as well. With this rewrite, while still single-threaded (more on that in a moment), analysing each file takes three calls to the PostgreSQL backend, rather than something in the ballpark of five plus one per symbol, and this makes it quite a bit faster.

To achieve this I first of all limited the round-trips between Ruby and PostgreSQL when deciding whether a file (or a symbol) has already been added or not. In the previous iteration I was already optimising this a bit by using prepared statements (which seemed slightly faster than direct queries), but they didn’t allow me to embed the logic into them, so I had a number of select and insert statements that depended on the results of earlier ones. That was bad not only because each selection required converting data types twice (from the PostgreSQL representation to C, then from that to Ruby), but also because it required a call into the database each time.

So I decided to bite the bullet and, even though I know it makes for a bunch of spaghetti code, I’ve moved part of the logic into PostgreSQL through stored procedures. Long live PL/pgSQL.

Also, to make it more robust against parsing errors on single object files, rather than queuing all the queries and then committing them in one big single transaction, I create a single transaction to commit all the symbols of each object, as well as one for creating the indexes. This allows me to skip over broken objects altogether, without stopping the whole harvesting process.
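The per-object transaction pattern can be sketched in Ruby; everything here (the FakeDB class, the BrokenObject error, the data) is an illustrative stand-in for the real pg-backed harvesting code:

```ruby
# Minimal sketch of per-object transactions: each object's symbols are
# committed in their own transaction, so one broken object is skipped
# whole without aborting the harvest. All names here are illustrative.

class FakeDB
  attr_reader :committed
  def initialize; @committed = []; end

  # Stand-in for a real transaction (e.g. PG::Connection#transaction):
  # staged rows are only kept if the block completes without raising.
  def transaction
    staged = []
    yield staged
    @committed.concat(staged)
  end
end

BrokenObject = Class.new(StandardError)

def harvest(objects, db)
  objects.each do |name, symbols|
    db.transaction do |txn|
      symbols.each do |sym|
        raise BrokenObject, name if sym.nil? # simulated parse failure
        txn << [name, sym]
      end
    end
  rescue BrokenObject => e
    warn "skipping broken object: #{e.message}"
  end
end

db = FakeDB.new
harvest({ "a.o" => %w[foo bar], "bad.o" => ["baz", nil], "c.o" => %w[qux] }, db)
p db.committed # symbols from a.o and c.o only; bad.o is skipped whole
```

The point of the pattern is that the rescue sits inside the per-object loop, so a failure rolls back only that object's staged symbols.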

Even after introducing the per-object transaction on symbol harvesting, I found it much faster to run a single statement through PostgreSQL in a transaction, with all the symbols; since I cannot simply run a single INSERT INTO with multiple values (because I might hit a unique constraint, when the symbols are part of a “multiple implementations” object), I at least call the same stored procedure multiple times within the same statement. This had a tremendous effect, even though the database is accessed through Unix sockets!
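That batching trick can be sketched as follows; the symbol_add procedure name and its signature are hypothetical, and real code would use the pg gem's escaping helpers rather than the naive quoting here:

```ruby
# Sketch of batching many stored-procedure calls into one statement,
# so all symbols of an object go over in a single round-trip.
# "symbol_add" is a hypothetical stored procedure; real code would use
# PG::Connection#escape_string instead of this naive quoting.

def batch_symbol_inserts(object_id, symbols)
  calls = symbols.map do |sym|
    escaped = sym.gsub("'", "''") # naive SQL string escaping
    "SELECT symbol_add(#{object_id}, '#{escaped}');"
  end
  # One statement string, many procedure calls: one round-trip.
  calls.join("\n")
end

puts batch_symbol_inserts(42, %w[main printf])
# SELECT symbol_add(42, 'main');
# SELECT symbol_add(42, 'printf');
```

A stored procedure is used per symbol, rather than one multi-row INSERT, precisely because the procedure can handle the unique-constraint case internally.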

Since the harvest process now takes so little time to complete, compared to what it did before, I also dropped the split between harvest and analysis: analyse.rb is gone, merged into the harvest.rb script, for which I’ll have to write a man page sooner or later so that it gets installed properly as an available tool rather than remaining an external one.

Now, as I said before, this script is still single-threaded; on the other hand, all the other tools are “properly multithreaded”, in the sense that their code fires up a new Ruby thread for each file to analyse, and the results are synchronised so they don’t step on each other’s feet. You might know already that, at least for what concerns Ruby 1.8, threading is not really implemented: green threads are used instead, which means there is no real advantage in using them; that’s definitely true. On the other hand, on Ruby 1.9, even though the pure-Ruby nature of Ruby-Elf makes the GIL a main obstacle, threading would improve the situation simply by allowing threads to analyse more files while the pg backend gem sends the data over to PostgreSQL (which is probably also helped by the “big” transactions sent right now). But what about the other tools that don’t use external extensions at all?

Well, threading elfgrep or cowstats is not really an advantage on the “usual” Ruby versions (MRI 1.8 and 1.9), but it provides a huge advantage when running them with JRuby: as that implementation has real threads, it can scan multiple files at once (both when listing input files asynchronously on the standard input stream and when providing all of them in one single sweep), and then only synchronise to output the results. This of course makes it a bit trickier to be sure that everything is executed properly, but in general it makes the tools all the sweeter. Too bad that I can’t use JRuby right now for harvest.rb, as the pg gem I’m using is not available for JRuby; I’d have to rewrite the code to use JDBC instead.
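A minimal sketch of that thread-per-file pattern, with a Mutex guarding the shared results; the per-file "analysis" is a stand-in, but the shape is what the tools do, and on JRuby these threads genuinely run in parallel:

```ruby
# One Ruby thread per input file, results synchronised through a Mutex
# so threads don't step on each other's feet. On JRuby these are native
# threads, so files are scanned in parallel; on MRI the GIL serialises
# the pure-Ruby work. The analysis step is a trivial stand-in.

def scan_all(files)
  results = []
  mutex   = Mutex.new
  threads = files.map do |file|
    Thread.new(file) do |f|
      outcome = "#{f}: #{f.length} bytes in name" # stand-in analysis
      mutex.synchronize { results << outcome }    # only sync on output
    end
  end
  threads.each(&:join)
  results.sort # completion order is nondeterministic, so sort for output
end

p scan_all(%w[libfoo.so libbar.so])
```

Only the append to the shared array is serialised; the analysis itself runs outside the lock, which is where JRuby's real threads pay off.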

Speaking of options passing, I’ve removed some features I originally implemented. In the original implementation, the arguments parsing was asynchronous and incremental, without limits to recursion; this meant that you could provide, as the standard input of the process, a list of files preceded by the at-symbol, and each of those would be scanned for… the same content. This could already be bad because of the possible loops, but it also had a few more problems, among which was the lack of a way to add a predefined list of targets if none was passed (which I needed for harvest.rb to behave more or less like before). I’ve since rewritten the targets’ parsing code to only work with a single-depth search, and to rely on asynchronous arguments passing only through the standard input, which is only used when no arguments are given, either on the command line or as the script’s defaults. It’s also much faster this way.
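The single-depth expansion can be sketched like this; expand_targets is a hypothetical helper, not the actual parsing code, but it shows the depth-one rule and the default-target fallback described above:

```ruby
# Sketch of single-depth target expansion: an argument "@list" is
# replaced by the lines of that file, but entries inside the file are
# NOT expanded again (depth one, so no loops), and a default list is
# used when no targets are given at all. Hypothetical helper.

require "tmpdir"

def expand_targets(args, defaults: [])
  return defaults if args.empty? # predefined targets when none passed
  args.flat_map do |arg|
    if arg.start_with?("@")
      # one level only: lines from the file are taken literally,
      # even if they themselves start with "@"
      File.readlines(arg[1..], chomp: true)
    else
      [arg]
    end
  end
end

Dir.mktmpdir do |dir|
  list = File.join(dir, "targets")
  File.write(list, "libfoo.so\n@not-expanded\n")
  p expand_targets(["a.o", "@#{list}"])        # depth-one expansion
  p expand_targets([], defaults: ["/usr/lib"]) # fallback defaults
end
```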

For today I guess all these notes about Ruby-Elf are enough; in the next days I hope to provide some more details about the information the script is giving me… they aren’t exactly funny, and they aren’t exactly the kind of things you wanted to know about your system. But I guess this is a story for another day.

#Collisions #JRuby #Multithreading #PostgreSQL #Ruby #RubyELF
GitHub - Flameeyes/ruby-elf: Ruby-Elf is a pure-Ruby library to parse and fetch information about the ELF format used by Linux, FreeBSD, Solaris and other Unix-like operating systems, and includes a set of analysis tools helpful for both optimisation and verification of compiled ELF files.

GitHub

Weekly development update: JRuby and Hanami improvements, plus the introduction of the innovative repo-sync system

JRuby compatibility improvements, Hanami framework feature work, and maintenance updates have been progressing actively.

#hanami #jruby
https://ruby-news.kr/articles/continuations-202549-fit-of-procrastivity

Ruby-News || 루비 AI 뉴스

Which JVM language fits your stack? #Kotlin for Android, #Scala for Spark, #Groovy for testing, #Clojure for concurrency, #JRuby & #Jython for scripts. Each solves different dev pains—fully JVM-compatible.

Mihaela Gheorghe-Roman shows the big picture: https://javapro.io/2025/10/09/the-rise-of-jvm-languages-kotlin-scala-groovy-and-more/