Alonso Silva 

134 Followers
197 Following
177 Posts
Researcher on Verifiable AI at Nokia Bell Labs | Safran, Berkeley, Inria alumnus | Franco-Chilean in Paris | Interested in machine learning and data science
GitHub: https://github.com/alonsosilvaallende
If you are (or know of) a Master's or PhD student looking for an internship, I am proposing the subject: 'Efficient Structured Generation with Grammar-Aware Sampling Techniques.' https://www.dropbox.com/scl/fi/7iwfgc4waiszm15urhp1q/Internship_Proposal_2026.pdf?rlkey=3dnmf7nsr5kh9bfx5wwuncwwx&st=taxikz6u&dl=0
If you're passionate about structured generation, feel free to reach out!

litelines got added to the Awesome LLM constrained decoding repo 😊
It’s great to share this space with more established libraries like Outlines, XGrammar, or Guidance.

Link to the Awesome LLM constrained decoding repo:
https://github.com/Saibo-creator/Awesome-LLM-Constrained-Decoding

Link to litelines:
https://alonsosilvaallende.github.io/litelines/

You can display SQLite database diagrams in @marimo_io using `fastlite` and `graphviz`
Here is a basic notebook to play online:
https://molab.marimo.io/notebooks/nb_ktbJEaXyet6QEUYiXgVf34
Here is my merged PR 🙂
https://github.com/marimo-team/marimo/pull/7787
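For a sense of what the fastlite + graphviz combo does under the hood, here is a stdlib-only sketch (not fastlite's actual API) that reads an SQLite schema and emits Graphviz DOT source: one record node per table and one edge per foreign-key reference.

```python
import sqlite3

def sqlite_to_dot(con):
    """Emit Graphviz DOT source for an SQLite database's tables.

    Stdlib-only sketch: list each table's columns as a record node
    and draw an edge for every foreign-key reference.
    """
    tables = [r[0] for r in con.execute(
        "SELECT name FROM sqlite_master "
        "WHERE type='table' AND name NOT LIKE 'sqlite_%'")]
    lines = ["digraph schema {", "  node [shape=record];"]
    for t in tables:
        cols = [r[1] for r in con.execute(f"PRAGMA table_info({t})")]
        label = t + "|" + r"\l".join(cols) + r"\l"
        lines.append(f'  "{t}" [label="{{{label}}}"];')
        for fk in con.execute(f"PRAGMA foreign_key_list({t})"):
            # fk[2] is the referenced table
            lines.append(f'  "{t}" -> "{fk[2]}";')
    lines.append("}")
    return "\n".join(lines)
```

The resulting DOT string can be rendered with `graphviz.Source(...)` in a marimo cell; fastlite's own diagram support is more complete than this sketch.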

.@marimo_io now supports graphviz

Here is a basic notebook to play online: https://molab.marimo.io/notebooks/nb_3xrDJBQEtxscgkKqAo4TCK

Here is the merged PR:
https://github.com/marimo-team/marimo/pull/7787

Batch processing using transformers and litelines libraries. In this video, I process 900 prompts in 30 seconds with an RTX A4000 with 16GB of VRAM.
https://www.youtube.com/watch?v=7hVUPXxuetk
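900 prompts in 30 seconds is about 30 prompts per second; the speedup comes from grouping prompts so each forward pass serves a whole batch. A minimal sketch of that loop, with a hypothetical `generate_batch` callback standing in for the actual transformers/litelines call:

```python
def batched(items, batch_size):
    """Yield fixed-size chunks; the last one may be shorter."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

def run_batched(prompts, generate_batch, batch_size=32):
    """Process prompts batch-by-batch instead of one at a time.

    `generate_batch` is a placeholder for one padded forward pass
    over a whole batch (e.g. via `model.generate` in transformers);
    the name and batch size here are illustrative.
    """
    outputs = []
    for batch in batched(prompts, batch_size):
        outputs.extend(generate_batch(batch))
    return outputs
```

With a batch size of 32, the 900 prompts need only 29 model calls instead of 900.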
The new litelines release should work much better in marimo notebooks
You can try it in a marimo molab:
https://molab.marimo.io/notebooks/nb_232GhR7P6XJVxz8opD1MWS/app
Here is litelines documentation:
https://alonsosilvaallende.github.io/litelines/
Here is the release changelog:
https://github.com/alonsosilvaallende/litelines/releases/tag/v0.1.4
How is it possible that a 1.7 billion parameter model succeeds where a model with hundreds or thousands of billions of parameters fails?
https://www.youtube.com/watch?v=wmgwuTMauvU
Qwen3-1.7B beating GPT-4o at a lipogram task
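The likely answer is constrained decoding: a logits processor can simply forbid every token containing the banned letter, so a small model cannot violate the lipogram by construction, while a large unconstrained model can only try not to. A toy character-free sketch with a made-up vocabulary (all names and values here are illustrative, not litelines' API):

```python
def lipogram_mask(vocab, banned_letter):
    """Token ids whose strings avoid `banned_letter`.

    Toy sketch of constrained decoding: the sampler may only pick
    tokens from this set, so the output is a lipogram by design.
    """
    return [i for i, tok in enumerate(vocab)
            if banned_letter not in tok.lower()]

def constrained_argmax(logits, allowed):
    """Pick the highest-scoring token among the allowed ones."""
    return max(allowed, key=lambda i: logits[i])

vocab = ["the", "a", "cat", "dog", "sat", "ran"]
allowed = lipogram_mask(vocab, "e")   # forbid the letter 'e'
# "the" (id 0) is excluded even though the model scores it highest
logits = [9.0, 1.0, 2.0, 3.0, 0.5, 0.2]
best = constrained_argmax(logits, allowed)
```

A real implementation applies the mask to the model's full token vocabulary at every decoding step.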

The latest release of Litelines supports batch processing with the Transformers library.
`pip install --upgrade litelines`

Here is a colab to get started:
https://huggingface.co/datasets/alonsosilva/litelines-notebooks/blob/main/Litelines_Batch_Processing_Multiple_Choice.ipynb

And here is the library documentation:
https://alonsosilvaallende.github.io/litelines/
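The idea behind constrained multiple-choice generation can be sketched in a few lines (this is an illustration of the technique, not litelines' actual API): at each step, only the tokens that keep the partial output a prefix of some choice are allowed. A character-level sketch:

```python
def allowed_next(choices, prefix):
    """Characters that extend `prefix` toward some complete choice.

    Character-level sketch; a real implementation masks the
    model's token vocabulary instead of single characters.
    """
    return sorted({c[len(prefix)] for c in choices
                   if c.startswith(prefix) and len(c) > len(prefix)})

def decode_choice(choices, pick):
    """Greedily build one of `choices`, asking `pick` to select
    among the currently allowed characters (a stand-in for
    sampling from masked logits)."""
    out = ""
    while out not in choices:
        out += pick(allowed_next(choices, out))
    return out
```

However the model "votes" at each step, the final string is guaranteed to be one of the choices, which is what makes batch multiple-choice processing reliable.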

My talk, "Processors for Language Models," at PyData Paris 2025 is now available. I discuss my personal project, Litelines, as well as common libraries used to transform unstructured data into structured data, such as Instructor, DSPy, BAML, Outlines, XGrammar, and Guidance.
Here is the link to the video:
https://www.youtube.com/watch?v=VP4IqFpdPec&t=545s
Lightning talks - session 1
