Top executives and engineers at Anthropic (the creators of Claude Code) have stated that they are no longer writing most of their code manually and that their AI now writes nearly all of it. They claim software engineers could be entirely extinct by the end of this year.
But when Claude Code's source code leaked, the blame was placed on human engineer error.
Yeah right. Lmao. C-suites never take any responsibility, and now some poor engineer may end up losing their job.
Pattern Matching and Rewriting (PMR) is a compiler optimization step that identifies predefined code idioms and replaces them with optimized code, offering performance gains across various applications. Recent research advances have led to tools that expedite PMR optimizations. One such technique, Source Matching and Rewriting (SMR), employs a user-centric, source-code-based approach, thus eliminating the need for specialized compiler intervention. However, achieving comprehensive pattern-matching coverage with SMR requires the meticulous specification of as many idiom variations as possible by the user, which is a laborious and error-prone task. This article introduces the Pattern Generation Language (PGL), a framework designed to simplify the automatic generation of pattern variations. PGL is a high-level language that enables users to specify program patterns that can be matched and rewritten by SMR. This article also proposes the Pattern Generation Compiler (PGC), an SMR-compatible tool that automates the creation of idiomatic variations and the synthesis of patterns written in PGL. While PGC primarily focuses on generating input patterns for SMR, its flexibility allows adaptation to other pattern-matching and rewriting systems. Experimental results show that PGL can identify 113% more patterns in Fortran and C code than manual pattern specification. Matched patterns have been replaced with calls to an optimized BLAS library, enhancing program performance. Experiments using a linear algebra benchmark and a set of real-world programs revealed significant speedups.
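To make the idea concrete, here is a deliberately simplified sketch of the kind of transformation PMR performs: recognizing a dot-product loop idiom in C-like source text and rewriting it as a call to an optimized BLAS routine (`cblas_ddot`). This toy matcher is not SMR, PGL, or PGC; it uses a single regular expression where real systems match over program representations and handle many idiom variations, which is exactly the coverage problem PGL targets.

```python
import re

# Toy illustration of pattern matching and rewriting (NOT the SMR/PGL
# implementation): recognize the idiom
#   for (int i = 0; i < n; i++) acc += x[i] * y[i];
# and rewrite it to an accumulating cblas_ddot call.
DOT_PATTERN = re.compile(
    r"for\s*\(\s*int\s+(?P<i>\w+)\s*=\s*0\s*;"      # for (int i = 0;
    r"\s*(?P=i)\s*<\s*(?P<n>\w+)\s*;"                # i < n;
    r"\s*(?P=i)\+\+\s*\)\s*"                         # i++)
    r"(?P<acc>\w+)\s*\+=\s*"                         # acc +=
    r"(?P<x>\w+)\[(?P=i)\]\s*\*\s*(?P<y>\w+)\[(?P=i)\]\s*;"  # x[i] * y[i];
)

def rewrite_dot(source: str) -> str:
    """Replace every matched dot-product loop with a BLAS call."""
    def repl(m: re.Match) -> str:
        return (f"{m.group('acc')} += cblas_ddot("
                f"{m.group('n')}, {m.group('x')}, 1, {m.group('y')}, 1);")
    return DOT_PATTERN.sub(repl, source)

code = "for (int i = 0; i < n; i++) s += a[i] * b[i];"
print(rewrite_dot(code))
# → s += cblas_ddot(n, a, 1, b, 1);
```

Even this tiny example shows why manually enumerating idiom variations is laborious: reversing the operands (`b[i] * a[i]`), using `size_t` for the index, or counting downward would all defeat the single pattern above, while PGC's automatic generation of variations is meant to cover such cases.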
