Professional Scrum Trainer @Scrumdotorg
Professional Kanban Trainer @prokanban
Formerly @Metrist_io, @CodingDojoDotCo, @digitalocean, @OriumInc
| www | https://betterteams.academy |
| X | https://X.com/DaveSabine |
| https://ca.linkedin.com/in/davidsabine |
Teams can earn stakeholders’ trust and improve shared understanding by explicitly declaring how AI-generated material is used, verified, and integrated.
For example, consider adding items like these to your team's Definition of Done:
* AI-generated code is clearly marked
* AI model outputs reach production code only when covered by human-managed acceptance tests
* Prompt + model version + date recorded for significant AI-generated sections
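The checklist above can be made mechanical. As a sketch (this marker format and the helper below are my own assumptions, not a standard), a team could mark AI-generated sections with a structured comment recording model, date, and prompt, then scan for those markers in review or CI:

```python
import re

# Assumed convention: a one-line structured comment marking AI-generated code,
# recording the model version, date, and originating prompt.
AI_MARKER = re.compile(
    r"#\s*AI-GENERATED\s*\|\s*model:\s*(?P<model>[^|]+)"
    r"\|\s*date:\s*(?P<date>[\d-]+)\s*\|\s*prompt:\s*(?P<prompt>.+)"
)

def find_ai_sections(source: str):
    """Return (model, date, prompt) tuples for each marked section in source."""
    return [
        (m.group("model").strip(), m.group("date").strip(), m.group("prompt").strip())
        for m in AI_MARKER.finditer(source)
    ]

# Hypothetical source file containing one marked section.
example = """
# AI-GENERATED | model: gpt-4o | date: 2025-01-15 | prompt: slug helper
def slugify(title):
    return title.lower().replace(" ", "-")
"""

print(find_ai_sections(example))
```

A script like this could run as a CI step, failing the build if any marked section lacks a recorded prompt, model, or date.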
If my clients would listen to just one piece of advice:
* teams are the most valuable performance unit.
* small teams adapt faster than large teams.
(that's 2 pieces of advice)
AI trends suggest that, as cognitive grunt work vanishes, small teams can make decisions rapidly and keep bureaucracy to a minimum.
Small teams (3 to 5 max) are more nimble and learn (together) faster.
Executives tend to think “more people = more productivity”.
But the bottleneck in knowledge work is always the pace of decision-making (i.e., bureaucracy).
More people equals more bureaucracy equals slower teams.
This article illustrates a classic problem with the common use of #velocity as a metric.
https://betterteams.fm/articles/2019/06/Velocity-Escape-this-Pitfall/