| Lab Website | https://greenelab.com/ |
| Twitter | https://twitter.com/GreeneScientist |
| DBMI @ CU | https://medschool.cuanschutz.edu/dbmi |
| GitHub | https://github.com/greenelab/ |
If you're watching TV in #Denver this month, you might catch me on a commercial for #CUAnschutz. 🙀 🙀 🙀 🙀
It includes highlights of work with #BigData and #AI/#ArtificialIntelligence
https://news.cuanschutz.edu/news-stories/cu-anschutz-returns-to-airwaves-with-possibilities-endless
The NIH is requesting input on postdoc career structure and funding (closing date: April 14).
https://grants.nih.gov/grants/guide/notice-files/NOT-OD-23-084.html
NIH Funding Opportunities and Notices in the NIH Guide for Grants and Contracts: Request for Information (RFI): Re-envisioning U.S. Postdoctoral Research Training and Career Progression within the Biomedical Research Enterprise NOT-OD-23-084. OD
Seems sensible to me right now, but with the speed of change, how long will this hold true? What properties or abilities would you expect an AI to have for it to qualify for authorship in a scholarly context?
Personal responsibility, consent and agency come to mind, but I have no idea how one would validate these properties.
I suspect scientific editors will be confronting this soon. 2/
Always makes me think of this XKCD, and somehow the answer turns out to actually, finally, be yes!