Is higher education broken? Not exactly.

What does it mean for higher education to work?

The problem with claiming (as I sometimes do) that higher education is broken and needs to be transformed is that it begs the question of what it means for higher education to work, and that depends what you think it is for.

From the name you’d expect that higher education might be for …well… education, taking that to mean learning and teaching, but it outgrew that single purpose a very long time ago. Yes, learning & teaching still loom large, but credentialing is at least as significant (often more so) and, at least for some, so are research or various forms of service. But, depending on your perspective and context, a university or college might also, or alternatively, be thought of quite differently as, for example:

  • a driver of peace or prosperity in a society;
  • a creator of knowledge in the world;
  • a support for local economies;
  • training for industry;
  • a market for contract cheating;
  • a home for sports teams;
  • a sharer and preserver of cultural artifacts;
  • an incubator for the performing arts;
  • a means to get a better job;
  • a medical facility;
  • a production line for professors;
  • an enabler of social mobility;
  • a profit-/surplus-making business;
  • a political pawn;
  • a selection filter for smart people;
  • and so on, and on, and on.

You might reasonably object that you could take any one of these away apart from the teaching role and you would still be left with a recognizable educational institution and, indeed, some are possible only because of the teaching role. However, to some people, somewhere, some time, every one of those roles is the role that matters most, and might be a target for transformation. Like every instantiated technology, a university or college is an assembly. In fact it is a huge assembly. It is part of and contains countless other assemblies, and is thoroughly, deeply entangled with a host of other systems and subsystems on which it depends and that depend on it.  Everyone within it or interacting with it perceives it from a different perspective, in different ways at different times, working together or independently as mutually affective coparticipants to do whatever it is that, from each of those different perspectives, it does. In many ways, as a whole, it thus resembles an ecosystem and, like an ecosystem, each individual part can be perceived as having a goal and a relationship with other parts, and with the whole, but the whole itself does not. I think this is probably a feature of institutions in general, and may be what distinguishes them most clearly from simple organizations and businesses.

So what?

As long as the distinct roles, from each individual’s perspective, do their jobs, this is not a problem. If you are interested in, say, getting an education then you can largely ignore everything else an educational institution does and judge it solely by whether it teaches, notwithstanding the huge complexities of knowing what that even means, let alone with what proxies to measure it.

Unfortunately, a fair number of these roles deeply and negatively impact others. For me, by far the biggest problem is that the credentialing role is fundamentally at odds with the teaching role, due to the profound negative impact of extrinsic motivation on intrinsic motivation (I’ve written a lot about this, e.g. in these slides and in How Education Works so I won’t repeat the arguments again here). Combined with the side effects of trying to teach everyone the same thing at the same time, this results in the vast majority of our most cherished teaching and assessment methods being nothing more than ways of restoring or replacing the intrinsic motivation sucked out of students by how we teach and assess.  Other big conflicts matter too, though. For instance, when patents or copyrights are at stake, the business role battles with the underlying goal of increasing knowledge in the world, turning non-rival knowledge into a rivalrous commodity; ditto for the insanity that is journal publishing, where the public pays us to provide our editorial and reviewing services for papers on research that they also pay for, then the journals sell the papers back to us or charge us for sharing them, making obscene profits for an increasingly trivial service; similarly, the research role, that should in principle exist in a virtuous circle with teaching, is too often in competition with it and, in many institutions, teaching loses; the filtering role that rewards most universities (not mine) for excluding as many students as possible is in direct conflict with a mission to bring higher forms of learning to as many people as possible, and undermines the incentive to teach well because those carefully selected students will learn pretty well regardless of how well they are taught. There are countless other examples like this: public vs private good, excellence vs equity, local vs global responsibilities, supporting student diversity vs economic stability, and so on. 
Fixing one role invariably impacts others, usually negatively. These are structural issues that will persist as long as higher education continues to play those roles. The solutions to the problems of one role become the problems that other roles have to solve and, to a large extent, that is unavoidable.

At a micro scale the problem is even more ubiquitous. Everyone is solving problems in their own local sphere, creating problems for others in their own local spheres, whose solutions cause problems for others, and so it goes around and comes around. Every time we create a solution to one problem we give rise to other problems elsewhere. To give a few trivial and commonplace examples of issues I am trying to deal with right now:

  • I recently learned of two courses that could not be launched because tutors for the single course that they replace would have to be rehired and lose benefits gained for long service. In terms of priorities and primary roles, this implies that offering stable employment to staff matters more than teaching. That’s not the intent of any particular individual involved in the process but it’s how the system works, thanks to union agreements that solved different problems a long time ago.
  • For nearly 50 years now, our undergraduate students have had 6 months to complete a course, unless they are grant-funded (an important minority), in which case they only get 4 months because funding bodies assume universities always teach in semesters of a standardized length and demand results within that timeframe. And so we are in the process of making all contracts 4 months, knowing full well that students will be more pressured, cheating will increase, and pass rates will go down, but at least it will be fairer.
  • When we commit structures to code they are supposed to model the system but, having done so, they normally dictate it. For instance, my need for all of our faculty to be able to see the teaching sites of all of our courses (a critical part of my strategy to improve our teaching) is under threat because of the cascading roles, baked into the implementation of our LMS, that determine who can do what. These make it difficult and long-winded for our editors to edit our courses, because the roles have to be modified every time they use the impersonation function that is necessary for viewing courses as students will experience them. The obvious solution is to fix those roles, not to remove access from those who need it, but the editors lack such rights, and those who have them support other faculties with different and conflicting needs.
  • We have recently shifted to a centralized front-line support system, explicitly to deal with common difficulties students have in navigating and using our administrative systems and websites. The more obvious solution would be to make those systems work better in the first place. Instead, we employ vast numbers of people whose job it is to patch over gaps, errors, and poor design decisions made elsewhere. This reduces the pressure to fix the systems, so the need persists, except that now we have a whole load of people with jobs that would be in jeopardy if we fixed them. We employ many people whose job is to fix problems caused by issues with how others do theirs: people dedicated to preventing exam cheating, say, or to accommodating disabilities, or the aforementioned editors. There’s a fine and indistinct line between dividing a workload so that people with the right expertise do the right things, and creating a workload because people with the wrong expertise have done the wrong things.

I could easily write pages of similar examples and, if you work for a university or college, I’m sure you could too: the specific problems may be peculiar to Athabasca University, but the underlying dynamics are ubiquitous in higher education and, for that matter, most large organizations. And I’m sure that you can think of ways to deal with any of them but that’s exactly the point: fixing them is what we all do, all the time, every day, on a grand scale, and educators have been doing so for nearly 1000 years so the number of fixes to fixes to fixes to fixes is vast.  For almost any role or activity, no matter how small or how large, there is probably another role and set of activities on which it impinges, directly or otherwise.

The big problem is that, on the whole, we create counter-technologies to fix the worst of the problems and that’s a policy of despair, every counter-technology creating new problems for further counter-technologies to solve. In fact, a large part of the reason for all those many roles is precisely because counter-technologies were created to solve what probably seemed like pressing problems and, in an inevitable Faustian bargain, created the problems we now need to address. Every one of these counter-technologies increases the robustness of the whole, increasing the interdependencies, making the patterns more and more indelible so, even if we do occasionally come up with something truly different, the overall system holds together as a massive web of mutually interdependent pieces more strongly than ever.

The more things change…

For all the many structural problems, it would be a synecdochic fallacy of mistaking the part for the whole to describe higher education as broken. Sure, thanks to all those competing roles (especially credentialing) it is not particularly great at education (at least), so transformation is devoutly to be wished for but, by the most basic and essential criterion of all – survival – it is rampantly successful. In fact, it is exactly those competing and complementary roles that have sustained it, because a diverse ecosystem is a resilient ecosystem. The webs of dependencies are mutually sustaining even, up to a well-evolved point, when one is antagonistic to another.

For nearly a millennium the university and its brethren have not only survived but have now spread to almost every populated region of the world, and they continue to expand. Within my lifetime, in my country of birth, enrolments in higher education have risen from around 5% of the population to around 50%. To achieve such success, it has had to evolve: the invention of written exams, say, in the 18th Century, Humboldtian models that justified and embedded research, the adoption of flexible curricula, or the admittance of women in the 19th Century, were all huge changes. It has lost the trivium and quadrivium along the way, and diversified enormously in the range of subjects taught. The technological systems are way more advanced and varied than they were.  There are regional variations, and a few speciated niches (colleges, open universities, distance education, etc). Administratively, a lot has changed, from recruitment and enrolment to the roles of professional bodies, industry, and governments.  It is constantly evolving, for sure.

But.

The main technological features that universities acquired in the first century of their existence are still fully present, in virtually unaltered form.  Courses, classes, terms/semesters, professors, credentials, methods of teaching, organizational structures, methods of assessment, and plenty more are visibly the same species as their mediaeval forebears, and remain the central motifs of virtually all formal higher education. We may use a few more polyesters and zippers, and the gowns now come in women’s sizes but, at least once a year, many of us even dress the same, a behaviour shared with only a few other institutions like (in some countries) the legal profession or the church. On the subject of which, most universities continue to have roles like dean, chancellor, rector, provost, registrar, bursar and even the odd beadle (what even is that?) that not only reveal their ecclesiastic origins but also how little the basic entities in the system have since evolved.

If the purpose of higher education were simply to educate then we would expect it to work a lot better and to see a whole load more variation in how it is done, especially given the wide range of technologies that can now be used to overcome the problems caused by those features, but we don’t. It’s not just the purpose that survives: it’s the form. We can radically alter a great many processes but changing even one or two of the central motifs themselves – which, to me, is what “transformation” must entail – is hardly ever on the table.

Adaptation, not transformation

If the institution had a clear overriding goal then we could re-engineer it to work differently, but this is not an engineering problem: it’s an evolutionary problem. We build with what we have on what we have, a process of tinkering or bricolage that is anything but engineered. It is, though, not natural but technological evolution. In natural ecosystems massive disruption can occur when populations become isolated, or when the environment radically changes. Technological evolution emerges through recombination and assembly of parts, not genes, and the technologies of higher education have evolved to be globally connected and massively intertwingled with nearly every other part of nearly every society, making isolation virtually impossible. In nature, ecosystems can be disrupted by invasive species, parasites, etc, but our educational systems – technologies one and all – have evolved to be great at absorbing stuff rather than competing with it, so even that path is fraught. Even something as apparently disruptive as generative AI, which is impacting almost every aspect of the system and all the systems with which it interacts, is currently reinforcing objectives-driven models of teaching, cultural individualism (at least in Western countries), and highly traditionalist responses to fears of cheating, such as written and oral exams, at least as much as it is inspiring change.

For those of us who care about the education role, there are plenty of ways we could actually transform it if we had the power to make the necessary changes. Decoupling learning and assessment would be a good start. Not just separating teaching and tests: that would just result in teaching to the test, as we see now. The decoupling would have to be asymmetrical, so the assessed tasks would demand synthesis of many taught things. Or we could get rid of classes and courses: to a large extent, this is what (despite the name) many Connectivist MOOCs have attempted to do, and it is also the pattern behind things like the Khan Academy or Contact North’s AI Tutor Pro, not to mention traditional PhDs (at least in some countries), apprenticeship models of learning, most instructional videos on sites like YouTube, or Stack Exchange or Quora, and the bulk of student projects (like MOOCs, labelled as courses but lacking most if not all of their traditional trappings). Or we could keep courses but drop the schedules and time limits. If nothing else, imagining how things might work if we messed with those central motifs is a good way to stimulate creative use of what we have. If done at scale, such things could make a huge impact on our educational systems.

But they probably won’t.

The problem always comes back to the fact that, though (collectively) we could change the fitness landscape itself, making survival dependent on whatever we think matters most, we are unlikely to agree what does matter most. For some, better higher education would be measured in credentials, or explicit learning outcomes, or better fits with industry needs. Others would like it to advance their personal careers or status, or to do research without a profit motive. For me, improvements would be in far harder-to-measure aspects like building safer, kinder, smarter, more creative societies. Unfortunately (for me and others who feel that way), thanks to pace layering, the ones who could shape the fitness landscape the most are governments, and they are the least likely to do so. Governments tend to prefer things that are easier to measure, quicker to show results, and most likely to keep voters voting for them and sponsors (especially from industry) sponsoring them. Increasingly, institutional mandates are measured by industry impact, which erodes some traditional aspects of higher education but reinforces the big ones, like the measurable, assessed, outcome-driven course, with its classes, its schedules, its semesters, its textbooks, its assessments, its teachers, and so on. It doesn’t have to, in principle, but, in practice, those are not the things we adapt. If radical transformation ever does occur it will therefore most likely be the result of something so disruptive that the loss of higher education would be a minor concern: devastation caused by climate change, or nuclear war, or being hit by a large asteroid, for instance. And, to be honest, I’m not even sure that would be enough.

The limited chances of success should not discourage us from tinkering, all the time, whenever we can. Evolution must happen because the world that higher education inhabits evolves so, if this is the system we are stuck with, we should make it do what we want it to do as best we can. There are usually ways to reduce dependencies, techniques to decouple antagonistic roles, strategies of simplification, approaches to parcellating the landscape (skunkworks, etc), and values-based principles for prioritizing activities that can make it more likely that changes will be successful and persistent. However, if we have learned anything from biological studies over the past many decades, it is that you shouldn’t mess with an ecosystem. Whatever we do will put it out of balance, and self-organizing dynamics will ensure either that the balance is restored or that the system spirals out of control and breaks altogether. Either way, it will never be exactly what we planned and, on average, it will tend eventually to keep things much the same as they are, making most of it worse as it restabilizes itself.

Knowing that, though, can be useful. If every change will result in changes elsewhere, it is not enough to monitor the direct impact of an intervention: rather, we need to figure out ways of harvesting the outcomes across the system and/or, as best we are able, to model them in advance. No one has access to more than a fraction of the information needed, not least because a significant amount of it is tacit, embedded in the culture and practices of people and communities within the system. However, we can try to intentionally capture it, to tell stories, to share experiences and understandings across all those many niches. We can do what we can to make the invisible visible. We can talk. And we have technologies to help, inasmuch as we can train AIs to know our stories, ask them about the impacts of things we do, and have them point out impacts that would be difficult if not impossible for any person to spot. And that, I think, is the only viable path we have. The problems we generally have to deal with are a direct result of local thinking: solutions in one space that cause problems in another. The less locally we think about such things, the greater the chances that we will avoid unwanted impacts elsewhere or, equally good, that we will cause wanted impacts. To achieve that demands openness and dialogue, channels through which we can share and communicate, and some way of compressing, parsing, and relaying all that so that sharing and communication is not the only thing we ever do. This is not an impossibly tall order but it certainly isn’t easy.

#AI #change #complexAdaptiveSystem #design #ecosystem #education #evolution #higherEducation #learning #motivation #paceLayering #purpose #synecdoche #synecdochicFallacy #transformation

Generative vs Degenerative AI (my ICEEL 2025 keynote slides)

I gave my second keynote of the week last week (in person!) at the excellent ICEEL conference in Tokyo.  Here are the slides: Generative AI vs degenerative AI: steps towards the constructive transformation of education in the digital age. The conference theme was “AI-Powered Learning: Transforming Education in the Digital Age”,  so this is roughly what I talked about…

Transformation in (especially higher) education is quite difficult to achieve.  There is gradual evolution, for sure, and the occasional innovation, but the basic themes, motifs, and patterns – the stuff universities do and the ways they do it – have barely changed in nigh-on a millennium. A mediaeval professor or student would likely feel right at home in most modern institutions, now and then right down to the clothing. There are lots of path dependencies that have led to this, but a big part of the reason is down to the multiple subsystems that have evolved within education, and the vast number of supersystems in which education participates. Anything new has to thrive in an ecosystem along with countless other parts that have co-evolved together over the last thousand years. There aren’t a lot of new niches, the incumbents are very well established, and they are very deeply enmeshed.

There are several reasons that things may be different now that generative AI has joined the mix. Firstly, generative AIs are genuinely different – not tools but cognitive Santa Claus machines, a bit like appliances, a bit like partners, capable of becoming but not really the same as anything else we’ve ever created. Let’s call them metatools, manifestations of our collective intelligence and generators of it. One consequence of this is that they are really good at doing what humans can do, including teaching, and students are turning to them in droves because they already teach the explicit stuff (the measurable skills and knowledge we tend to assess, as opposed to the values, attitudes, motivational and socially connected stuff that we rarely even notice) better than most human teachers. Secondly, genAI has been highly disruptive to traditional assessment approaches: change (not necessarily positive change) must happen. Thirdly, our cognition itself is changed by this new kind of technology for better or worse, creating a hybrid intelligence we are only beginning to understand but that cannot be ignored for long without rendering education irrelevant. Finally genAI really is changing everything everywhere all at once: everyone needs to adapt to it, across the globe and at every scale, ecosystem-wide.

There are huge risks that it can (and plentiful evidence that it already does) reinforce the worst of the worst of education by simply replacing what we already do with something that hardens it further, that does the bad things more efficiently, and more pervasively, that revives obscene forms of assessment and archaic teaching practices, but without any of the saving graces and intricacies that make educational systems work despite their apparent dysfunctionality. This is the most likely outcome, sadly. If we follow this path, it ends in model collapse for not just LLMs but for human cognition. However, just perhaps, how we respond to it could change the way we teach in good if not excellent ways. It can do so as long as human teachers are able to focus on the tacit, the relational, the social, and the immeasurable aspects of what education does rather than the objectives-led, credential-driven, instrumentalist stuff that currently drives it and that genAI can replace very efficiently, reliably, and economically. In the past, the tacit came for free when we did the explicit thing because the explicit thing could not easily be achieved without it. When humans teach, no matter how terribly, they teach ways of being human. Now, if we want it to happen (and of course we do, because education is ultimately more about learning to be than learning to do), we need to pay considerably more deliberate attention to it.

The table below, copied from the slides, summarizes some of the ways we might productively divide the teaching role between humans and AIs:

Relationships
  • Human role (e.g.): Interacting, role modelling, expressing, reacting.
  • AI role (e.g.): Nurturing human relationships, discussion catalyzing/summarizing.

Values
  • Human role (e.g.): Establishing values through actions, discussion, and policy.
  • AI role (e.g.): Staying out of this as much as possible!

Information
  • Human role (e.g.): Helping learners to see the personal relevance, meaning, and value of what they are learning. Caring.
  • AI role (e.g.): Helping learners to acquire the information. Providing the information.

Feedback
  • Human role (e.g.): Discussing and planning, making salient, challenging. Caring.
  • AI role (e.g.): Analyzing objective strengths and weaknesses, helping with subgoals, offering support, explaining.

Credentialing
  • Human role (e.g.): Responsibility, qualitative evaluation.
  • AI role (e.g.): Tracking progress, identifying unprespecified outcomes, discussion with human teachers.

Organizing
  • Human role (e.g.): Goal setting, reacting, responding.
  • AI role (e.g.): Scheduling, adaptive delivery, supporting, reminding.

Ways of being
  • Human role (e.g.): Modelling, responding, interacting, reflecting.
  • AI role (e.g.): Staying out of this as much as possible!

I don’t think this is a particularly tall order but it does demand a major shift in culture, process, design, and attitude. Achieving it from scratch would be relatively simple; making it happen within existing institutions without breaking them is going to be hard, and the transition is going to be complex and painful. Failing to do so, though, doesn’t bear thinking about.

Abstract

In all of its nearly 1000-year history, university education has never truly been transformed. Rather, the institution has gradually evolved in incremental steps, each step building on but almost never eliminating the last. As a result, a mediaeval professor dropped into a modern university would still find plenty that was familiar, including courses, semesters, assessments, methods of teaching and perhaps, once or twice a year, scholars dressed like him. Even such hugely disruptive innovations as the printing press or the Internet have not transformed so much as reinforced and amplified what institutions have always done. What chance, then, does generative AI have of achieving transformation, and what would such transformation look like?
In this keynote I will discuss some of the ways that, perhaps, it really is different this time: for instance, that generative AIs are the first technologies ever invented that can themselves invent new technologies; that the unprecedented rate and breadth of adoption is sufficient to disrupt stabilizing structures at every scale; that their disruption to credentialing roles may push the system past a tipping point; and that, as cognitive Santa Claus machines, they are bringing sweeping changes to our individual and collective cognition, whether we like it or not, that education cannot help but accommodate. However, complex path dependencies make it at least as likely that AI will reinforce the existing patterns of higher education as disrupt them. Already, a surge in regressive throwbacks like oral and written exams is leading us to double down on what ought to be transformed, while rendering vestigial the creative, relational, and tacit aspects of our institutions that never should be. Together, we will explore ways to avoid this fate and to bring about constructive transformation at every layer, from the individual learner to the institution itself.

#complexAdaptiveSystem #education #evolution #genai #iceel #institutions #keynote #learning #presentation #teaching #transformation


Here are the slides from my presentation at AU’s Lunch ‘n’ Learn session today. The presentation itself took 20 minutes and was followed by a wonderfully lively and thoughtful conversation for another 40 minutes, though it was only scheduled for half an hour. Thanks to all who attended for a very enjoyable discussion!

The arguments made in this were mostly derived from my recent paper on the subject (Dron, J. (2023). The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education. Digital, 3(4), 319–335. https://doi.org/10.3390/digital3040020) but, despite the title, my point was not to reject the use of generative AIs at all. The central message I was hoping to get across was a simpler and more important one: to encourage attendees to think about what education is for, and what we would like it to be. As the slides suggest, I believe that is only partially to do with the objectives and outcomes we set out to achieve, that it is nothing much at all to do with the products of the system such as grades and credentials, and that a focus on those mechanical aspects of the system often creates obstacles to achieving it. Beyond those easily measured things, education is about the values, beliefs, attitudes, relationships, and development of humans and their societies. It’s about ways of being, not just capacity to do stuff. It’s about developing humans, not (just) developing skills. My hope is that the disruptions caused by generative AIs are encouraging us to think like the Amish, and to place greater value on the things we cannot measure. These are good conversations to have.

https://jondron.ca/presentation-generative-ais-in-learning-teaching-the-case-against/

#AI #complexAdaptiveSystem #education #GAI #genAI #generativeAI #learning #teaching #technology #values

A month or two ago I shared a “warts-and-all” preprint of this paper on the risks of educational uses of generative AIs. The revised, open-access published version, The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education is now available in the Journal Digital.

The process has been a little fraught. Two reviewers really liked the paper and suggested minimal but worthwhile changes. One quite liked it but had a few reasonable suggestions for improvements that mostly helped to make the paper better. The fourth, though, was bothersome in many ways, and clearly wanted me to write a completely different paper altogether. Despite this, I did most of what they asked, even though some of the changes, in my opinion, made the paper a bit worse. However, I drew the line at the point that they demanded (without giving any reason) that I should refer to 8 very mediocre, forgettable, cookie cutter computer science papers which, on closer inspection, had all clearly been written by the reviewer or their team. The big problem I had with this was not so much the poor quality of the papers, nor even the blatant nepotism/self-promotion of the demand, but the fact that none were in any conceivable way relevant to mine, apart from being about AI: they were about algorithm-tweaking, mostly in the context of traffic movements in cities.  It was as ridiculous as a reviewer of a work on Elizabethan literature requiring the author to refer to papers on slightly more efficient manufacturing processes for staples. Though it is normal and acceptable for reviewers to suggest reference to their own papers when it would clearly lead to improvements, this was an utterly shameless abuse of power of a scale and kind that I have never seen before. I politely refused, making it clear that I was on to their game but not directly calling them out on it.

In retrospect, I slightly regret not calling them out. For a grizzled old researcher like me, who could probably find another publisher without too much hassle, it doesn’t matter much if I upset a reviewer enough to make them reject my paper. For early-career researchers stuck in the publish-or-perish cycle, however, saying no would be very much harder. This kind of behaviour harms the author, the publisher, the reader, and the collective intelligence of the human race. That the reviewer was so desperate to gain a few more citations for their team, with so little regard for quality or relevance, reflects poorly on them and their institution but is, more than that, a damning indictment of a broken system of academic publishing, and of the reward systems driving academic promotion and recognition. I do blame the reviewer, but I understand the pressures they might have been under to do such a blatantly immoral thing.

As it happens, my paper has more than a thing or two to say about this kind of McNamara phenomenon, whereby the means used to measure success in a system come to define, and so warp, its purpose, because it is among the main reasons that generative AIs pose such a threat. It is easy to forget that the ways we establish goals and measure success in educational systems are no more than signals of a much more complex phenomenon with far more expansive goals: goals concerned with helping humans to be, individually and in their cultures and societies, as much as with helping them to do particular things. Generative AIs are great at both generating and displaying those signals – better than most humans, in many cases – but that is all they do: the signals signify nothing. For well-defined tasks with well-defined goals they provide many opportunities for cost-saving, quality improvement, and efficiency and, in many occupations, that can be really useful. If you want to quickly generate some high-quality advertising copy, the intent of which is to sell a product, then it makes good sense to use a generative AI. Not so much in education, though, where it is too easy to forget that learning objectives, learning outcomes, grades, credentials, and so on are not the purposes of learning but just means for, and signals of, achieving them.
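The McNamara point can be made concrete with a toy simulation (my own illustration, not from the paper; the functions `study` and `delegate_to_ai` and their numbers are entirely hypothetical): when the grade is treated as the goal rather than as a signal of learning, a path that produces a high grade with zero understanding looks identical to, or better than, one that produces genuine learning.

```python
import random

def study(effort):
    """Genuine learning: understanding and the grade signal rise together."""
    understanding = effort * 0.8
    grade = min(100, understanding + random.uniform(-5, 5))  # noisy signal
    return understanding, grade

def delegate_to_ai():
    """Optimizing the signal directly: a high grade, no understanding."""
    return 0.0, 95.0

random.seed(1)
u1, g1 = study(effort=100)
u2, g2 = delegate_to_ai()
print(f"studying:   understanding={u1:.0f}, grade={g1:.0f}")
print(f"delegating: understanding={u2:.0f}, grade={g2:.0f}")
```

Judged by the grade alone, the two paths are indistinguishable (delegation may even score higher); judged by what the grade was meant to signal, they are opposites. That gap is the McNamara phenomenon in miniature.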

Though there are other big reasons to be very concerned about using generative AIs in education, some of which I explore in the paper, this particular problem lies not so much with the AIs themselves as with the technological systems into which they are, piecemeal, inserted. It is a problem of thinking locally rather than globally; of focusing on one part of the technology assembly without acknowledging its role in the whole. Generative AIs could, right now and with little assistance, perform almost every measurable task in an educational system, from (for students) producing essays and exam answers to (for teachers) writing activities and assignments or acting as personal tutors. They could do so better than most people. If that is all that matters to us then we might as well remove the teachers and the students from the system because, quite frankly, they only get in the way. Absurd as it is, this is more or less exactly the end game that will occur if we don’t rethink (or double down on existing rethinking of) how education should work and what it is for, beyond the signals that we usually use to evaluate success or intent. Just thinking of ways to use generative AIs to improve our teaching is well-meaning, but it risks destroying the woods by focusing on the trees. We really need to step back and think about why we bother in the first place.

For more on this, and for my tentative partial solutions to these and other related problems, do read the paper!

Abstract and citation

This paper analyzes the ways that the widespread use of generative AIs (GAIs) in education and, more broadly, in contributing to and reflecting the collective intelligence of our species, can and will change us. Methodologically, the paper applies a theoretical model and grounded argument to present a case that GAIs are different in kind from all previous technologies. The model extends Brian Arthur’s insights into the nature of technologies as the orchestration of phenomena to our use by explaining the nature of humans’ participation in their enactment, whether as part of the orchestration (hard technique, where our roles must be performed correctly) or as orchestrators of phenomena (soft technique, performed creatively or idiosyncratically). Education may be seen as a technological process for developing these soft and hard techniques in humans to participate in the technologies, and thus the collective intelligence, of our cultures. Unlike all earlier technologies, by embodying that collective intelligence themselves, GAIs can closely emulate and implement not only the hard technique but also the soft that, until now, was humanity’s sole domain; the very things that technologies enabled us to do can now be done by the technologies themselves. Because they replace things that learners have to do in order to learn and that teachers must do in order to teach, the consequences for what, how, and even whether learning occurs are profound. The paper explores some of these consequences and concludes with theoretically informed approaches that may help us to avert some dangers while benefiting from the strengths of generative AIs. Its distinctive contributions include a novel means of understanding the distinctive differences between GAIs and all other technologies, a characterization of the nature of generative AIs as collectives (forms of collective intelligence), reasons to avoid the use of GAIs to replace teachers, and a theoretically grounded framework to guide adoption of generative AIs in education.

Dron, J. (2023). The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education. Digital, 3(4), 319–335. https://doi.org/10.3390/digital3040020

Originally posted at: https://landing.athabascau.ca/bookmarks/view/21104429/published-in-digital-the-human-nature-of-generative-ais-and-the-technological-nature-of-humanity-implications-for-education

