How Will AI Change The Teaching Model In Business Schools?


AI raises hard questions about our teaching model. I suggest a few solutions.

At a conference, a colleague said to me, “90% of what is taught in the MBA at my business school (a top one at that) can be sourced from AI.” So, how should our roles evolve in the coming years?

The problem:

Our educational model is predicated on an expert lecturing to a class of attentive disciples, and we rely on standardized, repeated exams. As a junior accounting professor, I was encouraged to take a senior faculty member’s teaching notes and simply deliver the material, rely on older exams or some variant thereof, and grade with a standard key. In essence, I was to minimize the time spent on teaching so that I could work on research and get tenure. To be fair, my senior professors were watching out for me and were trying to put me on the shortest path to tenure.

I suspect AI will kill, or at least severely challenge, this teaching model. My lecture notes, whether written out on the board or later put into slides, contained condensed takeaways on the detailed rules for accounting for transactions. The usual intro and intermediate accounting classes cover the following topics: basic principles behind financial statements, revenue, cash and cash equivalents, inventory, accounts receivable, property, plant and equipment (including depreciation and depletion), intangible assets, leases, current liabilities and contingencies, long-term liabilities (including leases, pension obligations, and post-retirement obligations), stockholders’ equity, calculating EPS (earnings per share), investments, income taxes, and accounting changes and errors.

ChatGPT can generate excellent summaries and talking points on each of these topics of the kind a lecturer would use in a classroom. So, my lecture notes from my undergraduate teaching days are, in effect, public and available to anyone interested. ChatGPT can also generate notes for a student pitched at, say, a high school level and produce quizzes to help the student test her understanding of the material. One must wonder about the economics of textbook publishing in the post-ChatGPT world.

Of course, you could argue that learning is a social process, as we discovered during Covid. Students were clamoring for in-person classes, perhaps more to hang out with one another than with us faculty. You could also argue that there is still some value in an instructor vocalizing these ideas in class. A colleague points out the difference between push and pull models of content: “even if videos are available, do students have the discipline to view them before class? I don’t think so. I even tried 5-minute TikTok-like videos, to no avail.”

Perhaps a classroom experience provides social pressure for students to show up in person and set aside time to listen to and think about a topic. But by how much? Half my students are fidgeting with their phones and iPads given that accounting is inherently dry and difficult.

A colleague I showed this to observes, “people pay $300+ to go see a musical or a play when they could watch on TV for free, so that suggests there is something about ‘being there’ that matters, and I believe there is a literature in economics or education that measures peer effects, with some papers showing a large effect.” That’s fair, but Taylor Swift can fill concert arenas. How do the other musicians compete? Of course, repeating standardized take-home exams is a non-starter in the days of ChatGPT. So, what does one do?

Is the Oxbridge tutorial system the answer?

One answer to AI’s disruption of the field is to increase the human element in education. Adrian Wooldridge suggests that institutions adopt the tutorial system of Oxford and Cambridge. The tutorial, as per Oxford’s website, is a “weekly meeting (that) all students have with a tutor and usually one or two other students, at which you are expected to talk in depth about your ideas and opinions relating to that week’s reading or problems.” I suspect this model is a non-starter for American business schools. A business school that recruits 2,000-odd students would then have to hire hundreds of tutors. The costs are potentially prohibitive.

The other, more scalable, alternative is to try to recreate the tutorial system via chatbots. Instead of putting a human tutor in class, can we design a chatbot that gets close to what Oxford hopes students get from tutors: “the freedom to push yourself academically and direct your own learning. Equally, tutors are able to monitor your progress closely and help you with any problems with your work at a very early stage.”

On a related point, a colleague observes, “maybe AI will facilitate a way of personalizing the topics to a faculty member by adapting to a style or example that fits them.” That is, a standard HBS case could be adapted by a faculty member, using AI tools such as NotebookLM, to suit her style of teaching. The north star in this regard might be Bloom’s two-sigma finding, whereby the average student tutored one-to-one using mastery learning techniques performed two standard deviations better than students educated in a classroom environment. Can we build chatbots that realize the potential of the Bloom two-sigma standard in a classroom of 60-70 students?

Even here, caution is warranted. A colleague at another top school points out, “we have over 30 tutors available for help, but students use them only before exams. What does this tell us?”

How should we think about AI in teaching?

I suggest that we consider content we deliver in three categories:

· Can the student understand what AI is saying (intro classes)?

· Can the student critique what AI is saying (early electives)? and

· Can the student leverage AI to be a force multiplier in extending her work (advanced electives)?

Intro classes:

The “core” classes that we teach in accounting, finance, economics, marketing, operations, strategy and leadership will most likely continue as before. Students need to be exposed to the basic concepts in each area. Just because the rules for accounting for revenue, costs, assets and liabilities are available on the web or from ChatGPT does not mean students can skip the basic principles. In fact, the rules on how to account for transactions were always available in paper manuals in libraries, even before the internet was widespread. Interestingly, ChatGPT is very good at addressing basic questions in accounting. In fact, ChatGPT and Claude Opus convincingly passed the CPA exam, as per a recent paper.

The CPA exam tends to focus on testing a student’s memory of rules prescribed by GAAP and auditing standards. Unlike many, I don’t think memorizing rules is a complete waste of time. Memorizing multiplication tables made me better at math, not worse. It takes a certain amount of repetition to get the ideas steeped in one’s brain.

Having said that, assignments and testing in intro classes might start building in ideas beyond a mere exposition of accounting rules. Perhaps the classroom discussion can be elevated to the conceptual foundations behind why the income statement and balance sheet are organized the way they are. Conversations can also focus on the difficulty of getting accounting income to measure the economic value added by the firm. That is, teach rules in perhaps the first half of the class and devote the rest to relatively ambiguous situations where the measurement of income and assets is a challenge, or to why alternative ways of accounting and reporting were not pursued by the firm.

A colleague suggests, “Coursera already teaches business concepts, including accounting. If all we do is intro-level teaching of basic concepts, then our value-add will be minimal and we’re easily replaceable, and deservedly so. But we’ve seen from prior versions of MOOCs (Massive Open Online Courses) that classroom teaching has its place.”

Another colleague observes, “I believe in emphasizing a way of thinking rather than rules, and in challenging students to think about alternative outcomes, even in the core. AI makes this potentially easier to do and, hopefully, to guide the students. Why would anyone teach debits and credits and t-accounts in a world of AI?”

A co-author suggests, “our classroom teaching has to be elevated to focus on things we need the classroom for – judgement, decision making, understanding different perspectives, more nuanced technical thinking, how to make reasonable assumptions, etc. I think this point will be the key for the next generation of teachers. I’ve been trying to have students do some basic analysis using ChatGPT and come to class ready with materials we would have spent the first 20 minutes on. That way, we can make more progress in class.”

Another commenter tempers our enthusiasm somewhat: “we have recently adopted the ‘flipped’ classroom approach in the daytime core accounting class, but with mixed success. Downside: students do not prepare at all, as they expect to be taught with no prior reading done.”

Exams and assignments:

It has become hard to write an exam for an intro class where we can get away with questions such as “what is the debt-to-equity ratio for Netflix?” The only short-run answer might be to go back to in-person paper tests for intro classes. This is not a return to Luddism but a pragmatic tactic to ensure that beginners understand the key basic ideas in a field.

Some might argue this is an integrity issue. “With AI, if the student has not learned the material, it is less likely that they will pass an in-person paper exam,” as one faculty member suggested. Another colleague suggests, “I would ask students a couple of unprepared questions in class that they need to answer without consulting ChatGPT. And I will set it up as a quiz for everyone to answer, so that nobody can hide. What we really need to do is to tighten up our testing. If we do that, it will keep students honest and make them invest in understanding and recalling the material instead of just submitting something from ChatGPT without a second look. I really think that we should move away from online exams with access to the internet.”

The testing problem extends to take-home assignments as well. I asked ChatGPT4 for answers to the famous Harvard Business School accounting case titled “Kansas City Zephyrs Baseball Club.” Although ChatGPT4 does not give out answers, citing intellectual property concerns, it was helpful enough to provide detailed calculations, key learning points and frameworks that the student can use to analyze the case.

This unfortunately eliminates the process of discovery that students of earlier generations had to struggle through to arrive at a reasonable answer. On top of that, software to detect AI-generated responses from students is not very effective, in my experience. Incidentally, one must wonder about the viability of Harvard Business School (HBS) Publishing’s current business model, which is built on creating cases and solutions that are re-used multiple times across schools all over the world. A colleague thoughtfully observes, “most case studies, and all public-source ones, will have their content available publicly. So, cases based just on public information will lose relevance.”

Early electives

A colleague suggests, “for electives where we are trying to simulate real-world behavior, I don’t think pen-and-paper tests are a reasonable solution. We should allow students to use ChatGPT and think about what we want to teach them given the existence of these tools. At that point, it is much more about understanding the unit economics and value drivers of a business, not teaching them how to conduct present value or CAPM (Capital Asset Pricing Model) calculations for a DCF (Discounted Cash Flow) model. ChatGPT’s o1 models can even handle detailed algebraic steps for valuation math, such as present value calculations, and can independently conduct standard financial statement analysis calculations (e.g., ratio analysis). Once o1 accepts Excel files as an input, it is plausible that the model could complete entire valuation exercises on its own.”
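For readers who want to see what that valuation math looks like in practice, here is a minimal sketch of a CAPM cost of equity feeding a simple present-value (DCF) calculation. All inputs (risk-free rate, beta, market premium, cash flows, terminal growth rate) are illustrative assumptions, not figures for any real company.

```python
# Minimal sketch of the CAPM + present-value arithmetic referenced above.
# All inputs are illustrative assumptions, not figures for any real company.

def capm_cost_of_equity(risk_free: float, beta: float, market_premium: float) -> float:
    """CAPM: required return = risk-free rate + beta * equity risk premium."""
    return risk_free + beta * market_premium

def dcf_value(cash_flows: list[float], discount_rate: float, terminal_growth: float) -> float:
    """Discount explicit forecast cash flows, then add a Gordon-growth terminal value."""
    pv_explicit = sum(cf / (1 + discount_rate) ** t
                      for t, cf in enumerate(cash_flows, start=1))
    terminal_cf = cash_flows[-1] * (1 + terminal_growth)
    terminal_value = terminal_cf / (discount_rate - terminal_growth)
    pv_terminal = terminal_value / (1 + discount_rate) ** len(cash_flows)
    return pv_explicit + pv_terminal

if __name__ == "__main__":
    r = capm_cost_of_equity(risk_free=0.04, beta=1.1, market_premium=0.05)  # 9.5%
    value = dcf_value(cash_flows=[100, 110, 120, 130, 140],  # hypothetical free cash flows
                      discount_rate=r,
                      terminal_growth=0.02)
    print(f"Cost of equity: {r:.1%}, estimated value: {value:,.0f}")
```

The point of the exercise in class is not this arithmetic, which the models handle, but the defensibility of each assumption fed into it.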

Teaching students a BS (bullshit) filter

Perhaps the biggest purpose of education is to enable students to detect bullshit, which is pervasive everywhere we look: in consultants’ presentations, sales pitches, lobbyists’ positions on policy questions, and, of course, the hallucinations and specious arguments that ChatGPT occasionally puts out. In early and advanced electives, we could devote more time to picking apart sales pitches or one-sided lobbyist positions.

To give you a concrete example, I asked ChatGPT4 to construct a bull case for Tesla, given its valuation of around $1.4 trillion at the time of writing this piece. Then, I asked ChatGPT4 to present the case a short seller would make to justify a bearish view on the stock. To my surprise, the answers to these two radically different positions were more or less the same, just presented with a positive or a negative spin. For instance, consider the following arguments that ChatGPT4 gave me for the bullish case:

(i) Dominance in the EV market

(ii) Superior margins using scale and technology

(iii) Expansion beyond automotive

(iv) Full self-driving and software monetization

(v) Global manufacturing expansion

(vi) Energy transition leadership

(vii) Brand and ecosystem loyalty

(viii) AI and data leadership

(ix) Optionality and innovation

(x) Premium valuation for a growth leader

ChatGPT4 came up with the following arguments for the bearish case:

(i) Sky high multiples (con case for x above)

(ii) Over-reliance on future potential (con case for i-ix)

(iii) Increasing competition leading to price wars, legacy automakers catching up and regulatory equalization (con case for i, v, vii)

(iv) Challenges in full self-drive (con case for iv)

(v) Overestimating the energy business (con case for vi)

(vi) Margin compression risks (con case for i)

(vii) Macroeconomic and industry risks (con case for i, iii and x)

(viii) Lack of diversification (con case for iii)

(ix) Execution and scalability risks (con case for i-x)

(x) Market saturation and growth ceiling (con case for i)

(xi) Speculative valuation (con case for x).

While the list of cons is impressive, ChatGPT4 seems to have cleverly rewritten the same set of facts with a different twist. It is ultimately up to the human trader or the student to take a view. This contrast could be the basis for an interesting class discussion.

Another interesting application is to ask why chronic underperformers do not attract shareholder activists. I put out an annual doom list of S&P 1500 firms whose returns are lower than risk-free Treasuries over the last 10 years. I asked ChatGPT to generate a deck that an activist would use to try to dislodge the board of directors of one of the firms on the doom list: Cheesecake Factory. The deck that ChatGPT produced was of astonishingly good quality. The structure of the deck was predictable:

· Slide 1: Introduction

· Slide 2: Underperformance overview

· Slide 3: Governance failures

· Slide 4: Operational inefficiencies

· Slide 5: Shareholder dissent

· Slide 6: Proposed board changes

· Slide 7: Strategic value creation

· Slide 8: Roadmap for transition

· Slide 9: Closing statement

Somewhat disconcertingly, when I asked ChatGPT to produce an activist deck to dislodge the board at Amazon.com, it came back with the same deck and the same structure. The facts were not wrong, but ChatGPT would note that Amazon did not outperform Microsoft. That is true, but Amazon’s stock is up 45% in 2024. This is where the analyst needs to step in and use judgement and context to decide whether ChatGPT’s statements make sense. It’s a great machine, but we still need humans to temper the machine’s utterances.
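As an aside, the screen behind the doom list mentioned above is simple to express in code. The sketch below uses hypothetical tickers, returns and a flat Treasury yield purely for illustration; the actual list is built from realized S&P 1500 and Treasury returns over the trailing 10 years.

```python
# Minimal sketch of the doom-list screen: flag firms whose 10-year cumulative
# return trails the cumulative return on risk-free Treasuries.
# Tickers, returns and the flat Treasury yield below are hypothetical.

def cumulative_return(annual_returns: list[float]) -> float:
    """Compound a list of annual returns into one cumulative return."""
    total = 1.0
    for r in annual_returns:
        total *= (1 + r)
    return total - 1

def doom_list(firm_returns: dict[str, list[float]], treasury_returns: list[float]) -> list[str]:
    """Return the firms whose cumulative return falls below the Treasury benchmark."""
    benchmark = cumulative_return(treasury_returns)
    return [ticker for ticker, returns in firm_returns.items()
            if cumulative_return(returns) < benchmark]

if __name__ == "__main__":
    treasuries = [0.03] * 10  # assumed flat 3% annual risk-free return
    firms = {
        "SLOWCO": [0.01, -0.02, 0.02, 0.00, 0.01, -0.01, 0.02, 0.01, 0.00, 0.01],
        "GROWCO": [0.12, 0.08, 0.15, 0.05, 0.10, 0.07, 0.09, 0.11, 0.06, 0.08],
    }
    print(doom_list(firms, treasuries))  # -> ['SLOWCO']
```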

Along similar lines, a faculty member I asked for comments observes, “one thing I ask my students is to put together a bunch of unrelated facts into a story. I show them numerous different correlations and time series and then ask them to use these correlations to explain the mechanism for why X has gone up over time and whether it will stay high or return to its long-run mean. This requires a level of logical reasoning about how things fit together which ChatGPT doesn’t have yet, and BS detection as well (since even if ChatGPT did have it, it could say things like ‘opening a McDonald’s in an emerging market causes its economic growth to increase’).”

In essence, we may need to use class time to probe, question, and think critically as opposed to enumerating facts and rules. The crux of the issue here, as a thoughtful colleague points out, is “how to make students care about originality in their answers? That is the big question because AI will give standard answers (right or wrong).”

Leverage AI – real-time forecasting (advanced elective)

ChatGPT is generally good at providing answers to well-known HBS cases. It is, however, not as good at addressing forecasting problems. What if we re-frame the teaching of accounting as a forecasting exercise, as we do in my elective titled FAIME? In FAIME, which I took over from Emeritus Professor Trevor Harris, we teach students concepts such as revenues, labor, capacity, materials, capital structure, taxation, foreign currency and valuation, along with the accounting for each of these value drivers, and then quickly move the conversation to the next level by asking them to forecast the sales, labor costs or capex spending for the following year. Forecasting forces the student to think like the CEO of the firm and hence engage with several practical questions that aid pedagogy, such as:

· How is the company going to increase revenue – via a greater number of products sold or via higher product prices?

· Why are we confident that the company can indeed raise prices or sell more product? Who are the firm’s competitors, and why would they not respond?

· How big is the company’s addressable market? Where will growth come from? Newer versions of existing products? New products? Acquisitions?

· If we are going to increase revenue by acquiring other firms, who are these prospective firms?

ChatGPT gave me minimal to no answers to these questions when I posed them for Home Depot as a case study. It regurgitates management guidance on revenue without articulating the path to hitting that number. Is it going to be via an acquisition? By raising product prices? By launching new products? By launching newer versions of existing products? By tapping new geographical segments?

Encouraging a discussion around these alternatives will sensitize the students to a set of tradeoffs that the CEO or a senior manager has to routinely confront in her day-to-day work.
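To make the forecasting exercise concrete, here is a minimal sketch of the price-times-volume decomposition a student might start from. Every number and growth rate in it is a hypothetical assumption that the class would have to defend or challenge; the point is the decomposition, not the output.

```python
# Minimal sketch of a revenue forecast decomposed into price and volume,
# the decomposition the class discussion revolves around.
# All base figures and growth assumptions are hypothetical.

def forecast_revenue(units_sold: float, avg_price: float,
                     volume_growth: float, price_growth: float) -> float:
    """Next year's revenue = (units grown by volume_growth) x (price grown by price_growth)."""
    return units_sold * (1 + volume_growth) * avg_price * (1 + price_growth)

if __name__ == "__main__":
    base_units = 1_000_000  # hypothetical units sold this year
    base_price = 50.0       # hypothetical average selling price
    # Two scenarios the class must justify: growth from volume vs. from pricing power.
    volume_led = forecast_revenue(base_units, base_price, volume_growth=0.06, price_growth=0.01)
    price_led = forecast_revenue(base_units, base_price, volume_growth=0.01, price_growth=0.06)
    print(f"Volume-led forecast: {volume_led:,.0f}")
    print(f"Price-led forecast:  {price_led:,.0f}")
```

Defending a volume-led versus a price-led path is exactly the tradeoff discussion described above.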

Making assignments project-based, not exam-based

Another excellent suggestion from a colleague is that “courses should become more project based than exam based. More emphasis on doing than just learning.” This advice fits well with my experience of asking students to apply the fundamentals-based way of thinking to a company of their choice, as detailed in the next point. I don’t have a final exam in my class.

Making students comfortable with what they don’t know

One of the most valuable applications of the forecasting idea is to expose students to what they don’t know. AI is terrible at this as of now: ChatGPT gives students ready answers to their questions, which in turn engenders overconfidence and creates blind spots where students don’t even know what they don’t know.

In my course evaluations, at least one student will invariably say, “the class lectures did not prepare us for completing the assignments.” Before interpreting that comment, let me give you some context. In the class, I take the classic economic model of a firm (materials, labor, capacity, managerial talent) and apply it to a typical income statement. Based on what I know about a particular firm, say Home Depot, do I fully understand how it earns revenues? Or the costs it incurs? Or the maintenance capex it needs to retain its product market share?

The student is expected to apply these ideas to a company of her choosing. So, the classroom guidance, by definition, has to be adapted to a different setting. Unless the student picks a close competitor to Home Depot (say Lowe’s), my way of solving the forecasting problem does not mechanically translate to that of, say, a semiconductor manufacturer such as TSMC (Taiwan Semiconductor Manufacturing Company). The student will have to understand the key value drivers and the business environment for TSMC. That is, what do we expect TSMC’s revenues to be next year? Well, that depends on the demand for GPUs (graphics processing units). What will GPU demand be next year? No one knows for sure, including AI. Exposing students to ambiguity and making them aware of what they don’t know is the real education, and it is reasonably AI-proof.

But this will require educating our MBA students about the purpose behind the degree. A colleague rightfully points out, “many MBA students are still in undergrad mode, where they expect instructors to teach them rather than expand their knowledge through critical thinking. The question of ‘how can we do cases on topics that we haven’t yet learnt’ is getting louder every day.”

We may have to reconsider grading and penalties as well. Have we defaulted to a model where we are too nice to our students without holding them accountable? Can we de-emphasize student popularity ratings when evaluating instructors? Or at least value rigor in the classroom as much as student popularity?

Increase the practicum component in business education

Another colleague thoughtfully points out, “the issue is: do we know the right questions to ask? One of my students has an assignment in an audit class where they have to address a client issue using AI without sharing client data. It turns out that asking the right question is hard.” Conducting an effective audit requires knowledge of accounting rules, audit standards and, most important, a detailed understanding of the business model and the risks and opportunities in the business. This goes back to the importance of appreciating context. Context, judgement and problem solving are perhaps best learned by experience. Should we increase the practicum component of learning in business schools? Teach basic principles and then have students spend six months interning with a business before bringing them back to teach them applications of the basic ideas they learned before the internship?

The faculty hiring model needs to change

As suggested by one colleague, “a moat for business schools might be to recruit teachers who are really good at case discussion and applications.” One response in the short run is to give students ChatGPT’s answers to a case and ask them to improve on those answers. This necessitates hiring faculty who are skilled at leading discussions beyond the alternatives suggested by ChatGPT. The instructor could bring in applications of concepts to related but different real-world situations.

AI may be good at suggesting alternatives as well, but it is nowhere close to replicating the constraints that managers face in implementing solutions. These constraints are myriad: staffing problems, supply chain issues, difficulties in getting regulatory approval, uncooperative employees, financial budget constraints, a difficult boss, severe time pressure, the uncertain behavior of competitors, and the volatility the macroeconomic environment adds to the feasibility of a proposed solution.

If managerial constraints, problem solving and implementation are the source of faculty value-add, can we afford to keep hiring newly minted PhDs, many of whom don’t have even one day of work experience in the real world? Increasingly, the talent pool of PhD applicants comes from pre-docs: bright young students who want to become professors and are recruited by many top universities to help assistant professors run computer programs for their research projects. Some help collect data for assistant professors. I suspect these pre-docs would be better off working in companies, at a regulator or at a think tank for two years instead of assisting in the production of research.

Emphasizing the need for work experience in both PhD candidates and new faculty members will hopefully lead to research that is responsive to stakeholders who fund our enterprise.

Will this lead deans to nudge faculty towards applied research?

One of the positive consequences of hiring faculty with some work experience is that the research questions they ask will hopefully evolve to address problems that practitioners and regulators are grappling with. Pre-docs and new faculty members with no meaningful real-world experience tend to, harshly put, chase answers to questions that no one is asking. Increasingly irrelevant research only adds to deans’ fundraising difficulties. Unresponsive research also attracts the ire of legislators at state-funded schools and serves no one, except as a scorecard for deciding who gets tenure.

A few things that ChatGPT can help with

In closing, ChatGPT is very useful for helping instructors improve slides, design in-class activities, refine the class plan, summarize student feedback, get feedback on their own teaching by uploading class recordings, and even grade assignments against a rubric.

A colleague notes, “ChatGPT and other similar apps can be excellent learning buddies. Many companies are creating learning co-pilots (Coursera: Coach; Khan Academy: Khanmigo; many apps for sales training, etc.)”

I found Mollick (2023) and Levy and Perez Albertos (2024) to be useful resources in this endeavor. Mollick is a good introduction to AI from a user perspective. The Levy and Perez Albertos book is more of a step-by-step guide to helping instructors leverage ChatGPT.

Questions related to the value we add as instructors, beyond AI, have just begun. A colleague thoughtfully points out, “I believe we are still at the beginning of the AI impact which will require a deeper reevaluation of teaching that should have happened a long time ago.”

Another colleague frames the upcoming challenges well: “How best do we serve students in a reasonable way at reasonable cost? What do we add to AI? Should the strategy differ between top universities that focus on exclusivity and universities, such as Arizona State University, that focus on expanding access to the college experience? How do universities prepare our students for this new world?”

AI is here to stay, and we need to keep experimenting and sharing our experiences to shape the future of business school education.
