Improving student writing abilities with AI support


What are Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL)?

So, where can AI fit into education? What are the key components and options that AI, ML and DL offer, and what do educators need to know before embracing the possibilities? And, what do all these acronyms mean?

AI is a broad term, and one that has become a marketing catchphrase used by many vendors supplying programs that automatically do something. Does a formative assessment program use AI when it automatically assigns students another 10 ‘tailored’ questions? No. When a computer reads a text and then instantly creates a full range of multiple-choice questions for formative assessment, is that AI? Yes! 

AI, ML and DL capabilities offer teachers the support and innovation to reach the transformative levels of learning redefinition defined in the SAMR framework. They offer new ways to move past applications that deliver simple substitution and augmentation processes. The image below depicts the family of technologies that comprise AI. Emerging from the 1950s, AI is not new. However, what can be achieved with AI is new, and it opens many transformative pathways, not just for computers but for humans as well.

[Image: the family of technologies that comprise AI]

Source: https://blogs.oracle.com/bigdata/difference-ai-machine-learning-deep-learning

 

AI is built on a series of mathematically based components that combine large numbers of data sets with fast, iterative processing and intelligent algorithms. This allows AI software to learn automatically from patterns or features in the data sets. Neural network models are auto-constructed to make repeatable and accurate decisions on new data passed through the model. The whole premise is to build an electronic neural capability to mimic human intelligence accurately in milliseconds.

ML processes data using algorithmic techniques. These routines are programmatically designed by humans as repeatable calculations. Stanford University says that ML “is the science of getting computers to act without being explicitly programmed”. Many ML routines are reliable, widely available and are being developed all the time, especially in the field of NLP (Natural Language Processing). 

NLP is a whole world of mathematics that is getting very clever at vectorising and classifying text. When each word becomes a unique number, computers go crazy with the computational possibilities and the analysis that can be done. Basically, computers understand over 150 part-of-speech tags, allowing them to track language usage and dependencies just as spell-checking and grammar systems do. Tracking language context remains the big race, and this is where more general intelligence needs to be built in. For now, NLP forms the core of most ML-based text analysis and calculations.
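As a minimal illustration of how "each word becomes a unique number", the sketch below tags a sentence with part-of-speech labels and turns it into a vector of word counts. The choice of NLTK and scikit-learn is an assumption made purely for illustration; this article does not prescribe any particular toolkit.

```python
# A minimal sketch of part-of-speech tagging and text vectorisation.
# Assumes NLTK and scikit-learn are installed; neither is named in this article.
import nltk
from sklearn.feature_extraction.text import CountVectorizer

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

text = "The student wrote a persuasive essay about renewable energy."

# Part-of-speech tagging: every token gets a grammatical label.
tokens = nltk.word_tokenize(text)
print(nltk.pos_tag(tokens))
# e.g. [('The', 'DT'), ('student', 'NN'), ('wrote', 'VBD'), ...]

# Vectorisation: each word is assigned a unique integer id, so the text becomes numbers.
vectoriser = CountVectorizer()
counts = vectoriser.fit_transform([text])
print(vectoriser.vocabulary_)  # word -> unique id
print(counts.toarray())        # word counts for this text
```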

ML by itself can produce amazing insights and analysis and is able to infer decisions. There are many examples of this capability in play right now. Calculating the lexical density or diversity of a piece of text is a good example of an ML/NLP algorithm: text goes in, density and diversity ratings come out. These ratings can then be used as part of a decision process or to trigger a response or alert. Some of the oldest, and still used, auto-essay-grading systems rely on combinations of supervised, ML-based calculations to grade texts. These methods don’t engage AI decision intelligence; rather, they combine a range of calculations via statistical techniques, best fit and experience to produce a result that closely mirrors a human grader.
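To make "text goes in, density and diversity ratings come out" concrete, here is a minimal sketch of both calculations: lexical diversity as unique words over total words, and lexical density as the share of content words (nouns, verbs, adjectives, adverbs). The formulas and the use of NLTK are illustrative assumptions, not the exact routines any grading product uses.

```python
# A minimal sketch of lexical diversity and lexical density, assuming NLTK is installed.
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

# Penn Treebank tag prefixes for content words: nouns, verbs, adjectives, adverbs.
CONTENT_TAG_PREFIXES = ("NN", "VB", "JJ", "RB")

def lexical_diversity(text: str) -> float:
    """Unique words divided by total words (type-token ratio)."""
    words = [w.lower() for w in nltk.word_tokenize(text) if w.isalpha()]
    return len(set(words)) / len(words) if words else 0.0

def lexical_density(text: str) -> float:
    """Content words divided by total words."""
    words = [w for w in nltk.word_tokenize(text) if w.isalpha()]
    content = [w for w, tag in nltk.pos_tag(words) if tag.startswith(CONTENT_TAG_PREFIXES)]
    return len(content) / len(words) if words else 0.0

essay = "The tired students wrote surprisingly thoughtful essays before lunch."
print(f"diversity: {lexical_diversity(essay):.2f}  density: {lexical_density(essay):.2f}")
```

Either number could then feed a decision rule, for example flagging drafts whose density falls below a chosen threshold.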

The advancement of ML underpins the excitement and the possibilities that Deep Learning (DL) opens into the future. DL builds ‘neural networks’, modelled loosely on the neural capabilities of the human brain, using high volumes of accurate training data to deliver high-accuracy results across image classification, speech recognition and text processing such as translation, text creation and grading. The true advance DL neural networks offer is the ability to self-learn: the network builds its own model, extracting and transforming features from supplied data sets without any hand-written code or rules. A network that builds its own intelligence as it aims for a targeted outcome offers truly exciting applications. However, there remains one fundamental weakness in all of this excitement: the source data needed to train the deep learning models and the target outcomes the models are trying to replicate. Rarely is there a problem with the maths.
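As a toy sketch of a network learning its own text features from labelled examples rather than hand-coded rules, the snippet below fits a tiny Keras model to a handful of essays paired with teacher scores. Keras, the layer choices and the made-up examples are all assumptions for illustration; a real essay-scoring model would need far more data and far more care.

```python
# A toy sketch: a neural network learns its own text features from essay/score pairs.
# Assumes TensorFlow/Keras is installed; the data below is invented for illustration.
import tensorflow as tf
from tensorflow.keras import layers

essays = tf.constant([
    "The essay argues clearly and supports each claim with evidence.",
    "Good ideas but the structure wanders and the conclusion is missing.",
    "Short response with little detail or supporting argument.",
    "A well organised, persuasive piece with varied vocabulary.",
])
teacher_scores = tf.constant([5.0, 3.0, 1.0, 5.0])  # the human decision results

# The text is turned into integer sequences, then the network learns its own
# representation (embedding) of the words while trying to predict the score.
vectorise = layers.TextVectorization(max_tokens=1000, output_sequence_length=30)
vectorise.adapt(essays)

model = tf.keras.Sequential([
    vectorise,
    layers.Embedding(input_dim=1000, output_dim=16),
    layers.GlobalAveragePooling1D(),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),  # predicted essay score
])
model.compile(optimizer="adam", loss="mse")
model.fit(essays, teacher_scores, epochs=50, verbose=0)

print(model.predict(tf.constant(["A clear, well supported argument."]), verbose=0))
```

The point is not the tiny model but the pattern: no part-of-speech rules or density formulas are written by hand; the network infers whatever features help it reproduce the teacher's scores.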

Data, data everywhere, and not a drop to drink!

Many companies and schools are awash with data, most of it naturally disjointed and certainly not in a state fit for DL model training. In fact, many schools remain fixated on warehousing and reporting on the past (a 1980s concept), rather than using well-structured data for ML-based supervised and unsupervised learning, where machines find something specific, or anything of interest, in data sets that until now have remained unseen. The whole premise and opportunity of AI is to use data and computing power to build models that think forwards.

When working with AI and DL, the baseline for success is large volumes of continuous and consistent data drawn from complete data sets. There are two core data requirements.

Firstly, there must be large volumes of representative base data; secondly, there must be human decision results for that data. As DL is attempting to mimic accurate human decision-making, the source data and the result data (be it an essay score or a Y/N loan approval, for example) need to be aligned for analysis. The DL challenge is to use the two data sets to build a neural model that replicates the logic applied by humans, reproducing decisions as accurately and consistently as possible for any new data.
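As a minimal sketch of what "aligned for analysis" can look like in practice, the snippet below joins a table of essays (the base data) with a table of teacher scores (the human decision results) and keeps only the rows where both sides exist. pandas, the column names and the merge key are assumptions for illustration.

```python
# A minimal sketch of aligning base data with human decision results, assuming pandas.
import pandas as pd

# Base data: the essays themselves (the representative source data).
essays = pd.DataFrame({
    "essay_id": [101, 102, 103, 104],
    "text": [
        "A persuasive piece with strong evidence.",
        "Ideas are present but loosely organised.",
        "Very short response, little development.",
        "Clear structure and varied vocabulary.",
    ],
})

# Human decision results: teacher scores against an agreed rubric.
scores = pd.DataFrame({
    "essay_id": [101, 102, 104],   # note: essay 103 was never marked
    "teacher_score": [5, 3, 5],
})

# Align the two data sets on the shared key; unmarked essays drop out,
# because an example without a human decision cannot be used for supervised training.
training_set = essays.merge(scores, on="essay_id", how="inner")
print(training_set)
```

Only the aligned rows can be used for training; unmarked essays, mismatched rubrics or missing scores silently shrink the usable data set.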

Logically, the test after building an essay-grading capability is: “Did the DL model score the essay against the criteria and standards contained in the rubric, close to the score a human teacher would give?” This raises the next question: “Which rubric, and which teacher?”
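One way to answer the first question numerically is to compare the model's scores with a held-back set of teacher scores, for example using mean absolute error and quadratic weighted kappa (a common agreement measure in automated essay scoring). The scikit-learn calls and the invented numbers below are assumptions for illustration.

```python
# A minimal sketch of checking how closely model scores track a human teacher's scores.
# Assumes scikit-learn; the score lists are invented for illustration.
from sklearn.metrics import cohen_kappa_score, mean_absolute_error

teacher = [4, 3, 5, 2, 4, 3, 1, 5]   # human scores on a 1-5 rubric
model   = [4, 3, 4, 2, 5, 3, 2, 5]   # the DL model's scores on the same essays

# Average distance between the two sets of scores (lower is better).
print("MAE:", mean_absolute_error(teacher, model))

# Agreement beyond chance, penalising large disagreements more heavily (closer to 1 is better).
print("QWK:", cohen_kappa_score(teacher, model, weights="quadratic"))
```

Even a strong number only says the model agrees with those teachers, on those essays, against that rubric, which is exactly the “which rubric and teacher?” problem.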

There are only two ways to get large, reliable data sets on which neural models can be trained. The first is to build bias-free data sources in-house. The second is to work with trusted models that have already been trained reliably. Either way, it is a big effort to build and maintain data sets that are bias-free, representative of an agreed truth and consistent enough to create accurate DL models. It is even more important to keep the data current.

Remember Edgar Allan Poe’s advice? AI and DL actually start from a very disadvantaged position: they can only see what they ‘hear’ from human data sources, and one should never believe any of what one hears! Whilst AI, ML and DL attempt to mimic human decision-making, humans remain responsible for the accuracy and supply of the training data sources.

Banks have been known to reassess past loan approvals against current compliance and policy frameworks, recalibrating past loan decisions into forward-looking decision support. The same logic could apply to essay grades: past essay texts re-marked against more applicable rubrics, with less human variation in the scores. Beware of using too much historical data, which can carry the biased decisions of the past into the DL models of the future. The net effect could be a very elegant yet outdated neural network.