Metamodelling with ChatGPT - Episode 2

To help ChatGPT produce syntactically correct metamodels, I thought I'd give it an example metamodel in Emfatic. So, in this experiment, along with the description of the metamodel, I also provided the psl example metamodel from the Epsilon Playground:

Please write the metamodel of a domain-specific language 
for modelling state machines, in the Emfatic notation. 
Only show code in your responses. An example metamodel 
in the Emfatic notation follows.

@namespace(uri="psl", prefix="")
package psl;

class Project {
    attr String title;
    attr String description;
    val Task[*] tasks;
    val Person[*] people;
}

class Task {
    attr String title;
    attr int start;
    attr int duration;
    val Effort[*] effort;
}

class Person {
    attr String name;
}

class Effort {
    ref Person person;
    attr int percentage = 100;
}

Much better! With this example, ChatGPT can produce a correct Emfatic metamodel right off the bat!

@namespace(uri="state_machine", prefix="")
package state_machine;

class StateMachine {
    attr String name;
    val State[] states;
    val Transition[] transitions;
}

class State {
    attr String name;
    val Transition[*] transitions;
}

class Transition {
    ref State source;
    ref State target;
    attr String event;
    attr String action;
}

Hierarchical State Machines

The next step is to ask ChatGPT to adapt the metamodel so that it supports hierarchical state machines.

Add support for hierarchical states.

This produces the expected result, and ChatGPT even adds a comment to highlight the added containment (val) reference.

@namespace(uri="state_machine", prefix="")
package state_machine;

class StateMachine {
    attr String name;
    val State[] states;
    val Transition[] transitions;
}

class State {
    attr String name;
    val Transition[] transitions;
    val State[] subStates; // added support for sub-states
}

class Transition {
    ref State source;
    ref State target;
    attr String event;
    attr String action;
}

But does ChatGPT really understand the difference between containment (val) and non-containment (ref) references? It turns out that it doesn't.

In the state machine metamodel, why is substates marked as val while source is marked as ref?

This prompt produces the following (incorrect) response.

The val keyword in the state machine metamodel indicates 
that a State can have multiple subStates, which are defined 
as a list or array of State objects. In contrast, the ref 
keyword in the Transition class indicates that a Transition 
object has a single source and target state, which are 
references to State objects.

In general, the val keyword is used to indicate that a class 
has a many-valued attribute or reference, while the ref 
keyword is used to indicate that a class has a single-valued 
attribute or reference.
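
For the record, the response above conflates containment with multiplicity. In Emfatic, val declares a containment reference (the referenced objects are owned by, and serialised inside, the referencing object), while ref declares a non-containment cross-reference to objects owned elsewhere; both kinds of reference can be single- or many-valued. Annotated against the state machine metamodel, the intended meaning is:

class State {
    attr String name;
    // val = containment: each sub-state is owned by exactly one parent State
    val State[] subStates;
}

class Transition {
    // ref = non-containment: source and target merely point at states that
    // are contained elsewhere (in the StateMachine or in a parent State)
    ref State source;
    ref State target;
}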

Model Validation

If ChatGPT does such a good job at figuring out the Emfatic notation from an example, perhaps it can generate validation constraints too? To try this, I provided it with the following prompt, which includes some example Epsilon Validation Language constraints, also from the Epsilon Playground.

Write a constraint in the Epsilon Validation Language, 
stating that a state cannot be its own parent. 
Some constraints in the Epsilon Validation Language follow.

// For every task
context Task {

    // Check that the start month is > 0
    constraint ValidStart {
        check: self.start > 0
        message: "Start month must be positive"
    }

    // Check that the duration is > 0
    constraint ValidDuration {
        check: self.duration > 0
        message: "Duration must be positive"
    }

}

// For every person
context Person {

    // Check that the person is involved
    // in at least one task
    critique IsInvolvedInATask {
        check: Effort.all.
            exists(e|e.person = self)

        message: self.name + 
            " is not involved in the project"
    }

}

Much to my surprise, ChatGPT produced a syntactically and semantically (!) correct EVL constraint.

// For every state
context State {

  // Check that the state is not its own parent
  constraint ValidParent {
    check: self.subStates->forAll(s|s <> self)
    message: "A state cannot be its own parent"
  }

}
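
As an aside, the same rule can be expressed without the first-order forAll operation, using EOL's includes() collection operation instead. An equivalent sketch (mine, not ChatGPT's) would be:

// For every state
context State {

  // Equivalent check: the sub-states must not include the state itself
  constraint ValidParent {
    check: not self.subStates.includes(self)
    message: "A state cannot be its own parent"
  }

}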

Model-to-Text Transformation

Can ChatGPT write model-to-text transformations too? As above, I prompted it to write a small model-to-text transformation in the Epsilon Generation Language, and provided an existing transformation from the Epsilon Playground as an example.

Write a model-to-text transformation in the Epsilon 
Generation Language that produces a HTML table 
with one row for each transition. 
The row should contain the event and the action 
of the transition. An example model-to-text 
transformation in the Epsilon Generation Language follows.

<html>
<head>
[*Bootstrap CSS*]
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap@4.6.0/dist/css/bootstrap.min.css">
</head>

<table class="table table-striped">
[*Generate a header row with the names of all persons*]
<tr>
    <th></th>
    [%for (p in Person.all){%]
    <th>[%=p.name%]</th>
    [%}%]
</tr>
[*Generate one row per task with the participants' effort*]
[%for (t in Task.all){%]
<tr>
    <th>[%=t.title%]</th>
    [%for (p in Person.all){%]
    <td>[%=t.effort.selectOne(e|e.person = p)?.percentage ?: 0%]%</td>
    [%}%]
</tr>
[%}%]
</table>
</html>

ChatGPT can write model-to-text transformations too! The transformation it produced, shown below, is almost correct. Its only issue is that it also generates an unwanted header row containing the names of all the states in the model.

<html>
<head>
[*Bootstrap CSS*]
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap@4.6.0/dist/css/bootstrap.min.css">
</head>
<body>
<table class="table table-striped">
[*Generate a header row with the names of all states*]
<tr>
    <th></th>
    [%for (s in State.all){%]
    <th>[%=s.name%]</th>
    [%}%]
</tr>
[*Generate one row per transition with the event and action*]
[%for (t in Transition.all){%]
<tr>
    <th>[%=t.event%]</th>
    <td>[%=t.action%]</td>
</tr>
[%}%]
</table>
</body>
</html>

Can ChatGPT remove this row with the following prompt?

There is no need for a header row with the names of all states.

Of course it can, and it produces the corrected transformation below.

<html>
<head>
[*Bootstrap CSS*]
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap@4.6.0/dist/css/bootstrap.min.css">
</head>
<body>
<table class="table table-striped">
[*Generate one row per transition with the event and action*]
[%for (t in Transition.all){%]
<tr>
    <th>[%=t.event%]</th>
    <td>[%=t.action%]</td>
</tr>
[%}%]
</table>
</body>
</html>
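
To make the expected output concrete: for a hypothetical model with two transitions, one with event "press" and action "turn on" and another with event "press" and action "turn off", the corrected template would generate a table along these lines (the [* ... *] comments produce no output):

<table class="table table-striped">
<tr>
    <th>press</th>
    <td>turn on</td>
</tr>
<tr>
    <th>press</th>
    <td>turn off</td>
</tr>
</table>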

Last update: December 4, 2022