I wanted to try using Chat GPT to write code for me based on a simple starting prompt. I had already successfully done that for a very simple composition; now I wanted to get Chat GPT to do as much of the work as possible. The idea was to have one instance working as a coder – we’ll call it Coder – and another reviewing its output – we’ll call it Assistant.
The purple background text is my prompts, blue background text is Chat GPT’s answers.
Setup
I started by feeding Coder GPT a prompt describing our framework and platform, along with example code and a type definition for a VisComp ( visual composition, our main building block, similar to a Unity game object ).
I then gave it the following prompt:
Write the typedef and the code for a VisComp that:
– creates a variable amount of objects using its ‘materials’ and ‘geometries’ inputs
– positions the objects in space on the surface of a sphere of variable radius
– animates each object along its own curve, which is procedurally generated and loops back onto itself
The code it came up with seemed OK, but that wasn’t for me to judge.
Here is the type definition and code for the VisComp as per your requirements:
Example type definition:
```typescript
type SphereSurfaceVisComp = {
  materials: Material[];
  geometries: Geometry[];
  count: int;
  radius: float;

  /**
   * @meta {"min":0,"max":1}
   */
  curveTightness: float;
};
```

Example VisComp code:

```javascript
class SphereSurfaceVisComp extends VisComp {
  start() {
    super.start();
    this.generateObjects();
  }

  generateObjects() {
    if (this.objects) {
      for (let obj of this.objects) {
        obj.material.dispose();
        obj.geometry.dispose();
        this.cont3D.remove(obj);
      }
    }
    this.objects = [];

    const materials = this.inputs.get('materials');
    const geometries = this.inputs.get('geometries');
    const count = this.inputs.getObject('count');
    const radius = this.inputs.get('radius');
    const curveTightness = this.inputs.get('curveTightness');

    for (let i = 0; i < count; i++) {
      const mat = materials[i % materials.length];
      const geom = geometries[i % geometries.length];
      const obj = new THREE.Mesh(geom, mat);

      const phi = Math.random() * Math.PI * 2;
      const theta = Math.acos(2 * Math.random() - 1);
      obj.position.set(
        radius * Math.sin(theta) * Math.cos(phi),
        radius * Math.sin(theta) * Math.sin(phi),
        radius * Math.cos(theta)
      );

      obj.curveTightness = curveTightness;
      obj.curveOffset = Math.random() * Math.PI * 2;
      obj.curveSpeed = Math.random() * 0.01 + 0.005;

      this.cont3D.add(obj);
      this.objects.push(obj);
    }
  }

  inputChanged(event) {
    switch (event.type) {
      case 'materials':
      case 'geometries':
      case 'radius':
      case 'curveTightness':
        this.generateObjects();
    }
  }

  dispose() {
    if (this.objects) {
      for (let obj of this.objects) {
        obj.material.dispose();
        obj.geometry.dispose();
        this.cont3D.remove(obj);
      }
    }
    super.dispose();
  }

  update(dt) {
    super.update(dt);
    for (let obj of this.objects) {
      const t = (Date.now() * obj.curveSpeed + obj.curveOffset) % (Math.PI * 2);
      const newPosition = new THREE.Vector3(
        obj.position.x + Math.cos(t) * obj.curveTightness,
        obj.position.y + Math.sin(t) * obj.curveTightness,
        obj.position.z + Math.cos(t) * obj.curveTightness
      );
      newPosition.normalize().multiplyScalar(this.inputs.get('radius'));
      obj.position.copy(newPosition);
    }
  }
}
```

This VisComp creates a specified number of objects using the provided materials and geometries, positions them on the surface of a sphere with a variable radius, and animates each object along its own looping, procedurally generated curve.
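As an aside, the placement code uses the standard trick for uniform sampling on a sphere: picking theta as acos(2u − 1) instead of uniformly, which would otherwise bunch objects at the poles. Here is a standalone sketch of just that part (plain JavaScript, no THREE.js; the function name is my own, not part of the framework):

```javascript
// Sample `count` points uniformly on a sphere of the given radius.
// theta = acos(2u - 1) compensates for the smaller circumference near
// the poles; a uniformly random theta would cluster points there.
function samplePointsOnSphere(count, radius) {
  const points = [];
  for (let i = 0; i < count; i++) {
    const phi = Math.random() * Math.PI * 2;        // longitude
    const theta = Math.acos(2 * Math.random() - 1); // pole-corrected latitude
    points.push({
      x: radius * Math.sin(theta) * Math.cos(phi),
      y: radius * Math.sin(theta) * Math.sin(phi),
      z: radius * Math.cos(theta),
    });
  }
  return points;
}
```

Every point produced this way lands exactly on the sphere’s surface, which is why the generated code can rely on a plain normalize-and-scale later on.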
I started another Chat GPT chat for our Assistant:
You are Assistant GPT. You will be working in tandem with another instance of Chat GPT, which will be writing the code from my initial prompt. You will review its output and suggest how to improve it, and also provide it with any errors that happen when running the code.
Here is the starting prompt it has received:
[ here I pasted the prompt I gave Coder ]
You will now review its code and suggest improvements.
Here is the typedef and code it has generated:
[ here i pasted the typedef and code ]
I got a decent answer, with a few suggestions for improvements. Assistant also rewrote the code for me; however, the code was truncated.
The generated code appears to be a VisComp class that generates objects on the surface of a sphere with a specified radius, materials, and geometries. The objects move along curved paths based on the curveTightness parameter. Overall, the code is well-structured and mostly correct. However, there are a few improvements and fixes that can be made:
- In the generateObjects() method, you should use this.inputs.getObject() instead of this.inputs.get() for the count parameter. This is because getObject() is used for accessing parameters according to the initial prompt.
- In the inputChanged() method, you should regenerate the objects only if there is a change in the materials, geometries, or count. The radius and curveTightness changes should be handled separately, as they don’t require the complete regeneration of objects.

Here is the updated code:
```javascript
class SphereSurfaceVisComp extends VisComp {
  start() {
    super.start();
    this.generateObjects();
  }

  generateObjects() {
    if (this.objects) {
      for (let obj of this.objects) {
        obj.material.dispose();
        obj.geometry.dispose();
        this.cont3D.remove(obj);
      }
    }
    this.objects = [];

    const materials = this.inputs.get('materials');
    const geometries = this.inputs.get('geometries');
    const count = this.inputs.getObject('count');
    const radius = this.inputs.get('radius');
    const curveTightness = this.inputs.get('curveTightness');

    for (let i = 0; i < count; i++) {
      const mat = materials[i % materials.length];
      const geom = geometries[i % geometries.length];
      const obj = new THREE.Mesh(geom, mat);

      const phi = Math.random() * Math.PI * 2;
      const theta = Math.acos(2 * Math.random() - 1);
      obj.position.set(
        radius * Math.sin(theta) * Math.cos(phi),
        radius * Math.sin(theta) * Math.sin(phi),
        radius * Math.cos(theta)
      );

      obj.curveTightness = curveTightness;
      obj.curveOffset = Math.random() * Math.PI * 2;
      obj.curveSpeed = Math.random() * 0.01 + 0.005;

      this.cont3D.add(obj);
      this.objects.push(obj);
    }
  }

  inputChanged(event) {
    switch (event.type) {
      case 'materials':
      case 'geometries':
      case 'count':
        this.generateObjects();
        break;
      case 'radius':
      case 'curveTightness':
        this.updateObjectParameters();
        break;
    }
  }

  updateObjectParameters() {
    const radius = this.inputs.get('radius');
    const curveTightness = this.inputs.get('curveTightness');
    for (let obj of this.objects) {
      obj.curveTightness = curveTightness;
    }
  }

  dispose() {
    if (this.objects) {
      for (let obj of this.objects) {
        obj.material.dispose();
        obj.geometry.dispose();
        this.cont3D.remove(obj);
      }
    }
    super.dispose();
  }

  update(dt) {
    super.update(dt);
    const radius = this.inputs.get('radius');
    for (let obj of this.objects) {
      const t = (Date.now() * obj.curveSpeed + obj.curveOffset) % (Math.PI * 2);
      const newPosition = new THREE.Vector3(
        obj.position.x + Math.cos(t) * obj.curveTightness,
        obj.position.y + Math.sin(t) * obj.curveTightness,
        obj.position
```
So I just pasted Assistant’s output back into the chat with Coder without any extra prompting. And Coder finished the code from where Assistant had stopped.
```javascript
        .z + Math.cos(t) * obj.curveTightness
      );
      newPosition.normalize().multiplyScalar(radius);
      obj.position.copy(newPosition);
    }
  }
}
```
This updated VisComp class now uses this.inputs.getObject() for accessing the count parameter and handles changes to the radius and curveTightness parameters separately, without regenerating the objects entirely. Instead, the updateObjectParameters() method is called, which updates the necessary parameters for the existing objects in the scene.
From what I’ve seen, when Chat GPT completes code snippets, it loses the code highlighting, which is a bit inconvenient.
I pasted the code and typedef into our CMS’s editor app, selected a few geometries and materials, and arbitrary values for the count and curveTightness inputs.
The code didn’t run: an error was thrown from somewhere inside the THREE.js library. I checked the GPT-generated code and easily figured out where the error was coming from. But I wanted to see if GPT would be able to identify it.
```
ee.min.js:6 Uncaught (in promise) TypeError: Cannot convert undefined or null to object
    at Function.keys (<anonymous>)
    at Ki.updateMorphTargets (three.min.js:6:106109)
    at new Ki (three.min.js:6:105732)
    at SphereSurfaceVisComp.generateObjects (9d8e385b-ca6d-45aa-8ed2-674e27798ee9:27:19)
    at SphereSurfaceVisComp.start (9d8e385b-ca6d-45aa-8ed2-674e27798ee9:4:10)
    at SceneSingle.js:332:20
    at Generator.next (<anonymous>)
    at s (asyncToGenerator.js:3:20)
    at o (asyncToGenerator.js:25:9)
    at asyncToGenerator.js:32:7
```
Its solution didn’t work, but it was an OK try.
It seems that you’re encountering an error related to the THREE.js library, specifically in the updateMorphTargets function. This error typically occurs when the geometry is undefined or null. To resolve this issue, please ensure that you have properly loaded and assigned the geometries before creating the mesh in the generateObjects() method.

Here’s the updated generateObjects() method with added checks for valid geometries and materials:

```javascript
generateObjects() {
  if (this.objects) {
    for (let obj of this.objects) {
      obj.material.dispose();
      obj.geometry.dispose();
      this.cont3D.remove(obj);
    }
  }
  this.objects = [];

  const materials = this.inputs.get('materials');
  const geometries = this.inputs.get('geometries');
  const count = this.inputs.getObject('count');
  const radius = this.inputs.get('radius');
  const curveTightness = this.inputs.get('curveTightness');

  if (!materials || !geometries || materials.length === 0 || geometries.length === 0) {
    return;
  }

  for (let i = 0; i < count; i++) {
    const mat = materials[i % materials.length];
    const geom = geometries[i % geometries.length];

    if (!mat || !geom) {
      continue;
    }

    const obj = new THREE.Mesh(geom, mat);
    const phi = Math.random() * Math.PI * 2;
    const theta = Math.acos(2 * Math.random() - 1);
    obj.position.set(
      radius * Math.sin(theta) * Math.cos(phi),
      radius * Math.sin(theta) * Math.sin(phi),
      radius * Math.cos(theta)
    );

    obj.curveTightness = curveTightness;
    obj.curveOffset = Math.random() * Math.PI * 2;
    obj.curveSpeed = Math.random() * 0.01 + 0.005;

    this.cont3D.add(obj);
    this.objects.push(obj);
  }
}
```

This updated generateObjects() method checks if the materials and geometries arrays are not null and not empty, and checks if the individual mat and geom elements are not null before creating the mesh. If the geometry or material is not valid, the loop will continue without creating the object. Make sure your assets are properly loaded before creating the VisComp.
The error in the code was caused by my initial prompt, which used the wrong XRC framework method, so GPT was working from an incorrect example…
I informed GPT about it and told it I would fix the code myself.
Having fixed the code, the composition was now running, and looking fairly decent. Have a look:
There is just one issue: when I increase the radius like at the end of the video, the objects all end up on the same plane which looks a bit boring. Looking at the code, I’m not sure why this is happening. Let’s see if Coder can figure it out faster than me.
Let’s prompt it:
When I change the radius parameter, in some cases all the objects end up positioned on a circle instead of a sphere. Can you fix that please?
Coder seems confident in its response, which absolutely doesn’t fix it.
I apologize for the confusion. The issue you’re experiencing is caused by the way the objects are updated in the update() method. The current implementation does not account for the change in the radius when updating the position of the objects.

Here’s the modified update() method that should resolve the issue:

```javascript
update(dt) {
  super.update(dt);
  const radius = this.inputs.get('radius');
  for (let obj of this.objects) {
    const t = (Date.now() * obj.curveSpeed + obj.curveOffset) % (Math.PI * 2);
    const newPosition = obj.position.clone();
    newPosition.add(new THREE.Vector3(
      Math.cos(t) * obj.curveTightness,
      Math.sin(t) * obj.curveTightness,
      Math.cos(t) * obj.curveTightness
    ));
    newPosition.normalize().multiplyScalar(radius);
    obj.position.copy(newPosition);
  }
}
```

In this updated update() method, we first clone the object’s current position and then add the new movement vector to it. This approach ensures that the object remains on the sphere’s surface even when the radius is changed.
Let’s try again:
The issue is still there.
As usual, Coder apologises for the confusion, before offering a solution that sounds OK, but actually isn’t.
I apologize for the confusion. I realize now that the previous approach still had some issues. I have re-evaluated the problem and created a new solution that should resolve it. This time, I am using spherical coordinates to update the object’s position according to the new radius while maintaining their relative distribution on the sphere.
Here’s the modified update() method:

```javascript
update(dt) {
  super.update(dt);
  const radius = this.inputs.get('radius');
  for (let obj of this.objects) {
    const t = (Date.now() * obj.curveSpeed + obj.curveOffset) % (Math.PI * 2);
    const spherical = new THREE.Spherical();
    spherical.setFromVector3(obj.position);
    spherical.radius = radius;
    spherical.phi += Math.cos(t) * obj.curveTightness * dt;
    spherical.theta += Math.sin(t) * obj.curveTightness * dt;
    obj.position.setFromSpherical(spherical);
  }
}
```

In this updated update() method, we convert the object’s position to spherical coordinates using THREE.Spherical. We update the radius, phi, and theta values of the spherical coordinates and then convert them back to Cartesian coordinates, which are applied to the object’s position. This ensures that the objects remain on the sphere’s surface when the radius is changed, and they move along their own curve.
Still not fixed. When I change the radius, the objects are now all in the same position.
As it starts typing, I get the feeling it’s not going to manage it. It starts by rewriting the update() function so that it looks like one of the previous incorrect ones. But surprise! It does identify the problem and realises it also needs to update the generateObjects() function. The issue is now solved.
I apologize for the confusion in my previous responses. I understand the issue now and have revised the solution accordingly. The updated approach maintains each object’s relative position on the sphere when the radius is changed.
Here’s the modified update() method:

```javascript
update(dt) {
  super.update(dt);
  const radius = this.inputs.get('radius');
  for (let obj of this.objects) {
    const t = (Date.now() * obj.curveSpeed + obj.curveOffset) % (Math.PI * 2);
    const newPosition = obj.initialPosition.clone();
    newPosition.add(new THREE.Vector3(
      Math.cos(t) * obj.curveTightness,
      Math.sin(t) * obj.curveTightness,
      Math.cos(t) * obj.curveTightness
    ));
    newPosition.normalize().multiplyScalar(radius);
    obj.position.copy(newPosition);
  }
}
```

You should also store the initial position of each object when you generate them in the generateObjects() method:

```javascript
// ...
const obj = new THREE.Mesh(geom, mat);
const phi = Math.random() * Math.PI * 2;
const theta = Math.acos(2 * Math.random() - 1);
obj.position.set(
  radius * Math.sin(theta) * Math.cos(phi),
  radius * Math.sin(theta) * Math.sin(phi),
  radius * Math.cos(theta)
);
obj.initialPosition = obj.position.clone();
// ...
```

In this updated update() method, we use the object’s initial position as a base and then add the new movement vector to it. This approach ensures that the object remains on the sphere’s surface even when the radius is changed. By storing and using the initial position of each object, we avoid the issue of all objects converging to the same position.
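The mechanism is easier to see outside the framework. Below is a stripped-down sketch of my understanding of the bug, using plain JavaScript stand-ins for the THREE.Vector3 operations (the radius and tightness values are illustrative, not the composition’s actual inputs). The per-frame curve offset has identical x and z components, so when the position is mutated every frame, repeated renormalisation squeezes out the object’s own starting point and drags everything towards the plane x = z; offsetting from a stored initial position keeps each object anchored to its own spot.

```javascript
// Minimal stand-ins for the THREE.Vector3 operations used in update().
const add = (a, b) => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const scale = (v, s) => ({ x: v.x * s, y: v.y * s, z: v.z * s });
const normalize = (v) => scale(v, 1 / Math.hypot(v.x, v.y, v.z));

// The per-frame curve offset. Its x and z components are identical,
// so it always lies in the plane x = z.
const curveOffset = (t, tightness) => ({
  x: Math.cos(t) * tightness,
  y: Math.sin(t) * tightness,
  z: Math.cos(t) * tightness,
});

// Buggy approach: mutate the position every frame. With a tightness that is
// large relative to the radius, the offset dominates after renormalization,
// and every object is dragged into the plane x = z (illustrative values).
let pos = { x: 0, y: 0, z: 0.1 };
for (let step = 0; step < 50; step++) {
  pos = scale(normalize(add(pos, curveOffset(step * 0.1, 1))), 0.1);
}
// pos.x and pos.z are now essentially equal, whatever the start point was.

// Fixed approach: offset from a stored initial position each frame, so two
// objects with different anchors stay apart (radius 1, tightness 0.3).
const initialA = { x: 1, y: 0, z: 0 };
const initialB = { x: 0, y: 0, z: 1 };
let posA, posB;
for (let step = 0; step < 50; step++) {
  posA = scale(normalize(add(initialA, curveOffset(step * 0.1, 0.3))), 1);
  posB = scale(normalize(add(initialB, curveOffset(step * 0.1, 0.3))), 1);
}
// posA and posB remain clearly distinct points on the sphere.
```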
Chat GPT has successfully coded my brief. It doesn’t look great though, and I think that’s mainly due to my selection of materials and geometries – I picked the first things at hand.
I can see one thing we can add to the code to make it look better: some rotation of the objects would be nice. The objects’ speeds could also vary through time.
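Roughly what I have in mind, as a hypothetical helper that isn’t part of the generated VisComp yet (the function name, 0.5 variation and 0.5 frequency are my own placeholder choices):

```javascript
// Hypothetical helper: make each object's speed oscillate slowly around its
// base value. `phase` would be the object's curveOffset, so every object
// drifts out of sync with the others.
function effectiveSpeed(baseSpeed, timeSec, phase, variation = 0.5) {
  // variation = 0.5 -> the speed swings between 50% and 150% of baseSpeed.
  return baseSpeed * (1 + variation * Math.sin(timeSec * 0.5 + phase));
}

// In update(dt), the rotation could then be as simple as:
//   obj.rotation.x += obj.spinSpeed * dt;
//   obj.rotation.y += obj.spinSpeed * 0.7 * dt;
// with obj.spinSpeed assigned a random value in generateObjects().
```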
And in terms of geometries / materials, I could do with some kind of concept to make it more interesting. I can see this working well if we were to use shader materials for the objects, with the objects’ speed and / or position affecting the shaders, e.g. faster objects tending towards red and slower ones towards blue ( some kind of budget Doppler effect ).
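The budget Doppler idea could start as simply as a speed-to-colour mapping, sketched below. This is a hypothetical helper of my own, not framework code; the 0.005–0.015 range matches the curveSpeed values the generated code assigns ( Math.random() * 0.01 + 0.005 ), and a real version would feed the result into a ShaderMaterial colour uniform.

```javascript
// Map a speed in [minSpeed, maxSpeed] to an RGB colour:
// slow -> blue, fast -> red, linearly blended and clamped at the ends.
function dopplerColor(speed, minSpeed = 0.005, maxSpeed = 0.015) {
  const t = Math.min(1, Math.max(0, (speed - minSpeed) / (maxSpeed - minSpeed)));
  return { r: t, g: 0, b: 1 - t };
}
```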
But let’s see if Chat GPT can offer any creative input. To make sure Coder GPT stays focused on the task (we have to be mindful of its token limitations), I’ll ask our Assistant for some suggestions.
The coder instance has finished coding this. Can you suggest ways in which to make this more interesting visually, both in terms of additional features in the code, and selection of geometry, materials, postprocessing and background?
Surprisingly, Assistant GPT doesn’t pull out the standard ‘As an AI language model I can’t be creative’ spiel. It gives me a list of things, which aren’t that useful – I want it to be more specific and tell me exactly what I need to use. I can tell by myself that using different geometries and materials should make it look better; what I want to know is which geometries and materials I should use.
Certainly! Here are some suggestions to make the visuals more interesting:
- Geometry and materials: Instead of using simple geometries, you can use more complex and interesting geometries like a TorusKnotGeometry or an IcosahedronGeometry. You can also mix different types of geometries for a more diverse visual composition. For materials, you can use a variety of materials like MeshStandardMaterial, MeshPhongMaterial, or MeshToonMaterial. Consider using textures and environment maps to create more realistic and visually appealing materials. Also, consider using vertex and fragment shaders to create custom materials.
- Postprocessing: Apply postprocessing effects to enhance the visuals. Some interesting postprocessing effects available in THREE.js are:
- BloomEffect
- DepthOfFieldEffect
- FilmEffect
- VignetteEffect
- ChromaticAberrationEffect
- GodRaysEffect
- Background: Instead of using a solid color background, consider using a Skybox or a dynamically generated procedural background using Perlin noise or a custom fragment shader. This will make the scene feel more immersive and engaging.
- Additional features in the code:
- Add interactivity: Make the objects respond to user input, such as mouse/touch events, by changing their position, rotation, or scale.
- Animate properties: Animate the colors, materials, or geometries of the objects to create a more dynamic scene.
- Add lights: Include various types of lights like PointLight, DirectionalLight, or SpotLight to create a more visually interesting scene. Consider using shadows for added depth.
- Use a camera controller: Implement a camera controller like OrbitControls or FlyControls to allow users to navigate the scene and explore the objects from different perspectives.
- Advanced features:
- Physically based rendering (PBR): Use PBR materials like MeshStandardMaterial or MeshPhysicalMaterial to achieve more realistic lighting and reflections.
- Reflections and refractions: Add environment maps to create reflective or refractive materials, making the scene more visually rich.
- Particles: Add a particle system to create effects like smoke, fire, or sparkles, which can add more life to the scene.
By combining these suggestions, you can create a more visually engaging and interesting scene in the XRC framework.
using your suggestions, can you select some arbitrary options for this composition?
Certainly! Here’s a combination of options to create an interesting visual composition:
- Geometry and materials:
  Geometries:
  - THREE.TorusKnotGeometry
  - THREE.IcosahedronGeometry
  - THREE.BoxGeometry
  Materials:
  - MeshStandardMaterial with a texture map and roughness/metalness maps
  - MeshPhongMaterial with a texture map and specular map
  - MeshToonMaterial with a gradient map
- Postprocessing: Apply the following postprocessing effects:
- BloomEffect: to create a glow around brighter areas
- VignetteEffect: to softly darken the corners of the screen
- ChromaticAberrationEffect: to simulate the lens distortion
- Background: Use a Skybox with 6 textures to create an environment map that represents a space scene or a beautiful landscape.
- Additional features in the code:
- Interactivity: Change the rotation of objects when the user clicks on them.
- Animate properties: Animate the colors or emissive properties of the materials over time.
- Add lights: Add a PointLight with a slight color tint and a DirectionalLight for general scene illumination. Enable shadows for the DirectionalLight.
- Use a camera controller: Implement OrbitControls to allow users to navigate around the scene.
- Advanced features:
- Physically based rendering (PBR): Use MeshStandardMaterial for the TorusKnotGeometry and IcosahedronGeometry objects.
- Reflections: Add an environment map to the MeshStandardMaterial for the TorusKnotGeometry object to create a reflective surface.
- Particles: Add a particle system around the objects to simulate floating dust particles in the scene.
By implementing these options, you’ll create a visually engaging composition in the XRC framework with an interesting combination of geometries, materials, effects, and features.
Some of these are OK – the postprocessing and additional code features shouldn’t look too bad. I’m not too keen on the materials and background selection.
As Chat GPT doesn’t have access to our CMS (for now..), I can’t make it decide which assets to use.
Since I’m planning on being as lazy as possible, in part 2 of this we’ll rework the code to make it more procedural – getting rid of the geometry and material inputs. Chat GPT will generate them by code, and maybe also add us an abstract procedural skybox. I want AI to write my code and make it look good, so I can have a sandwich and a beer.