<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="./atom.xsl" type="text/xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<title>trist.am</title>
	<subtitle>Software and Philosophy. Or something like that</subtitle>
	<link href="https://www.trist.am/atom.xml" rel="self" type="application/atom+xml"/>
    <link href="https://www.trist.am"/>
	<generator uri="https://www.getzola.org/">Zola</generator>
	<updated>2024-04-27T00:00:00+00:00</updated>
	<id>https://www.trist.am/atom.xml</id>
	<entry xml:lang="en">
		<title>Atmosphere Rendering</title>
        <subtitle>How to pretty-up your planet rendering</subtitle>
		<published>2024-04-27T00:00:00+00:00</published>
		<updated>2024-04-27T00:00:00+00:00</updated>
		<link href="https://www.trist.am/blog/2024/atmosphere-rendering/" type="text/html"/>
		<id>https://www.trist.am/blog/2024/atmosphere-rendering/</id>
		<content type="html"> <![CDATA[ <p>If you start researching rendering realistic atmospheres for planets, the rabbit hole goes very deep. Having been on this particular dive a few times now, I figured it's time to actually publish some notes - partly for my own later reference, but hopefully also of use to others.</p>
<h1 id="tl-dr">TL;DR</h1>
<p>Right off the bat, if you just want to dive in with a state-of-the-art atmosphere rendering technique, then go with <a href="https://doi.org/10.1111/cgf.14050">A Scalable and Production Ready Sky and Atmosphere Rendering Technique</a> by Sébastien Hillaire. It is the same atmosphere rendering technique used by Unreal Engine; it is fast and scalable, has fewer edge cases than previous approaches, and the author has made <a href="https://github.com/sebh/UnrealEngineSkyAtmosphere">reference source code available</a>.</p>
<p>On the other hand, if you have more specialised needs, or just want to go deeper on atmosphere rendering, then stick around...</p>
<span id="continue-reading"></span><h1 id="how-we-got-here">How we got here</h1>
<p>There is a long history of realtime sky rendering, but the vast majority have been concerned with rendering the sky from (or at least near) the ground. I'm primarily interested in planetary atmosphere rendering - we want to be able to fly to space and back, with a seamless atmosphere throughout.</p>
<p>For planetary atmospheres, it all starts with Nishita et al.'s 1993 classic, <a href="https://dl.acm.org/doi/pdf/10.1145/166117.166140">Display of The Earth Taking into Account Atmospheric Scattering</a>. This is going to lay the foundation for all the techniques to come, so it's worth the read even if just for the basic concepts of atmospheric scattering and optical depth. It is worth noting that at the time there was no way to run these calculations in realtime.</p>
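<p>Optical depth is just an integral of atmospheric density along a ray. As a rough illustration (my own sketch, not code from the paper), here it is for the simplest possible case, a straight vertical ray through an exponentially decaying atmosphere, compared against the closed-form answer; the 8 km scale height is a ballpark figure for Earth's Rayleigh atmosphere:</p>

```python
import math

def optical_depth_vertical(h0, h1, scale_h=8000.0, steps=1000):
    """Midpoint-rule integral of relative density exp(-h/scale_h)
    along a vertical ray from altitude h0 up to h1 (metres)."""
    dh = (h1 - h0) / steps
    total = 0.0
    for i in range(steps):
        h = h0 + (i + 0.5) * dh  # altitude at the midpoint of this segment
        total += math.exp(-h / scale_h) * dh
    return total

# closed form for comparison: H * (exp(-h0/H) - exp(-h1/H))
H = 8000.0
numeric = optical_depth_vertical(0.0, 80e3, H)
exact = H * (1.0 - math.exp(-80e3 / H))
```

<p>The real techniques evaluate this integral along arbitrary rays through the atmosphere shell, which is exactly the quantity that the lookup tables in the papers below precompute.</p>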
<p>The big jump to realtime doesn't come until 2005, when GPU Gems 2 is published. Sean O'Neil contributes <a href="https://developer.nvidia.com/gpugems/gpugems2/part-ii-shading-lighting-and-shadows/chapter-16-accurate-atmospheric-scattering">Chapter 16: Accurate Atmospheric Scattering</a>, in which he uses a 2D lookup table to accelerate Nishita's single scattering approximation. His technique is still in use here and there, as it's simple to set up and produces reasonable results when viewed from space. The major drawback to this technique is that it doesn't account for the effects of multiple scattering at all.</p>
<div class="callout">
    <span class="icon">!</span>
    <p>I used Sean O'Neil's technique to render the atmosphere in <a href="/blog/2009/starfall-planet-rendering/">my Starfall demo</a>.</p>

</div>
<p>Shortly after, in 2008, Bruneton and Neyret crack the multiple scattering problem with <a href="https://doi.org/10.1111/j.1467-8659.2008.01245.x">Precomputed Atmospheric Scattering</a>. They use a large 4-dimensional lookup table packed into a 3D texture to evaluate multiple scattering, which can be precomputed on the fly using a GPGPU implementation. Although source code is made public, it's quite complicated, and adoption is somewhat hampered until 2017, when Eric Bruneton releases <a href="https://ebruneton.github.io/precomputed_atmospheric_scattering/">a new reference implementation</a>, which also improves on the original results. This model is still widely used, and looks stunning in many applications. The main drawbacks are that the lookup tables can cause rendering artefacts in some cases, they are fairly expensive to recompute, and the technique struggles to render very thick atmospheres.</p>
<h1 id="where-we-stand-today">Where we stand today</h1>
<p>And that brings us back to Sébastien Hillaire's <a href="https://doi.org/10.1111/cgf.14050">A Scalable and Production Ready Sky and Atmosphere Rendering Technique</a> in 2020. Hillaire eliminates the big 4D lookup table, mostly by noting that portions of the algorithm can be evaluated at significantly reduced resolutions without loss of quality in the output image. Armed with a set of much smaller lookup tables, he's able to update them all within a single frame, giving artists live feedback as they adjust the atmosphere parameters. As a nice bonus, eliminating the big lookup table also eliminates a number of rendering artefacts, and he can render thicker atmospheres.</p>
<h1 id="what-s-next">What's next?</h1>
<p>The techniques I've covered up until now produce great results for Earth-like atmospheres, but ideally we'd like to be able to render all sorts of planets, and we'd also like to keep pushing the envelope on rendering fidelity.</p>
<p>With that in mind, <a href="https://doi.org/10.1109/TVCG.2020.3030333">Interactive Visualization of Atmospheric Effects for Celestial Bodies</a>, and <a href="https://diglib.eg.org/handle/10.1111/cgf15010">Physically Based Real-Time Rendering of Atmospheres using Mie Theory</a> both tackle rendering of the Martian atmosphere, by adding improved models for aerosol size and distribution (among other changes) to Bruneton's technique. This nets more accurate rendering of Earth's atmosphere as well, and somewhere along the way they even figured out how to render rainbows...</p>
<p>There also may be mileage to be had in stylised rendering, for applications that don't require photorealism. The only work I'm aware of in this direction is the aptly-named <a href="https://www.researchgate.net/publication/333369111_Aesthetically-Oriented_Atmospheric_Scattering">Aesthetically-Oriented Atmospheric Scattering</a>.</p>
<h1 id="see-also">See also</h1>
<p>The Virtual Terrain Project is sadly defunct, but they have a pretty good <a href="http://vterrain.org/Atmosphere/">overview of atmosphere rendering</a> up through Bruneton's work.</p>
<p>Bruneton himself surveyed the major atmosphere rendering approaches as of 2017 in <a href="https://doi.org/10.1109/TVCG.2016.2622272">A Qualitative and Quantitative Evaluation of 8 Clear Sky Models</a>.</p>
 ]]> </content>
	</entry>
	<entry xml:lang="en">
		<title>New beginnings</title>
        <subtitle>Drastic times call for drastic measures</subtitle>
		<published>2020-12-30T00:00:00+00:00</published>
		<updated>2020-12-30T00:00:00+00:00</updated>
		<link href="https://www.trist.am/blog/2020/new-beginnings/" type="text/html"/>
		<id>https://www.trist.am/blog/2020/new-beginnings/</id>
		<content type="html"> <![CDATA[ <p><img src="https://www.trist.am/blog/2020/new-beginnings/birds.jpg" alt="Crows circling the treetops" /></p>
<p>I have quit my job at Oculus, and immigrated to Spain.</p>
<span id="continue-reading"></span>
<p>External events have this habit of conspiring to throw your life into harsh focus. The first couple of months of Covid were relatively all right for me, I guess. I prefer working from home anyway. I don't mind entertaining myself at home. The next four months were a steady slide into monotony. Work and life blurring together until the passage of the days loses all meaning. Eat. Sleep. Work. Rinse and repeat. Lucky to be healthy and employed, but not feeling that way.</p>
<p>At the same time, the Brexit clock was rapidly counting down towards December 31st, and the moment when I would lose my right to remain in the EU. I think it may be hard to grasp the significance of that from the outside, but EU citizenship is a privilege pretty much beyond comparison. The right to live and work in any of more than two dozen member countries, spanning a wide variety of languages, cultures, and climates. No visas, no work permits, no fuss.</p>
<p>So I sold what I could (which wasn't much, given the mass-exodus then underway in Seattle), hired someone to cart away the rest, terminated my lease, and flew over to Spain in October - before the winter lockdown closed that door forever. Met up with family on this side, so we could hunt for property here. And when the lockdown inevitably returned, rented an apartment to wait it out.</p>
<p>Spain is of course delightful, lockdown notwithstanding. Friendly people, excellent food, mild climate even up here in the north, and just a staggeringly low cost of living compared to urban America. After six weeks here my desire to go back to work under Covid conditions was just about at zero, so it was mildly fortuitous (albeit less so for my colleagues) when my team at work was on the receiving end of a fatal reorg - the day before I was scheduled to return. Followed shortly by the company deciding it wasn't ready to permanently allow remote work across international borders anyway. So I handed in my notice, and from today I'll be taking a bit of a break from the corporate side of tech.</p>
<p>What's next for me? Waiting out the lockdown, finding a small farm to restore, planting some orchards, and maybe getting a little arts and crafts going on the side. All while sorting out the many bits and pieces of paperwork that accompany immigration.</p>
<p>After that, we'll see. I'd like to figure out a tech niche that doesn't involve working 9-5. Which may prove to be an interesting challenge, as someone who prefers specialising in obscure technical areas, and has almost exclusively worked on big corporate projects the last decade 🤷‍♀️</p>
 ]]> </content>
	</entry>
	<entry xml:lang="en">
		<title>SPIRV layout checking for Rust</title>
        <subtitle>Less error-prone CPU &lt;-&gt; GPU data transfer.</subtitle>
		<published>2020-01-05T00:00:00+00:00</published>
		<updated>2020-01-05T00:00:00+00:00</updated>
		<link href="https://www.trist.am/blog/2020/spirv-struct-layout/" type="text/html"/>
		<id>https://www.trist.am/blog/2020/spirv-struct-layout/</id>
		<content type="html"> <![CDATA[ <p>Isn't it just great when the GLSL compiler adds unexpected padding between fields in a buffer, and the only way you can tell is that your rendering is broken in weird ways? Having lost a couple of hours to one such bug, I decided I needed a way to quickly catch that sort of error.</p>
<span id="continue-reading"></span>
<p>Enter the <a href="https://crates.io/crates/spirv-struct-layout">spirv-struct-layout</a> crate. It's pretty straightforward - you define a Rust struct, <code>#[derive(SpirvLayout)]</code> on it, and then later on you can invoke the <code>check_spirv_layout(...)</code> function with the SPIRV bytecode of your shader.</p>
<p>Let's say we start with the following buffer structure in GLSL:</p>
<pre data-lang="glsl" style="background-color:#2b303b;color:#c0c5ce;" class="language-glsl "><code class="language-glsl" data-lang="glsl"><span style="color:#b48ead;">layout</span><span>(std430, binding = </span><span style="color:#d08770;">0</span><span>) </span><span style="color:#b48ead;">buffer</span><span> Uniforms {
</span><span>  </span><span style="color:#b48ead;">mat4</span><span> model_view;
</span><span>  </span><span style="color:#b48ead;">vec3</span><span> light_dir;
</span><span>  </span><span style="color:#b48ead;">vec4</span><span> position;
</span><span>} buf;
</span></code></pre>
<p>If you spend a lot of time writing shaders, you may already have caught the problem we want to address here: the spec says <code>vec4</code> must be aligned to 16 bytes, so the compiler is going to add 4 bytes of padding after <code>light_dir</code> to ensure that <code>position</code> is correctly aligned.</p>
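<p>To see where those 4 bytes come from, the std430 offset arithmetic can be sketched in a few lines of Python (my own illustration, not part of the crate; the sizes and alignments below are the std430 values for these three types):</p>

```python
def std430_offsets(fields):
    """fields: list of (name, size, align) in bytes.
    Each field's offset is rounded up to its alignment."""
    offsets, cursor = {}, 0
    for name, size, align in fields:
        cursor = (cursor + align - 1) // align * align  # round up to alignment
        offsets[name] = cursor
        cursor += size
    return offsets

layout = std430_offsets([
    ("model_view", 64, 16),  # mat4: four vec4 columns
    ("light_dir",  12, 16),  # vec3: occupies 12 bytes but aligns to 16
    ("position",   16, 16),  # vec4
])
# layout == {"model_view": 0, "light_dir": 64, "position": 80}
```

<p>A naive C-style packing would place <code>position</code> at offset 76 (64 + 12), which is exactly the kind of mismatch the layout check is designed to catch.</p>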
<p>Let's go ahead and define a Rust struct to match this GLSL type:</p>
<pre data-lang="rust" style="background-color:#2b303b;color:#c0c5ce;" class="language-rust "><code class="language-rust" data-lang="rust"><span style="color:#b48ead;">use </span><span>spirv_struct_layout::{CheckSpirvStruct, SpirvLayout};
</span><span>
</span><span>#[</span><span style="color:#bf616a;">repr</span><span>(C)]
</span><span>#[</span><span style="color:#bf616a;">derive</span><span>(SpirvLayout)]
</span><span style="color:#b48ead;">struct </span><span>Uniforms {
</span><span>    </span><span style="color:#bf616a;">model_view</span><span>: [</span><span style="color:#b48ead;">f32</span><span>; 16],
</span><span>    </span><span style="color:#bf616a;">light_dir</span><span>: [</span><span style="color:#b48ead;">f32</span><span>; 3],
</span><span>    </span><span style="color:#bf616a;">position</span><span>: [</span><span style="color:#b48ead;">f32</span><span>; 4],
</span><span>}
</span></code></pre>
<p>And finally we can run the SPIRV layout check:</p>
<pre data-lang="rust" style="background-color:#2b303b;color:#c0c5ce;" class="language-rust "><code class="language-rust" data-lang="rust"><span style="color:#b48ead;">fn </span><span style="color:#8fa1b3;">main</span><span>() {
</span><span>    </span><span style="color:#b48ead;">let</span><span> spirv = cast_slice_u8_to_u32!(include_bytes!(&quot;</span><span style="color:#a3be8c;">simple.frag.spv</span><span>&quot;));
</span><span>
</span><span>    Uniforms::check_spirv_layout(&quot;</span><span style="color:#a3be8c;">buf</span><span>&quot;, spirv);
</span><span>}
</span></code></pre>
<p>And if all goes well, the program will exit with an error:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>thread &#39;main&#39; panicked at &#39;assertion failed: `(left == right)`
</span><span>  left: `80`,
</span><span> right: `76`: field position should have an offset of 80 bytes, but was 76 bytes&#39;, spirv_struct_layout/examples/simple/main.rs:20:5
</span></code></pre>
<p>Now that the disaster has been averted, you have a few ways to address the underlying issue.</p>
<ul>
<li>The most straightforward approach is to just <a href="https://stackoverflow.com/a/38172697/1232666">never use vec3 in uniform buffers</a> - the alignment rules in GLSL just don't work out nicely when interchanging with host types, and various older GLSL compilers implement them incorrectly regardless.</li>
<li>You could also insert 4 bytes of padding on the Rust side (i.e. insert a padding <code>f32</code> member between <code>light_dir</code> and <code>position</code>), although unused struct members are somewhat unergonomic to deal with in Rust.</li>
<li>For this very simple example, you could also just switch the order of the last two entries in the struct - since the <code>vec4</code> neatly fills up the entire 16 bytes required by the alignment.</li>
<li>And lastly, you could build a <code>Vec3</code> type in Rust and use <code>#[repr(align(16))]</code> to <a href="https://play.rust-lang.org/?version=stable&amp;mode=debug&amp;edition=2015&amp;gist=cf533407553e292d5109556e342a4981">force the same alignment as GLSL uses</a> (but note Rust will also expand the size of the struct to 16 bytes in this case, meaning that <code>vec3</code> followed by a <code>float</code> will still end up with 4 bytes of padding between).</li>
</ul>
<p>This crate is still a work in progress, so when you run into rough edges, please report them <a href="https://github.com/swiftcoder/spirv-struct-layout">over on the github repo</a>.</p>
 ]]> </content>
	</entry>
	<entry xml:lang="en">
		<title>Hello, World!</title>
        <subtitle>Once more unto the breach.</subtitle>
		<published>2018-11-14T00:00:00+00:00</published>
		<updated>2018-11-14T00:00:00+00:00</updated>
		<link href="https://www.trist.am/blog/2018/hello-world/" type="text/html"/>
		<id>https://www.trist.am/blog/2018/hello-world/</id>
		<content type="html"> <![CDATA[ <p>It was past time I had a personal website, so here one is. If you came here looking for my older content, you'll find most of it over on my <a href="https://swiftcoder.wordpress.com" target="_blank">old wordpress</a>. I've only migrated a little of the historical content, seeing as most of it is very out of date, so there it will remain until wordpress shuts down (or the heat death of the universe, whichever comes first).</p>
 ]]> </content>
	</entry>
	<entry xml:lang="en">
		<title>PlayFab Game Jam Postmortem</title>
        <subtitle>Beginner's luck, I guess.</subtitle>
		<published>2015-10-26T00:00:00+00:00</published>
		<updated>2015-10-26T00:00:00+00:00</updated>
		<link href="https://www.trist.am/blog/2015/playfab-game-jam-postmortem/" type="text/html"/>
		<id>https://www.trist.am/blog/2015/playfab-game-jam-postmortem/</id>
		<content type="html"> <![CDATA[ <p><img src="https://www.trist.am/blog/2015/playfab-game-jam-postmortem/somethings-not-right-here.jpg" alt="Tangerines with googly eyes" /></p>
<p>A couple of weeks ago I participated in a 48-hour game jam hosted by PlayFab here in Seattle, with fellow procedural planet veteran <a href="http://www.alexcpeterson.com">Alex Peterson</a>, my good friend and composer <a href="https://github.com/llanginger">Leo Langinger</a>, and the fortunate last minute addition of artist <a href="https://brentrawls.wordpress.com">Brent Rawls</a>.</p>
<p>We were both <a href="https://blog.playfab.com/blog/congrats-to-the-seattle-game-jam-winners/">surprised and excited to have won this game jam</a>, especially given the number and quality of competing entries.</p>
<span id="continue-reading"></span>
<p>Our entry, the somewhat awkwardly-named Mad-scien-tile-ology, is a Unity-powered take on the classic 'match-3' game (a la Bejeweled or Candy Crush), with the addition of a biological 'creep' that consumes the game board as the player attempts to match tiles to slow its inexorable progress:</p>
<div class="youtube">
    <iframe src="https://www.youtube-nocookie.com/embed/JLAPBIBRETg" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>
</div>
<h1 id="what-went-right">What went right</h1>
<ul>
<li>Separation of responsibilities<br />
We had what I can only describe as an optimal team composition for such a short development cycle. Leo was able to focus on composing the music with a little sound design on the side, while Brent concentrated on the artwork, Alex handled all of the UI programming, and I wrote the gameplay logic.</li>
<li>Time management<br />
We hit the ground running, with an initial match-3 prototype playable early Saturday morning. Thereafter we went into planning mode, whiteboarding the roadmap and scheduling checkpoints throughout the day for each deliverable and each asset integration point. While the estimates weren't perfect, and we missed a solid handful of items, the organisation helped us hit 90% of what we set out to do, and still get 2 relatively decent nights of sleep during the competition.</li>
<li>Building on Unity<br />
Alex and I have both played around with Unity in the past, but neither of us had ever shipped a full game with it. Unity represented fantastic time savings over building a game from scratch, and the asset pipeline alone saved us hours in wiring up the animations and audio.</li>
<li>Having an artist, and an unusual artstyle<br />
We hit on the idea of stop-motion papercraft before finding Brent, but honestly were it not for his efforts it would have been a disaster. Brent ran with the idea and produced visuals which are striking and unusual. The real-world textures of the paper, the bold colour palette and the stop-motion animations really help the game stand out from other games of this type.</li>
<li>Having a composer, and an original score<br />
It's easy to underestimate the impact of music on a video game, and as one of the only teams with a professional composer, I think we had an advantage out of the gate. Leo composed original scores for the title screen, gameplay loop, and victory/loss conditions. The upbeat and detailed music really helps sell the 'mad science' theme, and in between composing he produced a full range of foley effects for gameplay events that bring the action on screen to life.</li>
<li>Playtesting, playtesting, playtesting<br />
We had a playable (if minimal) match-3 game from mid Saturday morning, and that allowed us to play test each new element as we added it. This can be a double-edged sword - when short on time, you can find yourself playing the game instead of implementing features, but it did give us a good idea of what did and didn't work in the context of the game, and allowed us to fit at least some balance tweaks into the time available.</li>
</ul>
<p><img src="https://www.trist.am/blog/2015/playfab-game-jam-postmortem/whiteboard.jpg" alt="Whiteboard containing both game design and project management" /></p>
<h1 id="what-didn-t-go-so-well">What didn’t go so well</h1>
<ul>
<li>Version control + Unity = not so good<br />
We are used to working with a variety of distributed version control systems, so at the start of the competition we threw everything into a git repository and went to town. Unfortunately, we quickly learned that Unity isn’t terribly well suited to git. While all the source files and assets are handled just fine, a great deal of the configuration and logical wiring is contained in the single main.scene file, and being a binary file, git only sees it as an opaque blob. After a couple of merges that resulted in having to rewire assets by hand, we had to fall back to editing separate scene files and copy/pasting to the main scene file before we merged.</li>
<li>Time is the enemy<br />
48 hours is not a long time, and irrespective of our planning, time grew increasingly tight as the competition progressed. While we were able to finish the game to a point we were fairly happy with, a number of features fell by the wayside, most notably highscores. We had intended to implement online leaderboards using our host PlayFab’s SDK, but that work had to be deprioritised to make time to fix critical gameplay bugs, and eventually we ran out of time.</li>
<li>Last-minute changes are not your friend<br />
This one largely follows from the last two points, but Alex and I both tweaked different elements right before we packaged the game for judging, and somewhere in our merge we managed to lose the explosion effect for the player’s super-meter, and to drastically increase the pace and difficulty of the game in the final build. Neither change badly affected our ability to demonstrate the game, but it’s a lesson learned to put the pencils down and focus on testing in the final hours.</li>
<li>Always be prepared to talk<br />
Winning the contest came out of left field, and the surprise coupled with a general lack of sleep had us roughly ad-libbing our acceptance, and the subsequent quotes for the organiser’s press release. While one shouldn’t presume to win any competition, it turns out to be worth putting a few minutes of thought into what you would say if you do. Even a couple of sentences helps smooth over that deer-in-the-headlights moment.</li>
</ul>
<p><img src="https://www.trist.am/blog/2015/playfab-game-jam-postmortem/game-art-101.jpg" alt="Game art 101" /></p>
<h1 id="what-s-next">What’s next?</h1>
<p>We’re working on getting some of the more egregious bugs fixed, but if you’re of a mind to see how it is all put together, the source code and Unity project are available over on GitLab. I don’t have binaries available for download yet, but we’ll try to make the game available in playable form once we have a few more of the kinks worked out.</p>
<p>And I’d be remiss if I didn’t give a shout out to PlayFab, for hosting (and catering!) a fantastic game jam, and our fellow competitors, who built some truly amazing games. Here’s looking forward to next time.</p>
<p><img src="https://www.trist.am/blog/2015/playfab-game-jam-postmortem/excited-users.jpg" alt="Jam attendees checking out our game after the competition" /></p>
 ]]> </content>
	</entry>
	<entry xml:lang="en">
		<title>Permanence (or a lack thereof)</title>
        <subtitle>Impermanence in the digital age.</subtitle>
		<published>2015-03-13T00:00:00+00:00</published>
		<updated>2015-03-13T00:00:00+00:00</updated>
		<link href="https://www.trist.am/blog/2015/permanence-or-a-lack-thereof/" type="text/html"/>
		<id>https://www.trist.am/blog/2015/permanence-or-a-lack-thereof/</id>
		<content type="html"> <![CDATA[ <p>When Google VP Vint Cerf <a href="https://web.archive.org/web/20160714124506/http://eandt.theiet.org/news/2015/feb/vint-cerf-digital-data.cfm">warned that increased dependence on technology could lead to a 'digital dark age'</a>, he merely echoed the concern of everyone involved in the preservation of information in a digital world. While it is expedient to dismiss his claim as sensationalist and/or paranoid, Google's announcement yesterday that they are closing down the Google Code source code repositories provides an unfortunate echo to his cries.</p>
<span id="continue-reading"></span><div class="callout">
    <span class="icon">!</span>
    <p>Perhaps somewhat ironically, the link to Cerf's interview has gone dead three times now. Lacking any active news outlets still carrying copies of that interview, I've given up and pointed the link at the wayback machine.</p>

</div>
<p>When I received Google's email detailing the repositories I have ownership over, I found a number of University projects, some python sample code, an entry to a video game competition, my now-venerable python user interface library, and one more item which I had forgotten about: a <a href="http://code.google.com/p/staranger/">collaboration some years back to build a video game</a>.</p>
<p>Like most such ventures, the collaboration fell apart after a few short weeks, the project creator and I went our separate ways, and I never heard from him again. But now, with the code scheduled to be consigned to oblivion within a year, it seemed like a good time to reach out and formally put the repository to rest.</p>
<p>It was then that I realised just how easy it is to lose information forever. I have an email address for the project's creator, but it turned out to be a long-defunct hotmail account, in the name of the project, not the user. The handful of emails we exchanged don't list a real name, and mining various websites I was only able to find a possible first name, as well as a location of Christmas Island - a place so obscure I doubt he actually lived there. Team collaboration was largely accomplished through a private forum, but the project's website is long gone, the contents of the forum with it. The domain is still registered, but through a registrar in China, which doesn't list an owner in their whois records.</p>
<p>Long story short, unless he happens to read this blog post, I'll probably never hear from 'star.anger@hotmail.com' again. And in the greater scheme of things, it doesn't really matter: the game was never made, what small quantity of code made it to the repository will never be reused, and I doubt there is clear ownership of the code and assets regardless. The principle of it all still rankles, though.</p>
<p>For however short a time, a group of individuals came together to build something ambitious. That endeavour is over, the fleeting sense of camaraderie long gone. All that remains is an untouched repository and the half-remembrance of an anonymous typist behind a presumably-distant keyboard.</p>
<p>Who knows? Perhaps the other team members have stayed in touch. All that I know is that it's all too easy to lose track of people and things in a world based entirely on ones and zeroes...</p>
 ]]> </content>
	</entry>
	<entry xml:lang="en">
		<title>Logarithmic Spiral Distance Field</title>
        <subtitle>Turn, turn, turn</subtitle>
		<published>2010-06-21T00:00:00+00:00</published>
		<updated>2010-06-21T00:00:00+00:00</updated>
		<link href="https://www.trist.am/blog/2010/logarithmic-spiral-distance-field/" type="text/html"/>
		<id>https://www.trist.am/blog/2010/logarithmic-spiral-distance-field/</id>
		<content type="html"> <![CDATA[ <p>I have been playing around with distance field rendering, inspired by some of <a href="https://iquilezles.org/articles/distfunctions2d/">Iñigo Quílez's work</a>. Along the way I needed to define analytic distance functions for a number of fairly esoteric geometric primitives, among them the <a href="https://en.wikipedia.org/wiki/Logarithmic_spiral">logarithmic spiral</a>:</p>
<p><img src="https://www.trist.am/blog/2010/logarithmic-spiral-distance-field/Logarithmic_Spiral_Pylab.svg.png" alt="A logarithmic spiral drawn on a polar grid" /></p>
<span id="continue-reading"></span><div class="callout">
    <span class="icon">!</span>
    <p>Some years later, Inigo referenced this post in his legendary <a href="https://www.shadertoy.com/view/ld3Gz2">raymarched Snail</a> demo.</p>

</div>
<p>The distance function for this spiral is not particularly hard to derive, but the derivation isn't entirely straightforward, and it isn't documented anywhere else, so I thought I would share. I am only going to deal with logarithmic spirals centered on the origin, but the code is trivial to extend for spirals under translation.</p>
<p>Spirals are considerably more tractable in polar coordinates, so we start with the polar coordinate form of the logarithmic spiral equation:</p>
<p>\( r = ae^{b\Theta} \) (1)</p>
<p>Where (roughly) a controls the overall scale of the spiral (its radius at \( \Theta = 0 \)), and b controls how tightly the spiral is wound.</p>
<p>Since we are given an input point in x,y Cartesian form, we need to convert that to polar coordinates as well:</p>
<p>\( r_{target} = \sqrt{x^2 + y^2},\; \Theta_{target} = \operatorname{atan2}(y, x) \)</p>
<p>Now, we can observe that the closest point on the spiral to our input point must be on the line running through our input point and the origin - draw the line on the graph above if you want to check for yourself. Since the logarithmic spiral passes through the same radius line every 360°, this means that the closest point must be at an angle of:</p>
<p>\( \Theta_{final} = \Theta_{target} + n *360^{\circ} \) (2)</p>
<p>Where n is an integer. We can combine (1) and (2) to arrive at an equation for r in terms of n:</p>
<p>\( r = ae^{b(\Theta_{target} + n*360^{\circ})} \) (3)</p>
<p>Which means we can find r if we know n. Unfortunately we don't know n, but we do know \( r_{target} \), which is an approximation for the value of r. We start by rearranging equation (3) in terms of n:</p>
<p>\( n = \frac{\frac{ln(\frac{r}{a})}{b} - \Theta_{target}}{360^{\circ}} \) (4)</p>
<p>Now, feeding in the value of \( r_{target} \) for r will give us an approximate value for n. This approximation will be a real (float, if you prefer), and we can observe from the graph above that the closest point must be at either the next larger or smaller integer value of n.</p>
<p>If we take the floor and ceil of our approximation for n, we will have both integer quantities, and can feed each value back into equation (3) to determine the two possible values of r, \( r_1 \) and \( r_2 \). The final step involves finding which of these is the closest, and the distance thereof:</p>
<p>\( \min(|r_1 - r_{target}|,\; |r_2 - r_{target}|) \)</p>
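<p>As a quick sanity check (my own, not part of the original derivation), feeding a point that lies exactly on the spiral through equations (3) and (4) should yield a distance of zero:</p>

```python
import math

a, b = 1.0, 0.2
theta = 1.3
# construct a point exactly on the spiral
r_on = a * math.exp(b * theta)
x, y = r_on * math.cos(theta), r_on * math.sin(theta)

r = math.sqrt(x * x + y * y)                          # r_target
t = math.atan2(y, x)                                  # theta_target
n = (math.log(r / a) / b - t) / (2.0 * math.pi)       # equation (4)
r1 = a * math.exp(b * (t + 2.0 * math.pi * math.ceil(n)))   # equation (3)
r2 = a * math.exp(b * (t + 2.0 * math.pi * math.floor(n)))
distance = min(abs(r1 - r), abs(r2 - r))              # effectively zero
```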
<p>And there you have it:</p>
<p><img src="https://www.trist.am/blog/2010/logarithmic-spiral-distance-field/spiral.png" alt="Distance field for a logarithmic spiral" /></p>
<p>The Python source code below produces the image shown above, as a 1000x1000 pixel PNM image written to stdout. If you aren't familiar with the PNM format, it is an exceedingly simple ASCII-based analogue of a bitmap image, and can be loaded directly in GIMP.</p>
<pre data-lang="python" style="background-color:#2b303b;color:#c0c5ce;" class="language-python "><code class="language-python" data-lang="python"><span style="color:#b48ead;">import </span><span>math
</span><span>
</span><span style="color:#b48ead;">def </span><span style="color:#8fa1b3;">spiral</span><span>(</span><span style="color:#bf616a;">x</span><span>, </span><span style="color:#bf616a;">y</span><span>, </span><span style="color:#bf616a;">a</span><span>=</span><span style="color:#d08770;">1.0</span><span>, </span><span style="color:#bf616a;">b</span><span>=</span><span style="color:#d08770;">1.0</span><span>):
</span><span>  </span><span style="color:#65737e;"># calculate the target radius and theta
</span><span>  r = math.</span><span style="color:#bf616a;">sqrt</span><span>(x*x + y*y)
</span><span>  t = math.</span><span style="color:#bf616a;">atan2</span><span>(y, x)
</span><span>
</span><span>  </span><span style="color:#65737e;"># early exit if the point requested is the origin itself
</span><span>  </span><span style="color:#65737e;"># to avoid taking the logarithm of zero in the next step
</span><span>  </span><span style="color:#b48ead;">if </span><span>(r == </span><span style="color:#d08770;">0</span><span>):
</span><span>    </span><span style="color:#b48ead;">return </span><span style="color:#d08770;">0
</span><span>
</span><span>  </span><span style="color:#65737e;"># calculate the floating point approximation for n
</span><span>  n = (math.</span><span style="color:#bf616a;">log</span><span>(r/a)/b - t)/(</span><span style="color:#d08770;">2.0</span><span>*math.pi)
</span><span>
</span><span>  </span><span style="color:#65737e;"># find the two possible radii for the closest point
</span><span>  upper_r = a * math.</span><span style="color:#bf616a;">pow</span><span>(math.e, b * (t + </span><span style="color:#d08770;">2.0</span><span>*math.pi*math.</span><span style="color:#bf616a;">ceil</span><span>(n)))
</span><span>  lower_r = a * math.</span><span style="color:#bf616a;">pow</span><span>(math.e, b * (t + </span><span style="color:#d08770;">2.0</span><span>*math.pi*math.</span><span style="color:#bf616a;">floor</span><span>(n)))
</span><span>
</span><span>  </span><span style="color:#65737e;"># return the minimum distance to the target point
</span><span>  </span><span style="color:#b48ead;">return </span><span style="color:#96b5b4;">min</span><span>(</span><span style="color:#96b5b4;">abs</span><span>(upper_r - r), </span><span style="color:#96b5b4;">abs</span><span>(r - lower_r))
</span><span>
</span><span style="color:#65737e;"># produce a PNM image of the result
</span><span style="color:#b48ead;">if </span><span>__name__ == &#39;</span><span style="color:#a3be8c;">__main__</span><span>&#39;:
</span><span>  </span><span style="color:#b48ead;">print </span><span>&#39;</span><span style="color:#a3be8c;">P2</span><span>&#39;
</span><span>  </span><span style="color:#b48ead;">print </span><span>&#39;</span><span style="color:#a3be8c;"># distance field image for spiral</span><span>&#39;
</span><span>  </span><span style="color:#b48ead;">print </span><span>&#39;</span><span style="color:#a3be8c;">1000 1000</span><span>&#39;
</span><span>  </span><span style="color:#b48ead;">print </span><span>&#39;</span><span style="color:#a3be8c;">255</span><span>&#39;
</span><span>  </span><span style="color:#b48ead;">for </span><span>i </span><span style="color:#b48ead;">in </span><span style="color:#96b5b4;">range</span><span>(-</span><span style="color:#d08770;">500</span><span>, </span><span style="color:#d08770;">500</span><span>):
</span><span>    </span><span style="color:#b48ead;">for </span><span>j </span><span style="color:#b48ead;">in </span><span style="color:#96b5b4;">range</span><span>(-</span><span style="color:#d08770;">500</span><span>, </span><span style="color:#d08770;">500</span><span>):
</span><span>      </span><span style="color:#b48ead;">print </span><span>&#39;</span><span style="color:#d08770;">%3d</span><span>&#39; % </span><span style="color:#96b5b4;">min</span><span>( </span><span style="color:#d08770;">255</span><span>, </span><span style="color:#bf616a;">int</span><span>(</span><span style="color:#bf616a;">spiral</span><span>(i, j, </span><span style="color:#d08770;">1.0</span><span>, </span><span style="color:#d08770;">0.5</span><span>)) ),
</span><span>    </span><span style="color:#b48ead;">print
</span></code></pre>
 ]]> </content>
	</entry>
	<entry xml:lang="en">
		<title>The Price of Progress</title>
        <subtitle>Faster hardware, slower software. What&#x27;s wrong with this picture?</subtitle>
		<published>2010-01-21T00:00:00+00:00</published>
		<updated>2010-01-21T00:00:00+00:00</updated>
		<link href="https://www.trist.am/blog/2010/the-price-of-progress/" type="text/html"/>
		<id>https://www.trist.am/blog/2010/the-price-of-progress/</id>
		<content type="html"> <![CDATA[ <p>I recently installed the beta of Microsoft Office 2010, and the first thing that struck me is how it performs noticeably worse on my 3.0 GHz quad-core gaming PC than Office '98 performed on a now 12-year-old PowerBook G3, powered by a little 250 MHz PPC processor.</p>
<p>You can probably guess the next stage of this anecdote... Office '98 on that G3 performed ever-so-slightly worse than Office 4.0 on a truly ancient PowerBook 180, which sported a fantastic (for the time) 33 MHz Motorola 68030 CPU.</p>
<span id="continue-reading"></span>
<p>Now, I am not being entirely fair here - the spellchecker is much faster, the grammar checker didn't even exist back then, the user interface only had to render at a 1024x768 resolution in those days, and various other ancillary features have been added and improved. But the core issue remains: Office 2010 (or 2007, which is not in beta), running on a decent gaming PC, takes longer to launch and is less responsive to keyboard input than Office 4.0 on a 33 MHz 68k.</p>
<p>And the problem isn't restricted to Microsoft products alone, as many pieces of software have suffered the same sort of creep, not least among them the Mac and Windows operating systems.</p>
<p>In the open-source world and among smaller developers this phenomenon is far less common: a well-configured linux or BSD installation boots in a handful of seconds, Blender (sporting most of the features of expensive software such as 3DS Max and Maya) launches immediately and always remains responsive, and Maxis' Spore takes minutes to start up and load a game while Eskil's Love throws you into the game in under 10 seconds.</p>
<p>My current computer is many thousands of times faster than that PowerBook 180, so in theory at least, we should be able to do far more, and do the same old things much faster. Why then the slowdown?</p>
<p>It can't be lack of resources - we are talking about companies such as Microsoft, Apple and Adobe, all with enormous R&amp;D and development budgets, and teams of experienced programmers and engineers. Besides, the open-source guys manage just fine, some with just a handful of programmers, and most with no budget whatsoever.</p>
<p>It has been argued that <a href="https://web.archive.org/web/20080605152435/http://hq.fsmlabs.com/~cort/papers/lazy/lazy.nohead.html">programmer laziness</a> (a.k.a. badly educated programmers) is to blame, but I am not sure this can be the entire story. Certainly the 'dumbing down' of University-taught computer science hasn't helped, nor has the widespread rise of languages that 'protect' the programmer from the hardware, nor the rise of programming paradigms that seek to abstract away from low-level knowledge. But that is the topic of another rant, and is somewhat tangential to the issue at hand. Companies can afford to hire the best programmers, and could, if they wanted to, create the demand necessary to reform education practices.</p>
<p>And that brings us to the real heart of the issue: software developers measure success in terms of sales and profit. As long as your software sells, there is no need to spend money on making the software perform better. And if you happen to have a virtual monopoly, such as Microsoft's Office or Adobe's Photoshop, then there is no incentive to improve the customer's experience, beyond what is needed to sell them a new version each year.</p>
<p>However, when you lose such a monopoly, the game changes, and it generally changes for the better. When Firefox, Opera, and later Safari started cutting a swathe into Microsoft's Internet Explorer monopoly, Microsoft was forced to adapt. The latest version of Internet Explorer is fast, standards-compliant, and relatively free of the virus infection risks that plagued earlier versions.</p>
<p>This outcome of the browser war has led at least a few to the conclusion that open-source is the answer, and that open-source will inevitably recreate what has been developed commercially, and either surpass that commercial product, or force it to evolve. Sadly, I don't see this happening particularly quickly, or on a wide scale - OpenOffice is playing catch-up in its efforts to provide an out-of-the-box replacement for Microsoft Office, GIMP lags far behind Photoshop, and linux, despite widespread adoption in a few key fields (namely budget servers and embedded devices) still lags far behind Windows and Mac in many areas.</p>
<p>For many years this wasn't a problem - every few years you would buy a new computer, typically an order of magnitude faster than the computer it replaced. If new versions of your software consumed a few million more cycles, well, there were cycles to burn, and besides, the hardware companies needed a market for faster computers, didn't they?</p>
<p>Nowadays the pendulum is swinging in the opposite direction. Atom-powered netbooks, Tegra-powered tablets, ARM-powered smartphones - all of these promise a full computing experience in tiny packages with minimal power consumption. Even though the iPhone in your hand is considerably more powerful than that 33 MHz PowerBook 180, it doesn't have even a fraction of the computing power offered by your shiny new laptop or desktop. And users expect a lot more than they did in the early nineties - animated full colour user interfaces, high-definition streaming video and flash applications, oh, and don't drain the battery!</p>
<p>CPU cycles today are becoming as precious as they ever were, only now many of our programmers have no experience of squeezing every last drop of performance out of them. Has the business of software development come full circle, and once again become the territory of the elite 'low-level' programmer?</p>
 ]]> </content>
	</entry>
	<entry xml:lang="en">
		<title>Starfall: Planet Rendering</title>
        <subtitle>Now with atmosphere</subtitle>
		<published>2009-05-28T00:00:00+00:00</published>
		<updated>2009-05-28T00:00:00+00:00</updated>
		<link href="https://www.trist.am/blog/2009/starfall-planet-rendering/" type="text/html"/>
		<id>https://www.trist.am/blog/2009/starfall-planet-rendering/</id>
		<content type="html"> <![CDATA[ <p>I just posted a quick youtube video to demonstrate the current state of the planet renderer. This is early development stuff, and the eye candy is minimal, but it should give you some idea of the scope.</p>
<div class="youtube">
    <iframe src="https://www.youtube-nocookie.com/embed/8aQhCO6hpu4" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>
</div>
<p>Part of the rationale behind this video is to streamline the whole video capture and posting process. Unfortunately, it hasn’t been entirely straightforward so far. I went through a number of video capture tools before settling on FRAPS, which works well enough (though I would have preferred a free tool).</p>
<p>I have also had a terrible time converting the video for youtube – ATI’s Avivo video converter is blazingly fast, but apparently produces an incompatible audio codec in any of the high-quality settings. I was forced to fall back to the CPU-based Auto Gordian Knot, which both does a worse job and is very slow on my little Athlon 64 X2.</p>
<p>I am now experimenting with ffmpeg, but the command line options are confusing to say the least. If anyone has any clues/tips/tricks for getting FRAPS encoded video (and audio) into a suitable format for youtube HD, please let me know.</p>
 ]]> </content>
	</entry>
	<entry xml:lang="en">
		<title>Procedural Planets</title>
        <subtitle>Let&#x27;s turn the terrain rendering up to 11</subtitle>
		<published>2009-05-02T00:00:00+00:00</published>
		<updated>2009-05-02T00:00:00+00:00</updated>
		<link href="https://www.trist.am/blog/2009/procedural-planets/" type="text/html"/>
		<id>https://www.trist.am/blog/2009/procedural-planets/</id>
		<content type="html"> <![CDATA[ 
  <figure class="right" >
    <img src="https://www.trist.am/blog/2009/procedural-planets/planet.webp" alt="screenshot of a planet rendered from low orbit" />
    
      <figcaption class="center">Normal-mapped planet.</figcaption>
    
  </figure>
<p>The semester is over at last, and my grades should be in by Monday. It has been a tiring semester, but not a bad one, with interesting courses in database implementation and parallel architectures, not to mention philosophy.</p>
<p>With the end of the semester comes a little more free time, and I have spent a chunk of it recreating my old procedural planet demo in Python/Pyglet.</p>
<p>The first big plus is that whereas my previous implementation was over 5,000 lines of C++ code, the python version is under 1,000 loc, plus a few hundred lines of tightly optimised C for the tile generation back end.</p>
<p>The other plus is that this version actually works. I finally found the time to fully implement the crack fixing algorithm, and the results are very good (although there are still a couple of unresolved edge cases).</p>
<p>I also implemented normal map generation in GLSL, to offload the computation to the GPU. This more than doubles the performance of tile generation, to the point where several tiles can be subdivided or combined each frame.</p>

  <figure class="right" >
    <img src="https://www.trist.am/blog/2009/procedural-planets/planet-wire.webp" alt="screenshot of the wireframe of a planet rendered from low orbit" />
    
      <figcaption class="center">Wireframe of planet.</figcaption>
    
  </figure>
<p>From a technical standpoint, the planet starts as a cube, and each face is subdivided to form a quad tree. For each tile at the deepest level of the quad tree, a small heightmap is generated using 32 octaves of 3-dimensional simplex noise. This heightmap is used to generate the vertices for the tile, and passed as a texture to the GPU in order to generate the normal map.</p>
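<p>The simplest form of the cube-to-sphere step is plain normalisation; a minimal sketch (the function name is made up, and a real implementation may use a fancier mapping to reduce distortion near cube corners):</p>

```python
import math

def cube_to_sphere(x, y, z):
    """Project a point on the surface of a cube onto the unit sphere
    by normalising it (a sketch, not the actual demo code)."""
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)
```

<p>Each tile vertex is projected this way, then displaced outward by the height sampled at that point.</p>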
<p>Because applying tangent-space normal maps to a sphere is an absolute nightmare, I take the unusual approach of generating object-space normal maps. These are considerably more expensive to generate, but avoid the tangent-space mismatch at cube edges, and look fairly decent in practice.</p>
<p>Interestingly, this version allows one to fly right down to the planet surface, and maintains interactive framerates even on my Intel integrated X3100 (complete with the awful Mac drivers). By the time I add atmosphere shaders and detail textures, I expect that I will have to switch over to my desktop, but for now, I am very happy with the performance.</p>
<p>Of course, there are still several challenges to overcome, in particular the issue of depth buffer precision. The planet you see above does not suffer from a lack of depth buffer precision, but it is only 1/10 the scale of the Earth, and ideally I would like to be able to render gas giants on the order of Jupiter as well. This requires some clever on-the-fly adjustment of the camera frustum, and I don’t quite have a handle on the best way to accomplish it.</p>
 ]]> </content>
	</entry>
	<entry xml:lang="en">
		<title>Ashima IV</title>
        <subtitle>Entry to the uDevGames 2009 gamedev competition</subtitle>
		<published>2009-03-05T00:00:00+00:00</published>
		<updated>2009-03-05T00:00:00+00:00</updated>
		<link href="https://www.trist.am/blog/2009/ashima-iv/" type="text/html"/>
		<id>https://www.trist.am/blog/2009/ashima-iv/</id>
		<content type="html"> <![CDATA[ 
  <figure class="right" >
    <img src="https://www.trist.am/blog/2009/ashima-iv/splash.webp" alt="screenshot of the game&#x27;s splash screen&#x2F;main menu" />
    
      <figcaption class="center">Splash screen&#x2F;main menu</figcaption>
    
  </figure>
<p>I haven’t had much time to update here in a while, having been hard at work on an entry for the udevgames contest. If you read my earlier post, you may recall I was initially going to enter a pixel-art Lemmings clone, but midway through development, I decided that the concept was basically not fun to play.</p>
<p>So I switched over to building a 3D space-sim, with only a single month remaining in the competition. The plus side was that I have much more experience with 3D graphics, but lack of time was still a killer. I can’t honestly say it is much more fun to play, given its unfinished state, but the core gameplay is certainly there.</p>
<p>The game is basically a prototype, and an example of a larger game produced with Python and Pyglet. As per the rules of the competition, the game is open source, although in its present state, it probably isn’t much use to anyone. When I have the time, I intend to cleanup and comment the code, which will hopefully be useful to others starting out with Python, Pyglet, or games development in general.</p>

  <figure class="right" >
    <img src="https://www.trist.am/blog/2009/ashima-iv/gameplay.webp" alt="screenshot of in-game spaceship combat" />
    
      <figcaption class="center">Space combat</figcaption>
    
  </figure>
<p>Despite the code being cross-platform, I only have a Mac binary up for now, as I have been unable to coerce ODE into even a semblance of stability under Windows. When or if I manage to sort the crashes out, it will run fine on both platforms.</p>
<p>So if you have a Mac, and want to take the binary out for a spin, you can grab it from <a href="https://web.archive.org/web/20090409232654/http://www.udevgames.com/games/entry/ashima_iv">http://www.udevgames.com/games/entry/ashima_iv</a>, or if you would like to browse the source, visit the project page at <a href="https://github.com/swiftcoder/ashima-iv">https://github.com/swiftcoder/ashima-iv</a>. While you are about it, consider taking the time to check out the other entries to this year’s udevgames contest, and remember to vote for your favourites!</p>
 ]]> </content>
	</entry>
	<entry xml:lang="en">
		<title>Rendered height maps</title>
        <subtitle>The beginnings of terrain rendering</subtitle>
		<published>2008-11-17T00:00:00+00:00</published>
		<updated>2008-11-17T00:00:00+00:00</updated>
		<link href="https://www.trist.am/blog/2008/rendered-height-maps/" type="text/html"/>
		<id>https://www.trist.am/blog/2008/rendered-height-maps/</id>
		<content type="html"> <![CDATA[ 
  <figure class="right" >
    <img src="https://www.trist.am/blog/2008/rendered-height-maps/heightmap.webp" alt="screenshot of a patch of terrain, rendered with different colours depending on height" />
    
      <figcaption class="center">A combination of simplex and voronoi noise rendered as a height map.</figcaption>
    
  </figure>
<p>Spending a little time on the visualisation side of things at the moment. At right you can see the result of rendering the previous images as a height map. A simple colour map is applied based on elevation, and the terrain is lit.</p>
<p>The terrain is fairly low resolution, only 128×128 vertices, while the height map is 1024×1024 pixels. To increase the quality of the lighting, a normal map is generated from the height map, also at 1024×1024 pixels. This yields an 8:1 texel-to-vertex ratio along each axis, giving very decent performance while still rendering at a high quality.</p>
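<p>For the curious, deriving a normal map from a height map comes down to central differences; a minimal per-pixel sketch (the names and the <code>scale</code> factor are illustrative, not the exact code used here):</p>

```python
import math

def normal_from_heights(height, x, y, scale=1.0):
    """Estimate the surface normal at (x, y) of a height map, using
    central differences clamped at the edges of the map."""
    rows, cols = len(height), len(height[0])
    left = height[y][max(x - 1, 0)]
    right = height[y][min(x + 1, cols - 1)]
    below = height[max(y - 1, 0)][x]
    above = height[min(y + 1, rows - 1)][x]
    # the slope along each axis becomes the x/y component of the normal
    nx = (left - right) * scale
    ny = (below - above) * scale
    nz = 2.0  # two texel widths separate the sampled neighbours
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)
```

<p>A flat patch yields a straight-up normal, while a slope tilts the normal away from the rising side.</p>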
<p>The terrain is actually split into 4 equal sized patches, 64×64 vertices each, in order to aid culling. At some point I will improve this system into an adaptive quadtree, which should provide far better performance.</p>
<p>I am rendering this all on an Intel integrated X3100 graphics processor, and so far I have been very impressed with the performance. Despite running commercial games extremely badly, this card seems to take no performance hit when using shaders instead of fixed function rendering, and in fact, the current shader-based normal mapping is faster than the previous fixed-function lighting.</p>
 ]]> </content>
	</entry>
	<entry xml:lang="en">
		<title>Erosion</title>
        <subtitle>Now we take the noise away again</subtitle>
		<published>2008-11-12T00:00:00+00:00</published>
		<updated>2008-11-12T00:00:00+00:00</updated>
		<link href="https://www.trist.am/blog/2008/erosion/" type="text/html"/>
		<id>https://www.trist.am/blog/2008/erosion/</id>
		<content type="html"> <![CDATA[ 
  <figure class="right" >
    <img src="https://www.trist.am/blog/2008/erosion/erosion.webp" alt="Several sample images of generated noise" />
    
      <figcaption class="left">Clockwise from top left: base height map, erosion with distance 1, erosion with distance 10, and erosion with distance 100.</figcaption>
    
  </figure>
<p>After using a mixture of voronoi and simplex noise to generate a height map, I realised that it doesn’t look very good – in particular the noise generation creates hard edges and jagged formations everywhere.</p>
<p>The solution to this is, of course, erosion. At right you can see the results of applying a very simple approximation of thermal erosion to the generated height map.</p>
<p>This implementation is an image space post process. For each pixel, a fixed height is subtracted, and the algorithm then travels downhill, until it either reaches a maximum distance, or finds no adjacent pixel lower than the current pixel, at which point it deposits (adds) the same fixed height.</p>
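<p>In rough Python, one pass of that filter looks something like this (a sketch; the names and the amount carried are illustrative):</p>

```python
def erode(height, max_distance, talus=0.01):
    """One pass of the image-space erosion described above: each cell
    gives up a fixed amount, which is carried downhill and deposited."""
    h = [row[:] for row in height]  # work on a copy
    rows, cols = len(height), len(height[0])
    for y in range(rows):
        for x in range(cols):
            h[y][x] -= talus  # pick up a fixed height
            cy, cx = y, x
            for _ in range(max_distance):  # travel at most max_distance cells
                lowest = None
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = cy + dy, cx + dx
                    if 0 <= ny < rows and 0 <= nx < cols and h[ny][nx] < h[cy][cx]:
                        if lowest is None or h[ny][nx] < h[lowest[0]][lowest[1]]:
                            lowest = (ny, nx)
                if lowest is None:  # no lower adjacent cell: stop here
                    break
                cy, cx = lowest
            h[cy][cx] += talus  # deposit the same fixed height
    return h
```

<p>Note that material is conserved: every cell deposits exactly what it picked up.</p>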
<p>The images at right were generated by applying ten repetitions of this filter, for various values of the distance parameter. As you can see, larger distances tend to preserve detail on slopes, and result in large flat areas. Smaller values smooth out slopes as well, but don’t greatly affect the overall shape of the terrain.</p>
<p>This looks pretty good for a first stab at erosion, and is very fast, but I expect that a full implementation of thermal and hydraulic erosion will look substantially better.</p>
 ]]> </content>
	</entry>
	<entry xml:lang="en">
		<title>3D Noise</title>
        <subtitle>Entering the third dimension</subtitle>
		<published>2008-11-11T00:00:00+00:00</published>
		<updated>2008-11-11T00:00:00+00:00</updated>
		<link href="https://www.trist.am/blog/2008/3d-noise/" type="text/html"/>
		<id>https://www.trist.am/blog/2008/3d-noise/</id>
		<content type="html"> <![CDATA[ 
  <figure class="right" >
    <img src="https://www.trist.am/blog/2008/3d-noise/planet.png" alt="Several sample images of generated noise" />
    
      <figcaption class="left">From top: texture map for sphere generated using a blend of voronoi and value noise, and the previous texture applied to a sphere.</figcaption>
    
  </figure>
<p>Yesterday I set about generating 3D noise, in particular, texture maps for 3D planets. It sounds like a relatively straightforward extension of 2D noise, but unfortunately it didn't turn out that way.</p>
<p>First up, a SphereMapper generator. This handy little class takes a 2D coordinate, and transforms it into a 3D coordinate on the surface of a sphere, just a basic polar-to-Cartesian conversion. Of course, this generates 3D coordinates on a unit sphere, and while the front looked all right, the back looked absolutely terrible.</p>
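<p>The conversion itself is only a few lines; a sketch with illustrative names, assuming (u, v) are normalised texture coordinates:</p>

```python
import math

def sphere_map(u, v):
    """Map (u, v) in [0, 1] x [0, 1] onto the unit sphere via the
    usual polar-to-Cartesian conversion."""
    theta = u * 2.0 * math.pi  # longitude, 0..2*pi
    phi = v * math.pi          # colatitude, 0 at one pole, pi at the other
    x = math.sin(phi) * math.cos(theta)
    y = math.sin(phi) * math.sin(theta)
    z = math.cos(phi)
    return (x, y, z)
```

<p>Note that half of the resulting coordinates are negative, which is exactly where my noise implementations fell over.</p>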
<p>Turns out my Voronoi implementation didn't work correctly with negative coordinates. This required only a simple fix, but unfortunately that fix required axing the optional tiling qualities I had added in the day before.</p>
<p>Then it also turned out that my Simplex implementation didn't work with negative coordinates either. I haven't figured out a fix for this yet, so in the meantime, I have implemented a version of Value Noise (as described by Hugo Elias). This is a fairly decent approximation of Perlin/Simplex noise, and does work with negative coordinates, but the quality is a little worse, and the high quality version is considerably more expensive than simplex noise.</p>
<p>I will have to fix the simplex noise at some point, but in the meantime, value noise is a good generator to have, and it makes for pretty decent looking planets.</p>
<p>I also notice that my OpenGL sphere class has a lot of texture distortion in the polar regions - far more than should be expected. Apparently generating a sphere out of stacks and slices isn't as straightforward as one would imagine...</p>
 ]]> </content>
	</entry>
	<entry xml:lang="en">
		<title>Noise Again</title>
        <subtitle>Sounds intensify</subtitle>
		<published>2008-11-08T00:00:00+00:00</published>
		<updated>2008-11-08T00:00:00+00:00</updated>
		<link href="https://www.trist.am/blog/2008/noise-again/" type="text/html"/>
		<id>https://www.trist.am/blog/2008/noise-again/</id>
		<content type="html"> <![CDATA[ 
  <figure class="right" >
    <img src="https://www.trist.am/blog/2008/noise-again/noise2.webp" alt="Several sample images of generated noise" />
    
      <figcaption class="left">From top: perturbed voronoi, perturbed voronoi added to simplex fBm.</figcaption>
    
  </figure>
<p>Shortly after my last post, I realised my Voronoi basis had a problem: only the distance was taken into account for each cell. This has been corrected, with each cell now assigned a random base value, to which the distance is added.</p>
<p>At the same time, I noticed that the voronoi basis wasn't much use on its own - polygons are an unusual shape in nature. This led to the addition of a Turbulence module, which perturbs the input coordinates according to another generator.</p>
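<p>Turbulence amounts to simple function composition; a sketch with illustrative names:</p>

```python
def turbulence(source, perturb_x, perturb_y, strength=1.0):
    """Wrap a generator so that its input coordinates are displaced
    by two other generators, one per axis."""
    def generator(x, y):
        return source(x + strength * perturb_x(x, y),
                      y + strength * perturb_y(x, y))
    return generator
```

<p>Something along the lines of <code>turbulence(voronoi, noise_a, noise_b)</code> (generator names hypothetical) produces the perturbed voronoi in the top image.</p>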
<p>Together these additions allow for some striking images, and adding these to a few octaves of simplex noise lends itself well to terrain - the bottom image makes for a quite convincing height map, though improvements can obviously still be made.</p>
<p>I am also testing the inclusion of an additional diamond-square generation technique, but it doesn't play very well with other approaches. Unfortunately, diamond-square generation can only be used for square images with power-of-two dimensions, must be generated an entire image at a time (which pretty much precludes mixing it with other noise types), and only works with basis generators which have a gaussian distribution (i.e. not voronoi). Diamond-square does however offer very fast generation, so I think it will be included in the library - specifically for those applications that need extremely fast generation of fBm-like textures.</p>
 ]]> </content>
	</entry>
	<entry xml:lang="en">
		<title>Noise Library</title>
        <subtitle>Let&#x27;s make some noise!</subtitle>
		<published>2008-11-07T00:00:00+00:00</published>
		<updated>2008-11-07T00:00:00+00:00</updated>
		<link href="https://www.trist.am/blog/2008/noise-library/" type="text/html"/>
		<id>https://www.trist.am/blog/2008/noise-library/</id>
		<content type="html"> <![CDATA[ 
  <figure class="right" >
    <img src="https://www.trist.am/blog/2008/noise-library/noise.png" alt="Several sample images of generated noise" />
    
      <figcaption class="left">Clockwise from top left: simplex basis, voronoi basis, 8 octaves of simplex added to a voronoi basis, the previous tile rendered with a 4 colour palette.</figcaption>
    
  </figure>
<p>I recently moved all my graphics and games development over to Python, using Pyglet. Overall this has been a very good change, with a great increase in productivity, but unfortunately it has caused a few problems.</p>
<p>Previously, I had been using libnoise for all procedural generation, but it turns out that Pyglet is implemented using ctypes, while the only available python bindings for libnoise were generated using SWIG.</p>
<p>Naturally, the developers of ctypes and SWIG never made basic pointers compatible (nor with boost::python), so it turns out that there is no way to load a libnoise-generated image into a Pyglet/OpenGL texture.</p>
<p>I haven't been entirely happy with libnoise for some time (primarily because of difficulties tiling voronoi noise), so this gives me the perfect excuse to dive in and implement my own noise library.</p>
<p>The library is developed in C++ (for efficiency), and has an external interface written in C, to easily interface with Python (using ctypes). At this stage the entire library is contained in a single source file, and weighs in at just under 500 lines of code.</p>
<p>The code itself is flexible and extensible: You create one or more Generators (which can each combine other Generators), and a Renderer, and feed both to a Generate function.</p>
<p>The image above was generated by a Python script, and shows off all the current features of the library. Two generators are provided (simplex noise and voronoi), which can be combined into octaves (fractional Brownian motion), and blended together (weighted addition). The image can then be rendered in greyscale (such as for a heightmap), or rendered with a colour palette (as in the lower left image), and in either unsigned byte or float precision.</p>
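<p>The octave and blend combinators can be sketched generically in Python (names illustrative; the library's real interface is C, wrapped via ctypes):</p>

```python
def octaves(basis, count, lacunarity=2.0, gain=0.5):
    """Sum `count` octaves of a basis generator (fractional Brownian motion)."""
    def generator(x, y):
        total, amplitude, frequency = 0.0, 1.0, 1.0
        for _ in range(count):
            total += amplitude * basis(x * frequency, y * frequency)
            amplitude *= gain        # each octave contributes half as much
            frequency *= lacunarity  # and varies twice as fast
        return total
    return generator

def blend(a, b, weight_a=1.0, weight_b=1.0):
    """Combine two generators by weighted addition."""
    return lambda x, y: weight_a * a(x, y) + weight_b * b(x, y)
```

<p>With hypothetical <code>simplex</code> and <code>voronoi</code> basis functions, the third tile above would be roughly <code>blend(voronoi, octaves(simplex, 8))</code>.</p>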
<p>I hope to add several more generators, and in particular more blending functions, in the coming weeks. After that, with a little code cleanup, I think an open-source Google Code release is likely.</p>
 ]]> </content>
	</entry>
</feed>
