Now, the matter at hand is less of a success story and more of a clash of interests. A week ago, the Architectural/CAD community took great interest in the news that Vannevar Tech, a Google X spinoff, is working to develop an innovative new technology (derived from the secret Google X project codenamed Genie) that will disrupt the AEC playing field. In fact, the only reason the Arch/CAD crowd got excited about it is the big Google name behind Vannevar Tech. No further details whatsoever were released, except that the Google executive report mentioned an absolutely ridiculous $120Bn in possible annual revenue from this development.
Now, we all know that professional media have the depth and scope of a drugged wombat in their tech news reporting, so I went to do their job for them and mine the issue a little deeper and speculative-er. An abuse of some black magic yielded search information about a job posting that Vannevar has since removed and, as expected, it mentioned web visualization technologies and typical web languages, as well as the approach of “applying principles from software engineering, scalable computing, and open platforms to building design”. So, another cloud CAD quackery, yes? Yes. I said “OK, cool story”, vented off on Twitter and moved along.
Yesterday, the same source published this message about Eli Attia, an architect who, by his own testimony, invented the supposedly revolutionary technology behind Google’s Genie. According to Attia, Google executives basically ripped him off, and it’s not even about the money – it is about the ruined future of the technology that Attia calls his life’s work.
The select few (like Ralph Grabowski) who had the guts to read Attia’s plea in detail probably thought: okay, I know this type of person. They have all the delusions of grandeur yet never really accomplish anything. And the plea is, in fact, horribly composed and obviously biased. Mentions of manbearpig Al Gore and Michael Moore don’t really help, either.
Problem is, Eli Attia is not exactly that type. He is an accomplished architect and, even more importantly, an inventor with real patents to his name. That puts him outside the science/tech crank trope. So I decided to look for any unbiased information regarding his Engineered Architecture concept and figure out whether there is any possibility that Vannevar Tech indeed ripped off his invention.
And it is basically a large illustration of the fact that certain high-rises may exhibit certain geometrical regularities and can theoretically be described as a modular assembly.
And yes, don’t even ask, the star chart from BIM bullshit bingo is there, too.
You should probably read it yourself. This is not in any way an invention. This is an idea that tries too hard to be an invention. And if Vannevar Tech really builds their revolutionary product on this idea (highly unlikely from what I cited above in the job posting), people who expect them to live up to the hype will rue the day they bought the proverbial Vannevar Tech stock.
Of course we feel compassion towards Attia, because many of us experience similar fears sometimes. What if my brilliant idea is worth millions? What if it gets stolen by a corporation of great lawyer-might as soon as the idea’s value becomes obvious? The answer is: no, your idea isn’t worth a penny, however bright it is. Ideas, in fact, are public domain. The whole problem of patent system abuse exists because patents are now increasingly used to secure ideas, not inventions, and that is deeply wrong.
I am developing Shadowbinder, a processor that manipulates simple geometry using parametric mappings, so here are some fancy images to spark your interest. The core concept of the tool is to let a non-programming engineer feel as free as possible in the computational processing of geometrical models without having to actually program.
Here is the implementation of an orthospheric scaling of arbitrary geometry, built in parameter space. The operation needs a simple model file that describes the operations we want to perform on our geometry. It looks like this:
As you can guess, the [[Target]] section specifies that the transformation should be performed in orthospheric space.
We scale our model by a factor of 3 along one parameter axis (angle ‘delta’).
The tool then writes the results according to the model file to as many outputs as we specify (currently DXF, TXT, and some proprietary formats that I needed for my work are implemented).
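To give the idea in code, here is a minimal Python sketch of what such a model file and processor might look like. The section and key names below, and the spherical (r, delta, phi) parameter tuples, are illustrative stand-ins, not Shadowbinder’s actual syntax.

```python
# Hypothetical sketch of a Shadowbinder-style model file and processor.
# The real file format is not reproduced in the post; the [[Target]] and
# [[Transform]] sections and their keys are invented for illustration.
import math

MODEL_FILE = """\
[[Target]]
space = orthospheric
[[Transform]]
operation = scale
axis = delta
factor = 3
"""

def parse_model(text):
    """Parse the toy [[Section]] / key = value format into a dict of dicts."""
    model, section = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("[[") and line.endswith("]]"):
            section = line[2:-2]
            model[section] = {}
        else:
            key, _, value = line.partition("=")
            model[section][key.strip()] = value.strip()
    return model

def apply_transform(points, model):
    """Scale one spherical-parameter axis; points are (r, delta, phi) tuples."""
    factor = float(model["Transform"]["factor"])
    axis = model["Transform"]["axis"]
    index = {"r": 0, "delta": 1, "phi": 2}[axis]
    out = []
    for p in points:
        q = list(p)
        q[index] *= factor   # e.g. scale the 'delta' angle by 3
        out.append(tuple(q))
    return out

model = parse_model(MODEL_FILE)
points = [(1.0, math.pi / 12, 0.0), (1.0, math.pi / 6, 0.5)]
print(apply_transform(points, model))
```

The point of the sketch is the separation the post describes: the geometry stays dumb, while the meaningful operations live in a small, human-readable model file.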
1. Input geometry.
2. Orthospheric scaling results.
3. Another view of the resulting shape.
After performing the geometric transformations described in the model file, the tool seamlessly builds a finite element model for further analysis.
Why is this interesting? The workflow file and a geometry file together form a low-tech data model that separates different sorts of information. We have our geometry in a dumb DWG input file and we have the meaningful things that we want to do with it in a separate model file that describes our geometric workflow.
The parametric mapper is extensible, which means that any engineer can augment it with whatever transformation they need. As we combine this extensibility of geometric manipulation with options to link the processor to its inputs and outputs in a more persistent way (using web identifiers of data or a cloud connection), we will have a component built with the model paradigm in mind.
Developing Shadowbinder is an experiment to see how I can implement a low-tech, model-oriented approach in my current structural analysis process. I currently use it to create models that I otherwise wouldn’t be able to create, like this post-tensioned shell:
4. Model mapping of a post-tensioned shell
5. FEA results on the post-tensioned shell
Please share your ideas and current low-tech automation needs so that I can widen the application scope.
Model-Based Engineering (MBE) is getting more attention these days, and in order to explore it, I came up with a certain sort of roadmap.
To set the stage for the discussion of the model paradigm, here’s one little problem: the problem of scope and applicability of ideas coming from the community around engineering/design software. The issues we discuss are about data, automation, organization and technology. They are not restricted to engineering and design; they spill over into many other STEM fields, notably architecture and art too. The COFES congress, as an example, flies the banner of engineering software in its very name, yet the full scope and applicability of what we call the engineering software agenda definitely extends beyond engineering and is more elusive.
Venturing into the model paradigm
I would argue that the most usable notion for delimiting the scope of the engineering software world is the notion of the model. While it’s a banal fact that computational math permeates all sectors of STEM, it is modeling and the consequent usage of models that permeates engineering and the fields we usually connect with it. Model-Based Engineering (MBE) has emerged to put the model into the spotlight.
MBE will likely mature and become a typical engineering acronym with its own industry of software products to service it and a host of satellite terms like Model-Based Manufacturing. That means the term will evolve to deal with a certain, discrete subset of model-related engineering that could be described as “high-end MBE”. My interest here is to explore the usage of models on more simplistic levels that are hopefully also applicable to a wider range of disciplines: engineering, architecture, geometry modeling, conceptual design, and structural analysis.
A little overhaul of the notion of model
What is good about an emerging technology is that it is often vague and soft enough to barge into. Along the lines of what Evan Yares wrote (see Related Reads), MBE talk can mean different things to different people. This is why it seems reasonable to explicitly redefine some notions so that we can explore the model paradigm freely and in parallel to MBE proper.
The apparently accepted definition of Model-Based Engineering (and the only one detailed enough that I know of) can be found in some relevant NIST documents, and it is only rational to start with it. To pique your interest, here are some notable differences between this ‘proper’ image of MBE and what we will be setting up here.
NIST take: a model is a representation or idealization of the characteristics of a real-world system
My comment: the essence of model is the representation, aka mapping. Let’s also detail that we want to be able to build models for phenomena that are not usually seen as a real-world system (i.e. a product): design processes, workflows, information flows, down to operations on data storages, files and folders. If there is an activity or an artifact, there is always a model for it.
NIST take: models can be either computational or descriptive.
My comment: instead, let’s accept and develop the free multi-tier classification: models can be descriptive (for existing artifacts) or prescriptive (for those to be designed); declarative (what should we have) or imperative (how to get there); they can be computable, graphical, or semantic.
NIST take: core to MBE is the integration of descriptive models with computational models.
My comment: this is a very ambitious pursuit and this is why I refer to ‘proper’ MBE as something high-end. In the multi-tier classification that we will be using, integration tasks are worth a separate discussion. Instead, let’s speculate that since there always is a model, the goal of model paradigm in engineering is to build the representation for the model that you already have in your head, while also deciding which models are worth building and which should be left in the engineer’s mind.
This is about all for now, so let’s set some goalposts to head toward.
Where we are going from here
There are subtopics and related issues to model paradigm worth a separate discussion. Here are some of them.
Social and educational aspects of engineering model awareness. If developed properly, model paradigm could significantly disrupt the understanding of engineering with those entering the field.
Terminology, notions, and concepts. We will employ existing knowledge on the subject to describe a conceptual framework of the simplistic model paradigm. Let’s explore a set of goals for model-based engineering and a set of methods and techniques to reach those goals.
Analyzing the toolbox. Where resources to implement MBE are scarce, using existing and low-effort development of new tools matters. Here goes the discussion for cloud, UX and user complexity, self-hosted apps and whatnot. We will also discuss how model-based engineering relates to automation and project management.
Data models and the Holy Grail of SSoT. SSoT, achievable or not, is of paramount importance to the data model, and we are going to redefine it a little, too. Also, ever heard about the death of the filesystem? I doubt it’s that dead after all, at least for modest efforts. We’ll explore possibilities to employ rather low-level, common technologies to empower the engineering of models at the data management level.
Get on board. I actually plan to inevitably screw up a lot in this journey, so your input is crucial and will define whether it’s bollocks or not. Over the next month, we’ll explore the model paradigm, and the wrap-up will be presented at COFES St. Petersburg this June. Stay tuned and get into the discussion on Twitter at #ModelBased.
Digital corporations sometimes show F60.2 symptoms and what’s worse, they know where you live. Fancy exploring the community backlash?
Autodesk wasn’t the first renowned and praised digital innovator to attack its own loyal user base over the issues of copyright, trademark management, and intellectual property (IP). Another software behemoth, Facebook, has been doing it for quite some time (one example being a social marketing analysis site, facebookru.com, forced to abandon its former domain).
This action doesn’t come as a surprise: brand unification knows no mercy and some would remember the botched overhaul of certain Autodesk user forums.
The current issue was extinguished as quickly as the uproar against the Autodesk Legal team’s faux pas had flared up.
Why do we care so much about such things? We are, after all, usually law-abiding people, and the actions taken by Autodesk appear to be lawful and part of a long-term corporate policy, no matter how “unintended” they might be declared. Why has this action of Autodesk’s been judged by the community in a unanimously negative voice?
It comes, as it seems, not from our (mis)understanding of the law but from our ethics. The engineering/digital creation profession is also a society, and we have established knowledge about what is acceptable and what is not in the world of brands and copyright. That knowledge didn’t pop out of nowhere; it was forged by centuries of interpersonal professional relations in various contexts, by the experiences of individuals having to make decisions on what is bad and what is good.
The particularities of our attitude depend on circumstances. It is, for example, common knowledge that the post-Soviet engineering community has a much more permissive view of intellectual property abuse than the Western community. Yet I think there has always been some sort of baseline in this attitude: things that are accepted by the overwhelming majority of people and generally regarded as “non-profit”. Giving a music CD to your buddy, photocopying a scientific article, running a community website with the site’s product of choice in its name — these actions have never been criminalized by persons of good will. The community is not a law-maker, yet the implicit standard of ethics still comes with actual enforcement: violations lead to reputation damage and eventual shunning by other players in the community.
Now the logic of legal teams that service large incorporated entities is a totally different thing. When you are big enough to have adequate resources, the obscure and outdated IP laws become an instrument to promote your interests, and human interaction on which society has been grounded becomes replaced by a calculated procedure that was deemed to be beneficial and worth implementing.
The legal logic of IP legislation wants us to believe that if the world of society-defined norms does not correspond to the IP laws, then that world is broken and needs to be fixed. This assault on common sense has been surging in the wake of global growth of the digital market, and recent examples of it have become outright comical. In retaliation to this onslaught upon basic human rights, society has been responding with initiatives as radical as the notorious pirate parties, and people who theoretically should benefit from IP laws (like Valve’s Gabe Newell) often regard digital piracy as a service problem. This is the clash of cultures, of worlds, of interests. It is this very real danger to our world-of-acceptable-things — defined, for the most part of it, by good will and common sense — that produces our unease and anger.
As this struggle becomes more and more prevalent, a day will come when one should make a stand. The basic idea is simple. The world of society-defined IP ethic is not broken, but the world of shamelessly exploited IP law is beyond all hope of repair. The abuse of intellectual property law is a malignant thing, and it forces upon the professional society an alien, unjust, Orwellian ethic. A corporate entity that abuses IP law should fancy being called out as a threat to professional society.
First of all, I’d like to thank the Inforbix team (Lev, hello!) and especially Oleg for a great opportunity to implement Inforbix in my practice. This implementation is over now. However, while I was using Inforbix, I started to radically re-think my data management approach.
So, I’m looking forward to the news from Oleg and hoping that some day granularity, semantic search and user-friendly DM will enter AEC-domain…
In the circles of CAD/PLM fancy people, Oleg Shilovitsky (@olegshilovitsky) is a well-known expert and visionary. His latest creation is the cloud-based Inforbix, arguably one of the few high-profile startups in the old-school CAD/PLM business: a search engine and technical information fusion and representation service, marketed as an enterprise-level data management Google of sorts.
Most notable facts are as follows from the Autodesk press release:
1. This transaction is a classic acqui-hire which means at least part of the Inforbix team will likely continue to work in its new incarnation within Autodesk. The most notable human resource is, of course, Oleg himself. A good half of the press release revolves around his charismatic personality and his input in the common state-of-thought in PDM/PLM.
2. Having laid their hands on Inforbix’s technology, Autodesk gains a much-needed leverage in its crusade to convert users to the cloud-based software delivery. Inforbix know-hows, best described as ‘advanced visual Google for diverse and loosely structured enterprise data in MCAD and adjacent sectors’, will be incorporated in Autodesk’s own PLM/PDM leviathan, the PLM 360.
It is refreshing to see how even much-hyped software from the biggest vendor sometimes needs an infusion of technology from relatively minor but more advanced developers.
But wait, there is more in this news than a plain old boring press release.
The fact is, this hire comes with a patent that at least in part defines the idea behind the Inforbix engine. Oleg is the author of the user-interface-centered patent named ‘Method and system for fusing data’ (http://www.freshpatents.com/-dt20120510ptan20120117093.php) that details the mechanics of Inforbix’s procedure and the snippets that form its output. This intellectual property will apparently join Autodesk’s patent pool. This is important because today’s IT development world is basically a patent war hell, and any decent patent may and will be wielded as a potential weapon. I have already mentioned on Twitter how Autodesk is bolting after other large companies (whom we watch gradually transforming into patent trolls with frightening speed) to secure its defenses in intellectual rights.
What is my conclusion? As someone who found much interest and pleasure communicating with Oleg, I am amused to see his visionary writings (see blog, for example) not only contributing to the greater good of professional CAD/PLM communication, but actually influencing Autodesk’s strategic investments. I only hope that the new employer will not affect Oleg’s commitment to participation in the professional dialog and his exceptional role in Russian-speaking CAD community as one of those liaising it to the big and rich English-speaking CAD crowd.
We are used to living in a time when big CAD vendors are far away from the end user. For instance, I’m always trying to personalize a vendor’s product and find some mental relationship with the developer who conceived it. That helps me feel more comfortable, more confident. A Ruskinian longing for the handmade, the local and the personalized becomes more and more evident in modern architectural discourse; check, for instance, the recent book by Lars Spuybroek. What about the CAD vendors who provide new tooling for our digital architecture?
I was really impressed by Blake Courter’s works, which are tightly connected to art, math and software: installations, fabrication, scripting. Sometimes that sounds like buzzwords, but not in this case.
So I decided to get some comments from the author. Blake Courter is a co-founder of SpaceClaim, the 3D direct modeling CAD software. SpaceClaim’s main features are fascinating, but let’s focus on some emerging “art” features that Blake explores in his own product. Architects who use scripting (coding) as a creative device should appreciate this Q & A session. And thanks to Blake’s detailed answers we get some precious, insightful reflection on his work.
Q & A [but looks more like an essay]
Evgeny Shirinyan: We architects have always been interested in the relation between mathematics and art. How would you characterize your works?
Blake Courter: I’ve always loved math and art. I’ve also been fascinated with subjective questions such as “what is art?” or “what is math?”, and at times I’ve tried to explore those boundaries. If there’s any guiding theme, it has been to follow my curiosity. Everything else has been a side effect.
For example, in the past few years I created a series of developable shapes, pushing the limits of what could be made by folding flat patterns into 3D forms. At SpaceClaim, we were just starting to create an API, and I was very excited to try it out. I wanted to be able to rapid prototype using paper, so I started with a two-phased approach. The first was to discretize the surfaces into a triangle mesh using code similar to how one would generate an STL. Then I wrote code to unfold the planar faces into a flat pattern. I produced a holiday card that was a kit to build one of our demo parts, the nose cone from a ‘78 VW Beetle.
I later made a silly video featuring the Penrose “impossible” triangle, which is always a fun thing to model in 3D. (http://www.youtube.com/watch?v=0POQYrwHtG8). I ran a more precisely swept version through the developable code and, to my amazement, discovered that it could be unfolded out of fewer pieces than I expected. The colored faces can be flattened to the following shape, where the blue line down the middle is a crease:
This was a eureka moment. I then realized that any model that is locally a cone (technically, has zero Gaussian curvature) is developable. Also, you can cut a developable surface anywhere with a plane, mirror one half, and get another developable surface. I then wrote a special loft command that could make a tessellated developable surface between any two curves. All of a sudden, I had a little toolbox that could do things I never imagined.
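[Blake’s actual SpaceClaim API code is not shown in this interview; as an illustration of the unfolding step he describes, here is a minimal Python sketch, with invented function names, of the core operation: laying a 3D mesh triangle flat while preserving its edge lengths.]

```python
# Minimal sketch of the triangle-flattening step in mesh unfolding:
# each planar facet is mapped to 2D with all three edge lengths intact.
import math

def dist(a, b):
    """Euclidean distance between two points (2D or 3D tuples)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def flatten_triangle(p0, p1, p2):
    """Map a 3D triangle to 2D coordinates with the same edge lengths."""
    c = dist(p0, p1)   # edge p0-p1 is laid along the x axis
    b = dist(p0, p2)
    a = dist(p1, p2)
    # Law-of-cosines projection gives the third vertex's position:
    x = (c * c + b * b - a * a) / (2 * c)
    y = math.sqrt(max(b * b - x * x, 0.0))
    return (0.0, 0.0), (c, 0.0), (x, y)

# A tilted triangle in 3D unfolds into the plane:
q0, q1, q2 = flatten_triangle((0, 0, 0), (1, 0, 1), (0, 1, 1))
print(q0, q1, q2)
```

A full unfolder chains this step across shared edges of adjacent facets; since a developable (zero-Gaussian-curvature) surface has no angular defect, the facets tile the plane without gaps.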
I knew I hadn’t really discovered anything new. There is great software that does this stuff very well, without resorting to faceting, such as AeroHydro’s MultiSurf, which is used in ship and aerospace design. However, I saw the potential for doing things that I didn’t think anyone had done before. I felt like I was standing at the base of a mountain that no one had climbed before. Origami is incredibly well-studied, and I had found this anti-origami that seemed full of possibilities. I think there’s something about human nature that makes us want to explore uncharted territory, and one of the most difficult things in life is to do something that you feel like you can call your own. So I began the journey.
I decided to set about making a plastic version of the triangle with tabs that would assist the illusion. I went to Ponoko and ordered some flat patterns. The result worked out very well:
Then I decided to make a Klein bottle that was symmetric. The result was not a Klein bottle, but I liked the shape:
After that model, I got tired of using plastic rivets and started playing with interlocking tabs. I’ll fast forward a bit, but at COFES, in the middle of a debate about the role of simulation and CAD, I met Daniel Piker, the mastermind behind Kangaroo for Rhino: (http://www.grasshopper3d.com/group/kangaroo)
[Daniel's Kangaroo has had a great influence on the digital studies in architecture. Last summer Daniel visited the Branchpoint workshop as a tutor at Strelka Institute in Moscow - ES]
Daniel, an architect, showed me some of his own explorations, including the Lawson Klein bottle and his work with conformal maps. I was amazed how much math seemed to be standard conversation in architecture. I felt a little left behind.
The Lawson Klein was exactly what I had been looking for, but it was too hard for me at first, so I started with a figure-eight Klein bottle as I refined my tab designs and code. I realized that I wanted the tabs — the fasteners — to become part of the art, like the way that dovetails are used in fine woodworking. So I wrote code that treated the tabs as automata so they could evolve to be perfect. I spent perhaps a full month trying to get the tabs to converge. They never did, but they got within manufacturing tolerance, so I called it done. Then, assembling it was so difficult that I gave up twice. Of the developable series [check all the installations here - ES], it appears to be everybody’s favorite, including my own:
With every piece, I developed more and more automation. One of SpaceClaim’s partners (Marinus Meijers), who now makes a specialized version of SpaceClaim for shipbuilding, introduced me to Aptia MyNesting, which would do a nest for $5 a shot (http://www.aptiasolutions.com/). I wrote a special SpaceClaim exporter that would use the right colors for Ponoko, and the Ponoko support team went way out of their way to help me make sure my output would work with their Personal Factory pipeline. The Lawson Klein had so much detail that I needed to automate every bit of the modeling. Although previous designs involved some amount of hand modeling in SpaceClaim, with the Lawson Klein, nothing touched the model but code.
This result interested me in two ways. I was proud to have developed an application to turn arbitrary surfaces into sculpture. I thought it was an incredible demonstration of the SpaceClaim API that could apply to any specialized manufacturing process. Although I try to separate personal projects from work, my colleagues at SpaceClaim asked me to put an API webinar together showing it off. (I was actually very nervous about this mixture of work and fun, which is obvious if you watch the webinar: http://www.spaceclaim.com/en/Mkting/API-webinar-recorded.aspx)
The other result was more sublime. Because I had written every line of API code that was involved, the Lawson Klein felt completely hand-made, even though it was machine-created. For some reason this aspect is very important to me. Is the art the shape or the code? I don’t really know.
Somewhere in all this, I joined a collective with industrial space in Boston called Redtail. We’re a bunch of makers, musicians, and artists who share tools and sometimes collaborate on projects. Someone organizing an art exhibit as an adjunct to a local electronic music festival (http://togetherboston.com/) was passing through and asked if I’d be interested in showing my work. I had never even thought of my projects as even art, really. It was just my journey up the mountain. Obviously, I was flattered and thrilled. Then other folks saw my work and asked me to show them at other events. All of a sudden, I was considered an artist. It was weird, but I decided to embrace it.
I wanted to say all that to be clear that this was all an accident. I never set out to make art. I was just a curious explorer who has been inspired by many amazing people I met through my profession and personal life. Frankly, I know so little about art I don’t think I really deserve to call myself an “artist”, and I am a rank amateur mathematician. Whatever you want to call it, I’m having a lot of fun.
ES: SpaceClaim is one of the main software packages which delivers direct modeling and makes modeling an intuitive and user-friendly process. Would you say your interest/hobby has an impact on the development of SpaceClaim itself?
BC: I got into the CAD business right out of college because I loved design, machining, and computer graphics. I got the ball rolling with SpaceClaim because I wanted to make a mechanical CAD system that would be much easier to use than feature-based CAD. I grew up with a workshop at home (my father and brother are both extremely talented woodworkers), and I wanted to make CAD that a more hands-on audience could use. It’s fair to say that my upbringing might have had an impact there.
But as you can tell from the story, my art came out of access to SpaceClaim, not the other way around. I like to think that the right tool can enable someone to do something new and amazing, and I think that SpaceClaim is one of those tools. For me, it was hands-on, interactive geometry and the API that enabled this art.
ES: You are developing some of your installations in SpaceClaim using custom code (e.g. Developable). What does coding mean for you when you model a form?
BC: For me, the code is inseparable from the art, so I can’t imagine living without it. In fact, the API was a bit of a gateway for me. I was not one of the developers of SpaceClaim. In the earliest days, I was the idea and business guy. I partnered with David Taylor, one of the world’s greatest CAD architects, to make a prototype, and it would have been senseless for me to interfere with his incredibly fast development. I recruited Mike Payne, a founder of PTC and SolidWorks, to build and launch the actual product. When we shipped, it made sense for me to move to sales and marketing, and it was only because our API was so accessible that I was able to get off the ground with these projects.
Over time, I became a better and better software engineer. Sometimes I would ask David to look at my code to see if I was doing anything stupid. Eventually, it got to the point where the only things he would change were my naming conventions. That was a milestone.
This winter, I was reading the amazingly accessible “Visual Complex Analysis” by Tristan Needham, further developing my math skills while looking for inspiration for my next project. To get anywhere, I would need a math library for the extended complex plane with generalized line-circles and Mobius transformations — something not found in CAD APIs. I’d been using C# .NET, which is a beautiful and powerful environment, so I decided to start from scratch with C# and MonoDevelop on my home Ubuntu box so it could be cross-platform. For the first time, I created an empty project and just started typing. I had the basic kaleidoscope math working in little more than a weekend. Then I spent a few more weekends making it beautiful and fast. Finally, I showed it to some friends and they asked me to exhibit it at this year’s Together Festival, so I slapped a joystick on it to make it interactive. I showed it for a second time last night. It’s perfect for nightclub events. People really enjoy navigating hyperbolic space.
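[Blake’s C# library is not reproduced here; the following Python sketch illustrates the core object he mentions: a Mobius transformation z → (az + b)/(cz + d) on the extended complex plane, where composing two transformations amounts to multiplying their 2×2 coefficient matrices. Names and structure are illustrative, not his actual code.]

```python
# Sketch of a Mobius transformation on the extended complex plane.
# The point at infinity is represented by complex("inf").
INF = complex("inf")

class Mobius:
    def __init__(self, a, b, c, d):
        if a * d - b * c == 0:
            raise ValueError("degenerate transformation")
        self.a, self.b, self.c, self.d = a, b, c, d

    def __call__(self, z):
        a, b, c, d = self.a, self.b, self.c, self.d
        if z == INF:                     # infinity maps to a/c
            return a / c if c != 0 else INF
        den = c * z + d
        if den == 0:                     # pole maps to infinity
            return INF
        return (a * z + b) / den

    def __matmul__(self, other):
        """Composition (self after other) via 2x2 matrix product."""
        return Mobius(self.a * other.a + self.b * other.c,
                      self.a * other.b + self.b * other.d,
                      self.c * other.a + self.d * other.c,
                      self.c * other.b + self.d * other.d)

inversion = Mobius(0, 1, 1, 0)   # z -> 1/z
shift = Mobius(1, 1, 0, 1)       # z -> z + 1
print((shift @ inversion)(2))    # shift(inversion(2)) = 1/2 + 1 = 1.5
```

A kaleidoscope along these lines would repeatedly apply such transformations (and their inverses) to tile the Poincare disc.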
Coding probably isn’t for everyone, but now I write scraps of code on a daily basis. Now I can’t imagine living without the ability to programmatically create shape.
[Mark Burry is absolutely right about scripting as a thinking culture. See Scripting Cultures - worth reading - ES]
By the way, the code for the kaleidoscope is available if anyone wants to play with it. Not everything is perfect yet, and there are parts that are still pretty hacky. If anyone has any tips on how I could make it better, please say so!
(For your Russian audience, I should point out that the Poincare Disc is a model of what we usually call hyperbolic geometry, also known as Lobachevskian geometry after the Russian mathematician who did pioneering work in the field.)
ES: Could you comment on your works (Developable, Autosub Dome, Poincare Kaleidoscope)? What are the most important challenges you have met?
BC: The Autosub Dome was perhaps the biggest challenge, and perhaps it would be the most interesting to architects. Although I was the designer and engineer, it was a team project to make a performance space for aerialists and musicians who go to the amazing and inspiring Burning Man festival. The aerialists didn’t want a geodesic dome, because geodesic domes have problems when heavily loaded at one vertex. Also, the aerialists needed a certain amount of height, but the space available wouldn’t have allowed for a hemisphere that size. The design concept was inspired by the Gherkin in London (http://en.wikipedia.org/wiki/30_St_Mary_Axe), which I later learned is a diagrid construction.
I found myself designing a new kind of dome from first principles, with only textbook knowledge of structural engineering. If it wasn’t strong enough, it could break and my friends could die. Everybody was contributing their own money to the project, so it had to be inexpensive. There were many people involved with no clear leadership. Oh, and most of the team who was going to help fabricate it had never worked with metal before. Those were the challenges.
I had access to ANSYS, which is incredibly powerful simulation software. It appeared to work great, but I had no way to know how accurate my results were. I needed a second opinion, so I analyzed it by hand. But how the heck do you do that with so many funny angles? Well, I came up with a technique of using CAD to calculate basic reaction forces. You can do it in any 3D system, so I’ll share. It was shockingly easy.
Basically, I knew the forces that would be at each vertex. Starting at the top, I drew triangles to figure out how gravity would translate into compression. For example:
The red triangle is the force triangle for one of the diagonal beams. The bright red edge is vertical and represents the load on the vertex. It’s useful to make it a convenient length (say 1 m). Therefore, if I know the downward force at the vertex, the reaction force is that load times the length of the green edge (divided by 1 m and divided by two because there are two beams going down). Working my way down the dome, I could simply write these values into Excel (via an add-in, naturally) and choose beams that gave me the right factor of safety without buckling.
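The bookkeeping behind that force triangle can be sketched in a few lines. This is only an illustration of the scaling rule Blake describes, not his actual spreadsheet; the numeric values in the example are invented.

```python
# Sketch of the force-triangle scaling described above: the vertical edge
# of the triangle is drawn 1 m long, so the measured length of the beam's
# edge directly scales the vertex load into a per-beam member force.
# The example loads and lengths below are hypothetical.

def beam_force(vertex_load, beam_edge_len_m, vertical_edge_len_m=1.0,
               beams_sharing_load=2):
    """Axial force in one diagonal beam below a loaded vertex."""
    return (vertex_load * beam_edge_len_m / vertical_edge_len_m
            / beams_sharing_load)

# A 500 kg vertex load with the beam edge measured at 1.4 m in the
# force triangle gives the compression carried by each of the two beams:
print(beam_force(500, 1.4))  # 350.0
```

Working down the dome vertex by vertex, these per-beam forces are exactly the values Blake says he wrote into Excel to size the members against buckling.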
[Structural engineering is neat. We never managed to include structural studies in our designs. - ES]
I set up a production line in a friend’s basement and back yard, training people at different stations how to do each job. That took a weekend. Then we set aside the next weekend to assemble it. My plan for assembly from the bottom up didn’t work at all, and some guys just started building it from the top down, with others lifting each level. It warped and twisted, but when all the pieces were in place it was perfect. Then we tested the top vertex, which was supposed to hold a few thousand kilograms for the aerialists’ rigging. It inverted under the weight of merely two people. Failure. My first-order analysis was clearly wrong-headed, and the tolerances to which we machined it were way too sloppy. All these people had put their money and time into my engineering, which didn’t look so good right then.
[That reminded me our fabrication process at PARALAB, but we are more low-tech guys. - ES]
It was all fine in the end. We rigged it to the next set of vertices down, which lost a little height but was super strong. It has been to Burning Man three times and has been assembled twice for events in Boston. This year, we are planning on raising it up two levels, which should further accentuate the ellipsoid shape. There are a bunch of pictures here, including some documentation and a visual assembly guide.
Autosub Dome construction
Oh, I’ll end with one more thought. With the diagrid design, it was clear that the diagonal members would be in compression and the horizontal ones in tension. That means we could use much thinner material for the horizontal pieces. Geodesic domes are beautiful, but their material planning doesn’t usually take gravity and force distribution into account. So I’ve been telling everybody that this dome is greener than geodesics because it makes more efficient use of material. I’ll leave it to you architects to figure out whether that is actually true!
ES: Blake, thank you for such a detailed commentary! I think you can start to write a book!
I do think that such personal activity from a CAD developer, along with a creative approach, is very precious, not only to end customers but to digital culture itself. I really liked this passage from Blake:
“Because I had written every line of API code that was involved, the Lawson Klein felt completely hand-made, even though it was machine-created. For some reason this aspect is very important to me. Is the art the shape or the code? I don’t really know.“
This video from Blake is really amazing: aggregate your tweets in real time inside SpaceClaim!
Authored by Alexander Bausk, live from Scottsdale, Arizona.
Today is the first day of the intensely insightful COFES congress on the future of engineering software. Hello, I am Alex, officially the most socially awkward guy on these premises.
Today I’ve been connecting with what is possibly the most industry-diverse crowd I have ever met in my career. Shamelessly excellent sessions were given by Inforbix’s Vic Sanchez (basically about mining data in a structured, deep-looking manner from a logically diverse design environment) and Michelle Baucher – not many people attended this “single source of truth vs. federated data storage” session, but it proved incredibly insightful to me.
Alan Kay’s appearance at the keynote speech provided material for thought for a long time to come. I will elaborate on this as soon as I get my hands on a real PC, not this heap of outdated technology.
My most important goal at COFES is largely complete: there is enough evidence of astronomically important innovation in AEC and technical authoring in general. Stay tuned for details!
Please refer to the Writandraw COFES workpage for quick access to structured information about COFES and Writandraw involvement in it.
If you’re at COFES right now, my US phone is 407-668-69-17; feel free to find me and chat about workflow and data automation, AEC in general and structural analysis in particular.
I’d like to share some thoughts on my experience with Inforbix. Inforbix is actually developed for MCAD applications, and at first glance the AEC domain isn’t relevant here. But some of my ideas for a possible solution for a small bureau’s design process turned out rather successful…
Well, I’m not a big fan of “modernist” Autodesk Vault or anything like it. The multi-CAD zoo is an old friend of mine. I’m an interior designer – cafes, restaurants, flats. Each project folder contains a huge mess of files, and even when structured, it’s hard to manage. A typical project includes many products such as lighting, sanitary ware and furniture, and that multi-format data (it becomes “multi” the moment you download the product files from a manufacturer’s site) is scattered across different locations. Data duplicates, different file formats and so on – and all of that should be reusable, in my humble opinion.
According to the Inforbix developers and their brilliant videos on YouTube, Inforbix manages a huge mass of disparate files, performs deep data extraction and links relevant pieces of data with semantic mechanisms. Semantic linkages provide the context; hence information, and then knowledge, emerge. The Inforbix ideology relies on non-hierarchical principles and data granularity – a very promising way to manage and diagram information complexity.
First of all, I collected a certain number of project folders (each project folder contains circa 1,000 files), deleted unnecessary files and started to explore that content via Inforbix. Below I posted some highlights of that data exploration. Frankly speaking, I mainly focused on the SketchUp workflow because of its similarity with assemblies.
Starting to slice all the data by category. SketchUp components and AutoCAD blocks are the most numerous categories
All files on my drive by category
The component list in project folder #062, produced by narrowing the search from more than 11,000 items down to 48
All the SketchUp components that are located in the project folder #062
Here is an amazing diagram: my students performed the same task and modeled the terrain in SketchUp. But everyone did it in their own way. Some of them made their models too large…
SketchUp Documents - the same task, different students
All the AutoCAD blocks related to “heating”. It’s evident that the same blocks are located in different files. You can switch from Table view to Search view and navigate through Inforbix snippets to the specific files.
All the AutoCAD blocks related to "heating"
Well, here is a short overview of my experiment
My "customer" video - click it
Hence, Inforbix can “slice and dice” the data of your bureau. What conclusions can I draw?
First of all, data granularity – that’s what we architects really need. Naming, metadata, “good” content (no mess inside the file) – all of that helps to find data quickly. Here the bottom-up approach works well: “good” content means easy search.
Another point in this way of data management is selecting the right filters to narrow the search results and to diagram the data in meaningful charts. And here, I suppose, Inforbix will develop further diagramming tools. First, create “good” content; second, apply the right filters.
Inforbix is a bit geeky for an architect. We live in a world where information goes through Google: one click, and the answer is shown on the first page. You don’t need to filter anything. Inforbix’s approach assumes that you will narrow the search and experiment with reports. Sometimes I forget about the “Search within results” option. Such a sophisticated way to search and sort data is a bit unfamiliar to an architect or a designer. It’s critical to understand the importance of metadata, and if you know what the “Is Internal” option (a SketchUp component attribute) means, it’s possible to perform a kind of “deep” CAD management. For example, I search for all the SketchUp components that have the “Is Internal” value set to “false” (which means the component was imported from “outside”), I add a “Path” column to the report, and thus I can get all the component locations on my drive.
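Inforbix’s internals aren’t public, so as a plain illustration of that query here is a generic Python sketch: filter component records by an “is internal” flag and report their paths. The record layout and file paths are invented for the example; they are not Inforbix’s actual data model.

```python
# Generic sketch of the metadata query described above. The records are
# hypothetical stand-ins for indexed SketchUp components, not Inforbix's
# real data model.
components = [
    {"name": "chair_01",    "is_internal": True,  "path": "/projects/062/model.skp"},
    {"name": "lamp_vendor", "is_internal": False, "path": "/library/lighting/lamp_vendor.skp"},
    {"name": "sofa_vendor", "is_internal": False, "path": "/downloads/sofa_vendor.skp"},
]

# "Is Internal" == false means the component was imported from outside,
# so listing those records with a Path column locates all the external
# content on the drive.
external = [(c["name"], c["path"]) for c in components if not c["is_internal"]]
for name, path in external:
    print(name, path)
```

The point of the exercise is the same as in the Inforbix report: one metadata attribute plus one extra column turns a flat file dump into an inventory of externally sourced content.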
Finally, after “slicing” and structuring your data, some sort of knowledge should emerge. However, there are certain nuances. For instance, I gather a data set in an Inforbix table, and then I’d like to manage the items in an arbitrary way – and here Inforbix gets “rigid”. It seems to me that Inforbix Docs will solve that issue. Tags and virtual folders become very handy and can provide a good basis for further data reuse – as we can see in Picasa’s case.
On one hand, we have complex data sets and file features; on the other hand, human activities, represented by “Last Saved By”, “Author”, “Comments” and so on. The biggest challenge, I suppose, is to connect these two realms in a meaningful way.