Weekend recap
2009-09-21 17:14:33.792444+00 by
Dan Lyke
20 comments
Charlene and I went camping and stargazing up at Liberty Glen near Lake Sonoma.
Yesterday we came home, and I did the usual household stuff, and then I helped her write some summaries of the lives of famous astronomers.
I went to elementary school at The Hawthorne Valley Waldorf School in Harlemville, New York. Years later, I got into reading Steve Talbott's NetFuture newsletter. At some point he said "if you like this, consider sending some money to The Nature Institute," so I did, and as they started sending me newsletters I realized that they were part of the community around the school in which I grew up.
One of the things that struck me about the history of astronomy as Charlene and I went through the various astronomers last night was how long a bad idea could persist. Wrong pronouncements on how the world should be by philosophers like Aristotle, with data fudged to fit their models, managed to trump millennia of observations to the contrary. Aristarchus pointed out that the solar system was probably heliocentric, and that the reason we didn't observe parallax in the stars was that they were far enough away that he didn't have the tools to see shifts that small, and he was widely quoted by others as saying this. Yet we still waited fifteen hundred years to credit Kepler, Copernicus and Galileo with this observation, because they, particularly Galileo, were strong enough to stand up to the established thinking and say "no".
And, of course, Galileo got taken to task by his contemporaries as much for being downright nasty about calling them ignorant as for doing so in the first place. The whine of the times was "well, if he hadn't hurt our feelings...".
Anyway, this ties in to The Nature Institute and my elementary school experience because those folks are proponents of something called "Goethean Science". In getting this quick overview of astronomical history, I'm getting a good reminder of why we should start with reality and work forward, rather than starting with a philosopher's ideas and working backwards.
comments in ascending chronological order (reverse):
#Comment Re: made: 2009-09-21 18:06:50.330461+00 by:
ebradway
Did you snap that shot of the Milky Way with just a long exposure? Any
filtering? I'm going to Canyonlands in Utah next week. Might be able to pull off
some more.
I like the shot of the rattlesnake. Pretty little critter (in a picture, not
under my sleeping bag ;)
As for philosopher's ideas and reality - you have to realize that we are always
working with representations of reality - not reality itself. As soon as we
simplify reality to "laws" like E=mc^2 or F=ma, we are substituting
representations for reality but often make the mistake of equating the
representation with reality. This happens so naturally with cartography that
it's only just recently (past 20-30 years) been realized how much of a problem
it is. The map is not the territory...
However, the map (or representation like F=ma), creates a territory. Humans are
territorial by nature. We ascribe a value to our favored representations and
defend that territory viciously. Whether the territorial lines are drawn between
classical vs. quantum mechanics, Christianity vs. Islam, Pakistan vs. India,
Mets vs. Yankees, it's our nature to fight to maintain (or extend) our
territory. It's not in our nature to dissolve a territory for sake of another,
no matter what evidence is behind it.
That's just being human...
#Comment Re: made: 2009-09-21 18:32:02.518959+00 by:
Dan Lyke
There are two shots there, one is just ISO 1600, F2.8 and 15 seconds, the other is a composite of 6 of those, just added to each other (Copy and paste them all into layers in Gimp, set the layer mode to additive, shift 'em around 'til the stars line up). Nothing cosmic, I really need to take the SLR and a big heavy tripod next time.
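The GIMP layer trick described above amounts to summing the aligned frames and clipping at white. A minimal NumPy sketch of that additive stacking (assuming the frames have already been shifted so the stars line up, and are 8-bit arrays; the function name is mine, not a GIMP API):

```python
import numpy as np

def stack_additive(frames):
    """Additively combine aligned exposures, clipping at full white.

    Mimics GIMP's 'Addition' layer mode: pixel values from each frame
    are summed, so faint stars reinforce each other, and the sum is
    clipped so highlights saturate at 255 instead of wrapping around.
    """
    total = np.zeros(frames[0].shape, dtype=np.uint32)
    for f in frames:
        total += f.astype(np.uint32)
    return np.clip(total, 0, 255).astype(np.uint8)

# Hypothetical usage with six aligned 8-bit frames loaded elsewhere:
# stacked = stack_additive([frame1, frame2, frame3, frame4, frame5, frame6])
```

The wide `uint32` accumulator matters: summing six `uint8` frames directly would silently wrap at 256 rather than saturate.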
If you've got a digital SLR or a smarter point-n-shoot with bulb mode you should be able to do star trails and everything.
#Comment Re: made: 2009-09-21 19:37:12.392819+00 by:
Dan Lyke
Eric, I'm (obviously) having trouble refining the ideas that this exercise is bringing up, but I'm getting a sense of how much we lose because we seek aesthetically attractive solutions. Galileo was roundly criticized for his lack of politeness. Two thousand years of deification of Aristotle's praise of the beauty of the spheres pounded down those who could see.
Makes a romantic wanna turn positively punk sometimes.
#Comment Re: made: 2009-09-21 21:21:07.537247+00 by:
ebradway
Aesthetically attractive solutions are a big part of the problem. Everyone wants
to come up with simple rules: 1=1, the sky is blue, the Sun is at the center of
the Solar System, the Earth is round.
Of these statements, 1=1 is probably the only definite one, but that's because it's
entirely a human construct. Math is entirely arbitrary - because it is
arbitrary, it can be perfectly consistent internally.
Is the Sun at the center of the solar system? Well, the planets do revolve
around it - but their path is not circular, so there is no true "center". The
same problem exists with the round Earth. But maybe we can blame the error here
on semantics. We assume, when we talk about "center" or "round", we are dealing
with circles or spheres. In contrast to a flat earth or earth-centered universe,
the failed model is an improvement.
Reality is very, very messy. The closer you get to reality, the less
abstraction, the more things seem messy. In cartography, we like to make maps at
certain scales because geographic phenomena can be represented in the least
messy manner at that scale.
#Comment Re: made: 2009-09-21 22:24:03.694629+00 by:
Dan Lyke
I admit I giggled at "...because it is arbitrary, it can be perfectly consistent internally."
There was... uh... this guy named Kurt Gödel... and... uh...
I mean beyond his formal proofs, just the way that each new operator in math leads to a new class of numbers kind of undermines that statement. You start with positive integers and throw in addition, but that needs an inverse operator, so you add subtraction, and that means you have to have negative numbers; and addition really needs repetition, so you add multiplication, which needs its own inverse, division, which leads to rational numbers; but multiplication also leads to exponents, which lead to real numbers, which lead to imaginary numbers, and... well...
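That progression can be compressed into one line of (informal) LaTeX - a sketch of the usual construction, not a formal proof:

```latex
% Each inverse operation forces a bigger number system:
\mathbb{N} \xrightarrow{\; b - a \;} \mathbb{Z}
           \xrightarrow{\; b / a \;} \mathbb{Q}
           \xrightarrow{\; \sqrt{2},\ \lim \;} \mathbb{R}
           \xrightarrow{\; \sqrt{-1} \;} \mathbb{C}
```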
There's always a little inconsistent edge in math which tears open a new dimension when you start to explore it.
And "1 = 1" is definite only because we've said that the first "1" is actually a dereference to a concept that's also pointed to by the second "1"...
And, yeah, it's not like the Sun is actually "the center" of the solar system, everything is an abstraction, but many of the breakthroughs which achieved popular recognition after the age of Kepler and Galileo were being published and toyed with BCE.
#Comment Re: made: 2009-09-22 20:12:44.679288+00 by:
ebradway
Yes, Real number math has inconsistencies - infinity, 1/0, irrationals, etc. -
but math also has constructs that are much simpler. 1 = 1 or identity (or better
put x * 1 = x, where there's an operation, *, and a value, 1, where that
statement is true) is the kind of internal consistency of abstraction that math
depends on. Given x * 1 = x, you can substitute x * 1 for x. You commonly also
have x + 0 = x, which means x * 1 is a perfect representation of x + 0.
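The substitution step here is just transitivity of equality, which one line of (informal) LaTeX makes explicit:

```latex
x \cdot 1 = x \quad\text{and}\quad x + 0 = x
\;\Longrightarrow\; x \cdot 1 = x + 0
```

so either expression may replace the other anywhere it appears, which is the internal consistency being claimed.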
Outside of math, representations are far from perfect. Territories develop
around the imperfections in those representations - especially in the name of
elegance. Ever have a particularly elegant piece of code that, no matter how you
twisted it, didn't quite fit the problem it was supposed to solve - especially if
you weren't given a good idea of what the problem was in the first place? And
then fought to retain the elegance rather than throw it out?
The history of science is as much a matter of personalities and sociology as it
is understanding reality.
#Comment Re: made: 2009-09-22 21:22:25.416642+00 by:
Dan Lyke
Agreed. And I think this also ties in to some realizations I've had about spiritual paths and fine dining recently.
So, this brings me to a question I've been meaning to ask you for a while...
My impression of postmodernism is derived from silly idiocies from Jacques Derrida and giggling over postmodern critical theory with Alan Sokal and snickering when Republicans embrace post-modern political correctness, but I'm more than willing to accept that these are fringe cases, and much like making fun of string theorists ignores that there's a lot of good post-quantum physics happening out there, that there are elements of post-modernism that I should educate myself about.
So, have you got a suggestion for a good intro to post-modern theory, something past The Structure of Scientific Revolutions, that'd be a good read?
#Comment Re: made: 2009-09-23 03:15:12.89818+00 by:
concept14
I found Postmodernism for Beginners to be a good start, but maybe you want something for intermediates rather than beginners.
#Comment Re: made: 2009-09-23 03:15:31.538894+00 by:
concept14
I meant to link to http://www.amazon.com/Postmode...F8&s=books&qid=1253676112&sr=8-1
#Comment Re: made: 2009-09-23 07:02:15.473762+00 by:
ebradway
<grin> One of my favorite questions for serious post-modernists is "what is the
difference between epistemology and paradigms?"
I think you may have answered the question above. Paradigms are an abstraction
about a theory - or maybe they are more deeply ingrained theories. Epistemology
deals with the way individuals understand and interact with the world.
I'll have to look around for a good place to start. Post-modernism is best
approached from an application rather than pure theory. It is emergent rather than
constructed.
#Comment Re: made: 2009-09-23 07:44:10.755207+00 by:
ebradway
This book from Blackwell looks like a good primer. Half the book is modern philosophy - most of which you've probably already read (including Kuhn). The second half covers post-modernism. I see Derrida and Foucault. I also like Donna Haraway - who might provide an interesting place to start, especially since her seminal "Cyborg Manifesto" is available online.
#Comment Re: made: 2009-09-23 11:43:01.106735+00 by:
topspin
Outside of math, representations are far from perfect.
Outside of math, nothing is written in stone.
Inside of math, everyone's a geologist.
When it comes to philosophy, I prefer Groucho as a starting point. :-)
#Comment Re: made: 2009-09-23 15:45:41.072315+00 by:
Dan Lyke
Last night I checked messages and slept on "the difference between epistemology and paradigm", thinking "well, that's obvious, no?", but couldn't make the difference twitter-sized succinct. This morning I re-read your message and... yeah, the difference is constructed versus emergent. At least in my usage of the words.
I will start with the Haraway essay, although that Postmodernism for Beginners looks like fun!
Topspin, yeah, I'm a Marxist now. I've kind of dropped the Neilism, mostly because he went a little too much into the noise-rock for my tastes.
#Comment Re: made: 2009-09-23 16:00:59.017943+00 by:
Dan Lyke
(And now I'm reading Haraway and I'm realizing that declarative sentences without explanation or example written in breathless Dan Brown-like purple prose are part of what drove me to mock postmodernism... but I'll muddle through.)
#Comment Re: made: 2009-09-23 16:44:58.617222+00 by:
ebradway
Another good place to start, especially for you, is Hubert Dreyfus' What
Computers Still Can't Do. While it's not directly a post-modern text,
it does utilize some of the philosophical foundations of post-modernism in its
critique of the AI movement.
I glanced over the Slate article about Sokal again. It appears that he's simply
reiterating the classic debate between the continental and analytic
schools of philosophy.
Personally, I've been trying to learn how to write in the continental style. I
find the analytic style almost too easy - rather formulaic and derivative (pun
not intended). Writing a convincing argument in the continental style requires
greater skill.
#Comment Re: made: 2009-09-23 19:50:39.284531+00 by:
Dan Lyke
Yeah, I read Dreyfus back in the '80s, I remember dismissing him for some of his arguments, but I don't remember the details.
The problem I have with the Haraway writing style (is this what you mean by "the continental style"?) is that it ends up reading like Joyce, or like Robert Anton Wilson's parody of Joyce. You have these definitions that get introduced, and they're not how you'd normally define the word, but if you read through the whole document linearly the last page has a completely different meaning than if you read it in isolation. Great for experimental fiction - I remember rolling on the floor laughing at whatever meaning the "miraculous flying Rehnquist" eventually had - but only reasonable for communicating the idea if what you're trying to communicate is that fairly simple concept about the limits of mutability of language.
However, trying to dissect the document, or to talk with other people about it, quickly (and I believe deliberately) runs into the whole "language has no meaning because it's all metaphor" fallacy. A cute stunt, not great for communicating ideas.
#Comment Re: made: 2009-09-23 22:15:45.164095+00 by:
ebradway
Haraway's style, in the Cyborg Manifesto, is not really what I mean - it is
rather obtuse. Dreyfus released a new edition in 1992 - the "still" part added
to the title. I kind of wish he'd make a new revision.
Probably a major divide between analytic and continental philosophy is that the
analytics try to distance science from the scientist. Whereas the continentals
look at science as a human construction subject to all the "flaws" that are
endemic in any human endeavor: territoriality, misrepresentation,
miscommunication, etc. This is why post-modernists sometimes come off as
attacking the person and not the idea - they do not isolate the two.
My best definition of post-modernism is simply that distancing science from the
scientist does not always yield the best results. Results which are not rooted
in science can be just as valid as results generated by science. We can see this
in the arguments against GMO crops. By all means, GMO crops produce greater
yields. But it's impossible to separate the benefit of that scientific result
from the negatives of IP law enforcement by Monsanto and contamination by GMO
crops of even the most remote fields in North America. You can also argue that
the scientists at the USDA who declared GMO crops safe are influenced by the
likes of Monsanto. A "continental" argument could be made against GMO crops
without having to resort to science to establish the argument.
This is essentially what Dreyfus does with early AI research. So much of early
AI predictions were made on extremely synthetic examples in research. An AI
project may have successfully managed tic-tac-toe and then claimed that managing
chess was just a simple extension of the same ideas. Deep Blue beating Kasparov
is one of the reasons I wish Dreyfus would revisit the thesis.
The other reason is Google. I believe that Google is making Dreyfus' argument
for him. They've launched probably the most powerful computer system in the
world and used the most advanced methods to attempt to make sense of human
communication. Where they succeed is in producing a predictable result that
humans can leverage. But I believe that, given current computing paradigms,
machine reasoning is asymptotic at around 60% of "normal" human understanding.
If Google establishes an asymptotic limit to machine reasoning, then it's easy
to see that the modernist model of human reasoning is essentially flawed. That
human reasoning, itself, cannot be deconstructed and recreated by science. Which
is Dreyfus' main point.
#Comment Re: made: 2009-09-23 23:00:57.31985+00 by:
Dan Lyke
Ahhh, yes, that's bringing back memories of why I scoffed at Dreyfus... on the "Google peaking out at 60% of human understanding" thing, coincidentally I just closed this browser tab: Everything Sysadmin: The internet is just getting started, which, among other things, points out that Google is only 10 years old, less than 25% of the age of the internet.
#Comment Re: made: 2009-09-24 04:01:53.192798+00 by:
ebradway
Someone beat me to the punch - pointing out that WebCrawler and AltaVista provided
web searching before Google. Same with Flash predating AJAX. But that was a fun
read and the video from the SF Examiner was really fun to watch!
But it's not Google's relationship to the Internet that I'm talking about. It's
Google's relationship to artificial intelligence. AI is the ultimate modernist
fantasy. Creating a machine that thinks - encapsulating human cognition in code
and electronics. Dreyfus tries to point out that it's really just a conceit. All
of the attempts to simplify human cognition to the degree that it can be
implemented leave out more than they include.
Maybe one day we'll manage to create a thinking machine. Cogito ergo sum.
Then we can ask the computer what it thinks of post modernism!
#Comment Re: made: 2009-09-24 21:51:11.496031+00 by:
ebradway
Had to share this - from a paper I'm reading for my fuzzy sets class:
"As far as the laws of mathematics refer to reality, they are not
certain, and as far as they are certain, they do not refer to
reality."
That was Albert Einstein, as quoted by M. Black in 1937. And it's exactly the point I
was trying to make about math being internally consistent solely because it's an
abstraction from reality.