Philophtuisis

Wednesday, February 6th, 2002 02:55 pm
[personal profile] flwyd
What's always on fire and full of spears?

A dead baby that's on fire and full of spears.

Thus derive the minds of sleep-deprived philosophers. That particular nugget comes from Paul, whose knack for semi-surreal philosophical and social commentary by example is a highlight of each winter philosophy retreat. I, of course, wasn't particularly intelligent, so from Thursday night through Sunday morning I had a combined 14 hours of sleep. The time in between, however, was stellar.

Friday night we arrived, put food in the oven, and started to make a fire. The organizer discovered that we'd left the hatchet in her car in town, so we found out how many philosophers it takes to improvise woodcutting with a Leatherman and a fire poker. Mike Huemer gave a presentation, fun as always, about why political discourse is epistemically irrational. This ties into his arguments that people shouldn't vote and that critical thinking is epistemically irrational. We had other fun discussions on Friday evening, but they're rather fuzzy now. Later that night, I got several folks to play Lord of the Fries and Fluxx. They were mostly too tired to figure out what was going on, but they had enough fun to start up a game of Fluxx with nine or so players the next night.

Saturday began with a presentation on Carvaka philosophy, which was conveniently the same lecture I had attended on Friday morning. We then went our semi-separate ways: some folks went cross-country skiing, others sat around the cabin, and the rest (like me) went snowshoeing. It was a completely gorgeous day, and I managed to figure out how to use my snowshoes such that they only came off once, and I only fell once. We then went sledding, something I've done only about once a year over the last few years. With two snow dishes and a piece of cardboard (and later just two snow dishes), we got six people down the hill in one train. We also managed four people on one snow dish, everyone facing in. Big fun. Upon our return, we started the cooking process and entered into more pfun pphilosophical conversations. I had a discussion with Prof. Shields about the Chinese Room, insights from which I'll include here.

The argument of the Chinese Room, as I now understand it, is that the Turing Test labels as thinking some things which do not think. Prof. Shields' explanation was that purely syntactic processes are insufficient for consciousness. A clearer way to state this, though, is that a program's internals cannot be revealed purely by examining its I/O table.
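To make that concrete, here is a sketch of my own (the function names and the 0-to-999 domain are arbitrary choices, not anything from the retreat): two programs can share an I/O table while their internals have nothing in common, so the table alone can't tell you how either of them works.

    # Two "programs" with identical I/O behavior but completely different internals.

    def parity_by_arithmetic(n: int) -> str:
        """Answer 'even' or 'odd' by actually computing n mod 2."""
        return "even" if n % 2 == 0 else "odd"

    # A brute-force lookup table covering the same domain of inputs.
    PARITY_TABLE = {n: ("even" if n % 2 == 0 else "odd") for n in range(1000)}

    def parity_by_lookup(n: int) -> str:
        """Answer 'even' or 'odd' by pure table lookup, with no arithmetic at all."""
        return PARITY_TABLE[n]

    # The two agree on every input in the table's domain, so nothing in the
    # I/O table reveals which mechanism produced the answers.
    assert all(parity_by_arithmetic(n) == parity_by_lookup(n) for n in range(1000))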

However, there are a few problems with the details of the experiment. First, after an extended time of using the grammar provided to manipulate symbols, the person in the Chinese Room would know Chinese, in the sense that she understands the grammar but lacks any linkage between the symbols and other representations of what they symbolize. She thus wouldn't be able to translate from English to Chinese, nor would she be able to answer the question "What's this a picture of?" However, for the room to pass the Turing Test, it would need to answer questions like "What color is an apple?" or (if the questioner is smart) "Describe what it feels like to urinate." The latter may be difficult for a computer to answer, and I'm not sure we would want it to; I don't know what it feels like to ovulate, but that's okay. The first sort of fact, though, is the sort of thing an intelligent program ought to know about. So the room has not only rules describing the grammar of Chinese, but also an impressive database of facts in Chinese.

Now, suppose a person has lacked all sensation of the world from birth but can communicate via psychic messages. This person is taught English and lots of facts about the world, none of which connect with mental representations aside from those fed to him, since he has no sense experience from which to build ideas. Surely this person knows English, and when asked a question can think about it in coming up with his answer. But this is just the Chinese Room I described, with English instead of Chinese and without a homunculus.
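In programmer's terms (a toy sketch of my own, not part of Searle's original setup), the room as I've described it, and the psychic-message fellow too, amount to canned rules plus a fact table: the program can answer worldly questions even though none of its symbols are hooked up to any experience of the things they name.

    # A toy "room": a fact table stands in for the rule book plus database.
    # The questions and answers here are made up purely for illustration.
    FACTS = {
        "what color is an apple?": "Apples are usually red, green, or yellow.",
        "how many legs does a spider have?": "A spider has eight legs.",
    }

    def room_reply(question: str) -> str:
        """Look the question up in the fact table; deflect anything unfamiliar."""
        return FACTS.get(question.strip().lower(), "Could you rephrase that?")

    # The room answers correctly without any symbol being tied to a sensation.
    print(room_reply("What color is an apple?"))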

If I/O is insufficient for determining whether an entity thinks, solipsism is the only sure choice -- all we have to go on in determining whether other people think is their actions.
The Turing Test is inadequate for a different, and much more important, reason. Why would we expect an intelligence housed in nonliving material to come up with the same observations as one housed in meat? Surely they will solve many abstract problems in similar ways, but a lot of work would have to go into a computer for it to know the subjective experience of arthritis, and a person would have trouble experiencing the sorts of things that can be given to a computer. Biological computers are good at some tasks and bad at others. Electronic computers are good and bad at different sets of things. It's a feature, not a failure, that computers should be intelligent in a different way, with different approaches than a person's. We already know how to make lots of biological intelligences, so why duplicate them?