Mark Dominus on Sat, 28 Jun 2003 12:25:04 -0400
> What about global variables, file scoped variables, function
> prototypes (a la C/C++) and such? Would each have its own wiki page?

I'm not sure; I think it would have to be tried. But I do have some
general thoughts.

1. Within reason, it should be possible to put code on one page or on
   several pages. One of the main benefits of a wiki is that when a
   page gets too complicated, it's easy to split its content across
   several subpages.

2. In general, it seems safer to me to say that

3. The mapping from pages to files shouldn't be important.
   Ultimately, new programming languages will arise in which the wiki
   is itself the source code, and there isn't any 'intermediate
   representation' in the form of code files in the filesystem. The
   fact that languages like C and Perl depend on files, and that
   these files affect the semantics of the language, shouldn't be
   important. Of course, as long as people *are* using these
   languages, the wiki will have to deal with those semantics
   somehow. But I think it's important to try to abstract away from
   them as much as possible.

4. The wiki is capable of presenting a more abstract and high-level
   division of functionality than (for example) C provides with its
   file scoping. Imagine you have a collection of functions for
   handling a job queue. These functions have some private helper
   functions that nobody else should be able to call, and some shared
   private variables that nobody else should be able to see. In C,
   you accomplish this by putting all the job queue functions in a
   single file, with the private functions and data in the same file
   but labeled 'static'. (There's a short C sketch of this convention
   just after this list.) This has some unfortunate side effects. You
   can't change and recompile just one job queue function; you have
   to recompile the whole file. And you can't have a function that is
   in the job queue subsystem *and* in the web client subsystem,
   because the stuff the web client functions need is all off in a
   different file. At that point you have to make the web client
   stuff global and let the job queue functions see it, or vice
   versa, or else you have to make both global and let everyone see
   all of it.

   The wiki could represent this sort of privacy and sharing
   information explicitly, instead of depending on the same-file /
   different-file convention. A page of code could have an associated
   attribute that says "Code on this page is part of the job queue
   subsystem" or "Code on this page is private to the job queue
   subsystem." Then, when the wiki was turned into code, the
   translation would use whatever features of the underlying language
   were necessary to implement the sharing restrictions you had set
   up: putting things in separate files, adding #include or extern
   declarations, or whatever the language supports.

   Even if the underlying language isn't strong enough to support the
   kind of sharing restrictions that the programmers want, the wiki
   could be made to enforce them itself. For example, it could notice
   that function X makes use of the 'examine_job_queue' function and
   raise a complaint: "'examine_job_queue' is private to the job
   queue subsystem, and 'X' isn't in that subsystem", regardless of
   whether there was any way to produce code in the underlying source
   language that would have caused the compiler to emit an analogous
   message.
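To make the convention in point 4 concrete, here is a minimal C
sketch; the file and function names (jobqueue.c, examine_job_queue,
job_queue_size) are just the invented ones from the example above:

    /* jobqueue.c -- in C, the whole job queue subsystem has to live
     * in this one file, because 'static' (file scope) is the only
     * mechanism the language offers for hiding functions and data. */

    #include <stddef.h>

    struct job { struct job *next; };

    /* Shared private data: visible only inside this file. */
    static struct job *queue_head = NULL;
    static int queue_length = 0;

    /* Private helper: nothing outside this file can call it. */
    static int examine_job_queue(void)
    {
        return queue_head == NULL ? 0 : queue_length;
    }

    /* Public interface: visible to every other file in the program. */
    int job_queue_size(void)
    {
        return examine_job_queue();
    }

A web client function in some other file can call job_queue_size(),
but there is no way for it to see queue_head or call
examine_job_queue() short of deleting 'static' and exposing both to
the entire program -- the all-or-nothing sharing problem described
above.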
> > Function calls would be hotlinked to their corresponding pages in
> > the wiki. Documentation would be more wiki pages; comments in the
> > source code might link to other wiki pages.
>
> I wonder how this would compare to Knuth's WEB and Literate
> Programming. Would this be Super-Literate Programming? ;-)

I sure hope so. I've always wondered why literate programming didn't
take off more than it has; I think it's a terrifically good idea. I
think part of the problem is that (a) existing programming tools
don't support it, and (b) programmers haven't been trained to do it.
But now we have wikis, and people already know how to use wikis, so
there's already a base of people prepared to do literate programming
in wikis.

> > Anyone who wanted to make a change would do it. After each change,
> > the code would be automatically extracted from the wiki and
> > rebuilt, and the automated tests (other wiki pages) would be run.
> > If a change resulted in new test failures, the test failure report
> > would be automatically added to the wiki, as an annotation to the
> > change that caused them.
>
> Here we run into a serious security issue.

It's serious, but every large software project has this security
issue already. Lots of people all over the world run automatic tests
of Perl on hundreds of machines every time a change is published. How
do we prevent those machines from losing all their files or turning
into spam machines? We have a cabal of people who are empowered to
accept or reject changes. For wiliki, I imagine that the wiki would
track which changes had been blessed by the cabal, and the test and
snapshot versions of the code would contain only blessed changes.

Similarly, consider a big company like Microsoft. How do they prevent
a programmer from putting code into Microsoft Word 1.3 that trashes
their test machines or turns them into spam machines? There are two
answers. (1) Other employees review the changes. (2) Anyone caught
doing anything naughty like that will be fired and/or sued. (1)
translates to a blessing scheme like the one Perl has. (2) requires
no translation. Not every problem requires a technical solution.

> > Periodically, a snapshot of working code would be taken and
> > packaged and a new version would be released.
>
> Would this be automatic, or would a human decide that a snapshot
> release was appropriate?

I think packaging is a social matter. For example, packaging
decisions are often made on the basis that the big annual conference
is next month and we'd like to have the new version out by then, even
if it's not as perfect as we'd hoped. The computer isn't going to be
able to take these sorts of things into account.

> Just some other off the cuff thoughts here...
>
> How difficult a problem is it to parse source code written in an
> arbitrary language out of a wiki's page source?

I don't think it matters, because you can always require that source
code be tagged, or require that all non-source code be tagged, or
something of the sort.
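For instance, here is a minimal sketch of such tagging, assuming an
invented markup in which code sits between lines reading '{{code}}'
and '{{/code}}'; the markers and file names are hypothetical, and any
unambiguous pair of markers would do:

    /* extract.c -- pull tagged source code out of a wiki page.
     *
     * Build:  cc -o extract extract.c
     * Use:    extract < JobQueuePage.txt > jobqueue.c
     */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[4096];
        int in_code = 0;                /* inside a code block? */

        while (fgets(line, sizeof line, stdin) != NULL) {
            if (strncmp(line, "{{code}}", 8) == 0)
                in_code = 1;            /* block begins */
            else if (strncmp(line, "{{/code}}", 9) == 0)
                in_code = 0;            /* block ends */
            else if (in_code)
                fputs(line, stdout);    /* code passes through verbatim */
            /* everything else is prose and is ignored */
        }
        return 0;
    }

The extractor never has to understand C, Perl, or any other source
language; it only has to find the markers.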
> For things like C/C++ header files (and Java source files), it
> would probably be useful to be able to specify which file a
> function, variable, const, etc. goes into. I'm sure this is
> obvious, but I didn't really see any talk of this in the thread.

That might be necessary, but I'd sure like to get away from the
notion that source code must be a set of files. There are real issues
here of sharing, visibility, scope, and abstraction that are not
essentially related to the matter of what goes in what file. Putting
code object X into a certain file is a clumsy way to express the
sharing, visibility, and abstraction properties that you want X to
have. It should be possible to think about the underlying properties
of code objects at a higher level, and then let the wiki take care of
the implementation details.

> Could code dependencies be handled by wiliki, or would that be a
> testing issue? I'm thinking of #includes, requires, uses, things of
> that nature, not to mention linkage requirements for languages that
> go through a link stage.

Sorry, I don't understand what issue you're raising here.

> How long would it take to write a conceptual prototype?

Probably not too long, and the world's already full of wiki software.

> I don't really know if wiliki is a good idea or not.

Me neither. Sometimes these things just don't work out.

> Can some existing project be back-ported into a wiliki format for
> test purposes, or would an entirely new project have to be created
> from scratch?

It seems to me that backporting wouldn't work unless most of the
current developers were willing to commit to the wiliki project.

Thanks a lot for your comments.