Showing content from http://mail.python.org/pipermail/python-dev/2001-February.txt below:

[pid 5072] getpid() = 5072
[pid 5072] getpid() = 5072
[pid 5072] dup2(3, 0) = 0
[pid 5072] dup2(6, 1) = 1
[pid 5072] close(3) = 0
[pid 5072] close(4) = 0
[pid 5072] close(5) = 0
[pid 5072] close(6) = 0
[pid 5072] close(7) = -1 EBADF (Bad file descriptor)
[... identical close(fd) = -1 EBADF (Bad file descriptor) for every descriptor from 8 through 254 ...]
[pid 5072] close(255) = -1 EBADF (Bad file descriptor)
[pid 5072] getpid() = 5072
[pid 5072] rt_sigaction(SIGRT_0, {SIG_DFL}, NULL, 8) = 0
[pid 5072] rt_sigaction(SIGRT_1, {SIG_DFL}, NULL, 8) = 0
[pid 5072] rt_sigaction(SIGRT_2, {SIG_DFL}, NULL, 8) = 0
[pid 5072] execve("/bin/sh", ["/bin/sh", "-c", "python tryout.py"], [/* 30 vars */]) = 0
[pid 5072] brk(0) = 0x80a5420
[pid 5072] open("/etc/ld.so.preload", O_RDONLY) = -1 ENOENT (No such file or directory)
[... the dynamic loader probes /usr/local/ace/ace/{i686/mmx,i686,mmx,.}/libtermcap.so.2 (all ENOENT), then finds the library through /etc/ld.so.cache ...]
[pid 5072] open("/lib/libtermcap.so.2", O_RDONLY) = 3
[pid 5072] fstat(3, {st_mode=S_IFREG|0755, st_size=15001, ...}) = 0
[pid 5072] read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\300\v\0"..., 4096) = 4096
[pid 5072] mmap(0, 13896, PROT_READ|PROT_EXEC, MAP_PRIVATE, 3, 0) = 0x4001a000
[pid 5072] mprotect(0x4001d000, 1608, PROT_NONE) = 0
[pid 5072] mmap(0x4001d000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0x2000) = 0x4001d000
[pid 5072] close(3) = 0
[... /lib/libc.so.6 is mapped the same way (after a failed probe of /usr/local/ace/ace/libc.so.6), followed by personality(0), getpid() and a few brk() calls ...]
[pid 5072] getuid() = 1002
[pid 5072] getgid() = 100
[pid 5072] geteuid() = 1002
[pid 5072] getegid() = 100
[pid 5072] time(NULL) = 983229974
[... /bin/sh installs its signal handler (0x804bb38) with rt_sigaction() for SIGHUP, SIGINT, SIGILL, SIGTRAP, SIGABRT, SIGFPE, SIGBUS, SIGSEGV, SIGPIPE, SIGALRM, SIGTERM, SIGXCPU, SIGXFSZ, SIGVTALRM, SIGPROF, SIGUSR1 and SIGUSR2, then sets SIGPIPE and SIGQUIT to SIG_IGN ...]
[pid 5072] socket(PF_UNIX, SOCK_STREAM, 0) = 3
[pid 5072] connect(3, {sun_family=AF_UNIX, sun_path="/var/run/.nscd_socket"}, 110) = -1 ECONNREFUSED (Connection refused)
[pid 5072] close(3) = 0
[... the shell reads /etc/nsswitch.conf and /etc/passwd, calls uname(), and loads /lib/libnss_files.so.2, /lib/libnss_nisplus.so.2, /lib/libnsl.so.1 and /lib/libnss_nis.so.2 along the way ...]
[pid 5072] getcwd("/a/akbar/home/gvwilson/p2", 4095) = 26
[pid 5072] getpid() = 5072
[pid 5072] getppid() = 5071
[pid 5072] getpgrp() = 5070
[pid 5072] fcntl(-1, F_SETFD, FD_CLOEXEC) = -1 EBADF (Bad file descriptor)
[pid 5072] rt_sigaction(SIGCHLD, {0x806059c, [], 0x4000000}, {SIG_DFL}, 8) = 0
[pid 5072] stat(".", {st_mode=S_IFDIR|0777, st_size=1024, ...}) = 0
[pid 5072] stat("/home/gvwilson/bin/python", {st_mode=S_IFREG|0755, st_size=1407749, ...}) = 0
[... the shell resets SIGHUP through SIGUSR2 to SIG_DFL (leaving SIGPIPE ignored) and puts SIGINT, SIGQUIT and SIGCHLD back to SIG_DFL ...]
[pid 5072] execve("/home/gvwilson/bin/python", ["python", "tryout.py"], [/* 29 vars */]) = 0
[pid 5072] brk(0) = 0x80bf6dc
[pid 5072] open("/etc/ld.so.preload", O_RDONLY) = -1 ENOENT (No such file or directory)
[... the loader again probes the /usr/local/ace/ace directories without success, then maps /lib/libpthread.so.0, /lib/libdl.so.2, /lib/libutil.so.1, /lib/libm.so.6 and /lib/libc.so.6 ...]
[pid 5072] personality(0 /* PER_??? */) = 0
[pid 5072] getpid() = 5072
[pid 5072] getrlimit(RLIMIT_STACK, {rlim_cur=2040*1024, rlim_max=RLIM_INFINITY}) = 0
[pid 5072] getpid() = 5072
[pid 5072] uname({sys="Linux", node="akbar.nevex.com", ...}) = 0
[... the pthread library installs handlers for SIGRT_0, SIGRT_1 and SIGRT_2 and blocks SIGRT_0 ...]
[pid 5072] open("tryout.py", O_RDONLY) = 3
[pid 5072] ioctl(0, TCGETS, 0xbffffabc) = -1 EINVAL (Invalid argument)
[pid 5072] stat("/home/gvwilson/bin/python", {st_mode=S_IFREG|0755, st_size=1407749, ...}) = 0
[pid 5072] readlink("/home/gvwilson/bin/python", "python2.1", 1024) = 9
[pid 5072] readlink("/home/gvwilson/bin/python2.1", 0xbffff30c, 1024) = -1 EINVAL (Invalid argument)
[... sys.path setup, with interleaved brk() calls: /home/gvwilson/bin/Modules/Setup and /home/gvwilson/bin/lib/python2.1/os.py{,c} do not exist; /home/gvwilson/lib/python2.1/os.py and /home/gvwilson/lib/python2.1/lib-dynload are found ...]
[pid 5072] rt_sigaction(SIGPIPE, {SIG_IGN}, {SIG_IGN}, 8) = 0
[... the interpreter queries the current disposition of every signal (SIGHUP through SIGRT_31) with rt_sigaction(sig, NULL, ...) and then installs its own SIGINT handler (0x40021460) ...]
[... imports of site, os, posixpath, stat and UserDict: for each module the interpreter stats <name>, tries <name>.so and <name>module.so (ENOENT), opens <name>.py, lists /home/gvwilson/lib/python2.1 with getdents(), and reads the matching .pyc ...]
[... it then scans /home/gvwilson/lib/python2.1/site-packages, finds no /home/gvwilson/lib/site-python, reads /usr/share/locale/locale.alias, fails to find the libc.mo message catalogs, and probes every sys.path directory for sitecustomize{,.so,module.so,.py,.pyc} without success ...]
[pid 5072] close(4) = 0
[pid 5072] readlink("tryout.py", 0xbffff748, 1024) = -1 EINVAL (Invalid argument)
[pid 5072] ioctl(3, TCGETS, 0xbffffa9c) = -1 ENOTTY (Inappropriate ioctl for device)
[pid 5072] fstat(3, {st_mode=S_IFREG|0666, st_size=20, ...}) = 0
[pid 5072] mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40013000
[pid 5072] _llseek(3, 0, [0], SEEK_CUR) = 0
[pid 5072] read(3, "print \"We made it!\"\n", 4096) = 20
[pid 5072] _llseek(3, 20, [20], SEEK_SET) = 0
[pid 5072] brk(0x8101000) = 0x8101000
[pid 5072] read(3, "", 4096) = 0
[pid 5072] close(3) = 0
[pid 5072] munmap(0x40013000, 4096) = 0
[pid 5072] fstat(1, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
[pid 5072] mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40013000
[pid 5072] rt_sigaction(SIGINT, NULL, {0x40021460, [], 0x4000000}, 8) = 0
[pid 5072] rt_sigaction(SIGINT, {SIG_DFL}, NULL, 8) = 0
[pid 5072] write(1, "We made it!\n", 12) = 12
[pid 5071] <... read resumed> "We made it!\n", 8192) = 12
[pid 5071] read(5,
[pid 5072] munmap(0x40013000, 4096) = 0
[pid 5072] _exit(0) = ?
<... read resumed> "", 4096) = 0
--- SIGCHLD (Child exited) ---
read(5, "", 8192) = 0
wait4(5072, [WIFEXITED(s) && WEXITSTATUS(s) == 0], WNOHANG, NULL) = 5072
pipe([3, 4]) = 0
pipe([6, 7]) = 0
fork() = 5073
[pid 5071] close(3) = 0
[pid 5071] fcntl(4, F_GETFL) = 0x1 (flags O_WRONLY)
[pid 5071] fstat(4, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
[pid 5071] mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40142000
[pid 5071] _llseek(4, 0, 0xbffff338, SEEK_CUR) = -1 ESPIPE (Illegal seek)
[pid 5071] close(7) = 0
[pid 5071] fcntl(6, F_GETFL) = 0 (flags O_RDONLY)
[pid 5071] fstat(6, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
[pid 5071] mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40143000
[pid 5071] _llseek(6, 0, 0xbffff338, SEEK_CUR) = -1 ESPIPE (Illegal seek)
[pid 5071] close(4
[pid 5073] getpid() = 5073
[pid 5073] getpid() = 5073
[pid 5073] dup2(3, 0) = 0
[pid 5073] dup2(7, 1) = 1
[pid 5073] close(3) = 0
[pid 5073] close(4) = 0
[pid 5073] close(5) = 0
[pid 5073] close(6) = 0
[pid 5073] close(7) = 0
[pid 5073] close(8) = -1 EBADF (Bad file descriptor)
[... close(9) through close(255) likewise all return -1 EBADF (Bad file descriptor) ...]
[pid 5073] getpid() = 5073
[pid 5073] rt_sigaction(SIGRT_0, {SIG_DFL}, NULL, 8) = 0
[pid 5073] rt_sigaction(SIGRT_1, {SIG_DFL}, NULL, 8) = 0
[pid 5073] rt_sigaction(SIGRT_2, {SIG_DFL}, NULL, 8) = 0
[pid 5073] execve("/bin/sh", ["/bin/sh", "-c", "python tryout.py"], [/* 30 vars */]) = 0
[pid 5071] <... close resumed> ) = 0
[pid 5071] munmap(0x40142000, 4096) = 0
[pid 5071] read(6,
[pid 5073] brk(0) = 0x80a5420
[pid 5073] open("/etc/ld.so.preload", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/usr/local/ace/ace/i686/mmx/libtermcap.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] stat("/usr/local/ace/ace/i686/mmx", 0xbffff550) = -1 ENOENT (No such file or directory)
[pid 5073] open("/usr/local/ace/ace/i686/libtermcap.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] stat("/usr/local/ace/ace/i686", 0xbffff550) = -1 ENOENT (No such file or directory)
[pid 5073] open("/usr/local/ace/ace/mmx/libtermcap.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] stat("/usr/local/ace/ace/mmx", 0xbffff550) = -1 ENOENT (No such file or directory)
[pid 5073] open("/usr/local/ace/ace/libtermcap.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] stat("/usr/local/ace/ace", {st_mode=S_IFDIR|0775, st_size=19456, ...}) = 0
[pid 5073] open("/etc/ld.so.cache", O_RDONLY) = 3
[pid 5073] fstat(3, {st_mode=S_IFREG|0644, st_size=25676, ...}) = 0
[pid 5073] mmap(0, 25676, PROT_READ, MAP_PRIVATE, 3, 0) = 0x40013000
[pid 5073] close(3) = 0
[pid 5073] open("/lib/libtermcap.so.2", O_RDONLY) = 3
[pid 5073] fstat(3, {st_mode=S_IFREG|0755, st_size=15001, ...}) = 0
[pid 5073] read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\300\v\0"..., 4096) = 4096
[pid 5073] mmap(0, 13896, PROT_READ|PROT_EXEC, MAP_PRIVATE, 3, 0) = 0x4001a000
[pid 5073] mprotect(0x4001d000, 1608, PROT_NONE) = 0
[pid 5073] mmap(0x4001d000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0x2000) = 0x4001d000
[pid 5073] close(3) = 0
[pid 5073] open("/usr/local/ace/ace/libc.so.6", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/lib/libc.so.6", O_RDONLY) = 3
[pid 5073] fstat(3, {st_mode=S_IFREG|0755, st_size=4118299, ...}) = 0
[pid 5073] read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\250\202"..., 4096) = 4096
[pid 5073] mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x4001e000
[pid 5073] mmap(0, 993500, PROT_READ|PROT_EXEC, MAP_PRIVATE, 3, 0) = 0x4001f000
[pid 5073] mprotect(0x4010a000, 30940, PROT_NONE) = 0
[pid 5073] mmap(0x4010a000, 16384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0xea000) = 0x4010a000
[pid 5073] mmap(0x4010e000, 14556, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x4010e000
[pid 5073] close(3) = 0
[pid 5073] mprotect(0x4001f000, 962560, PROT_READ|PROT_WRITE) = 0
[pid 5073] mprotect(0x4001f000, 962560, PROT_READ|PROT_EXEC) = 0
[pid 5073] munmap(0x40013000, 25676) = 0
[pid 5073] personality(0 /* PER_??? */) = 0
[pid 5073] getpid() = 5073
[pid 5073] brk(0) = 0x80a5420
[pid 5073] brk(0x80a55c0) = 0x80a55c0
[pid 5073] brk(0x80a6000) = 0x80a6000
[pid 5073] getuid() = 1002
[pid 5073] getgid() = 100
[pid 5073] geteuid() = 1002
[pid 5073] getegid() = 100
[pid 5073] time(NULL) = 983229974
[pid 5073] rt_sigaction(SIGCHLD, {SIG_DFL}, {SIG_DFL}, 8) = 0
[pid 5073] rt_sigaction(SIGCHLD, {SIG_DFL}, {SIG_DFL}, 8) = 0
[pid 5073] rt_sigaction(SIGINT, {SIG_DFL}, {SIG_DFL}, 8) = 0
[pid 5073] rt_sigaction(SIGINT, {SIG_DFL}, {SIG_DFL}, 8) = 0
[pid 5073] rt_sigaction(SIGQUIT, {SIG_DFL}, {SIG_DFL}, 8) = 0
[pid 5073] rt_sigaction(SIGQUIT, {SIG_DFL}, {SIG_DFL}, 8) = 0
[pid 5073] rt_sigaction(SIGHUP, {0x804bb38, [HUP INT ILL TRAP ABRT BUS FPE USR1 SEGV USR2 PIPE ALRM TERM XCPU XFSZ VTALRM PROF], 0x4000000}, {SIG_DFL}, 8) = 0
[... the same handler 0x804bb38 (same mask, flags 0x4000000) is installed in turn for SIGINT, SIGILL, SIGTRAP, SIGABRT, SIGFPE, SIGBUS and SIGSEGV, each replacing {SIG_DFL} ...]
[pid 5073] rt_sigaction(SIGPIPE, {0x804bb38, [HUP INT ILL TRAP ABRT BUS FPE USR1 SEGV USR2 PIPE ALRM TERM XCPU XFSZ VTALRM PROF], 0x4000000}, {SIG_IGN}, 8) = 0
[pid 5073] rt_sigaction(SIGPIPE, {SIG_IGN}, {0x804bb38, [HUP INT ILL TRAP ABRT BUS FPE USR1 SEGV USR2 PIPE ALRM TERM XCPU XFSZ VTALRM PROF], 0x4000000}, 8) = 0
[... the same handler is then installed for SIGALRM, SIGTERM, SIGXCPU, SIGXFSZ, SIGVTALRM, SIGPROF, SIGUSR1 and SIGUSR2, each replacing {SIG_DFL} ...]
[pid 5073] rt_sigprocmask(SIG_BLOCK, NULL, [RT_0], 8) = 0
[pid 5073] rt_sigaction(SIGQUIT, {SIG_IGN}, {SIG_DFL}, 8) = 0
[pid 5073] socket(PF_UNIX, SOCK_STREAM, 0) = 3
[pid 5073] connect(3, {sun_family=AF_UNIX, sun_path="/var/run/.nscd_socket"}, 110) = -1 ECONNREFUSED (Connection refused)
[pid 5073] close(3) = 0
[pid 5073] open("/etc/nsswitch.conf", O_RDONLY) = 3
[pid 5073] fstat(3, {st_mode=S_IFREG|0644, st_size=1744, ...}) = 0
[pid 5073] mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40013000
[pid 5073] read(3, "#\n# /etc/nsswitch.conf\n#\n# An ex"..., 4096) = 1744
[pid 5073] brk(0x80a7000) = 0x80a7000
[pid 5073] read(3, "", 4096) = 0
[pid 5073] close(3) = 0
[pid 5073] munmap(0x40013000, 4096) = 0
[pid 5073] open("/usr/local/ace/ace/libnss_files.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/etc/ld.so.cache", O_RDONLY) = 3
[pid 5073] fstat(3, {st_mode=S_IFREG|0644, st_size=25676, ...}) = 0
[pid 5073] mmap(0, 25676, PROT_READ, MAP_PRIVATE, 3, 0) = 0x40013000
[pid 5073] close(3) = 0
[pid 5073] open("/lib/libnss_files.so.2", O_RDONLY) = 3
[pid 5073] fstat(3, {st_mode=S_IFREG|0755, st_size=247348, ...}) = 0
[pid 5073] read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\360\33"..., 4096) = 4096
[pid 5073] mmap(0, 35232, PROT_READ|PROT_EXEC, MAP_PRIVATE, 3, 0) = 0x40112000
[pid 5073] mprotect(0x4011a000, 2464, PROT_NONE) = 0
[pid 5073] mmap(0x4011a000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0x7000) = 0x4011a000
[pid 5073] close(3) = 0
[pid 5073] munmap(0x40013000, 25676) = 0
[pid 5073] open("/etc/passwd", O_RDONLY) = 3
[pid 5073] fcntl(3, F_GETFD) = 0
[pid 5073] fcntl(3, F_SETFD, FD_CLOEXEC) = 0
[pid 5073] fstat(3, {st_mode=S_IFREG|0644, st_size=4890, ...}) = 0
[pid 5073] mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40013000
[pid 5073] read(3, "root:DSVw9Br8/N7yc:0:0:root:/roo"..., 4096) = 4096
[pid 5073] close(3) = 0
[pid 5073] munmap(0x40013000, 4096) = 0
[pid 5073] uname({sys="Linux", node="akbar.nevex.com", ...}) = 0
[pid 5073] open("/usr/local/ace/ace/libnss_nisplus.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/etc/ld.so.cache", O_RDONLY) = 3
[pid 5073] fstat(3, {st_mode=S_IFREG|0644, st_size=25676, ...}) = 0
[pid 5073] mmap(0, 25676, PROT_READ, MAP_PRIVATE, 3, 0) = 0x40013000
[pid 5073] close(3) = 0
[pid 5073] open("/lib/libnss_nisplus.so.2", O_RDONLY) = 3
[pid 5073] fstat(3, {st_mode=S_IFREG|0755, st_size=253826, ...}) = 0
[pid 5073] read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\320\32"..., 4096) = 4096
[pid 5073] mmap(0, 40852, PROT_READ|PROT_EXEC, MAP_PRIVATE, 3, 0) = 0x4011b000
[pid 5073] mprotect(0x40124000, 3988, PROT_NONE) = 0
[pid 5073] mmap(0x40124000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0x8000) = 0x40124000
[pid 5073] close(3) = 0
[pid 5073] open("/usr/local/ace/ace/libnsl.so.1", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/lib/libnsl.so.1", O_RDONLY) = 3
[pid 5073] fstat(3, {st_mode=S_IFREG|0755, st_size=372604, ...}) = 0
[pid 5073] read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\2408\0"..., 4096) = 4096
[pid 5073] mmap(0, 86440, PROT_READ|PROT_EXEC, MAP_PRIVATE, 3, 0) = 0x40125000
[pid 5073] mprotect(0x40137000, 12712, PROT_NONE) = 0
[pid 5073] mmap(0x40137000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0x11000) = 0x40137000
[pid 5073] mmap(0x40138000, 8616, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x40138000
[pid 5073] close(3) = 0
[pid 5073] munmap(0x40013000, 25676) = 0
[pid 5073] open("/usr/local/ace/ace/libnss_nis.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/etc/ld.so.cache", O_RDONLY) = 3
[pid 5073] fstat(3, {st_mode=S_IFREG|0644, st_size=25676, ...}) = 0
[pid 5073] mmap(0, 25676, PROT_READ, MAP_PRIVATE, 3, 0) = 0x40013000
[pid 5073] close(3) = 0
[pid 5073] open("/lib/libnss_nis.so.2", O_RDONLY) = 3
[pid 5073] fstat(3, {st_mode=S_IFREG|0755, st_size=254027, ...}) = 0
[pid 5073] read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\240\32"..., 4096) = 4096
[pid 5073] mmap(0, 36368, PROT_READ|PROT_EXEC, MAP_PRIVATE, 3, 0) = 0x4013b000
[pid 5073] mprotect(0x40143000, 3600, PROT_NONE) = 0
[pid 5073] mmap(0x40143000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0x7000) = 0x40143000
[pid 5073] close(3) = 0
[pid 5073] munmap(0x40013000, 25676) = 0
[pid 5073] brk(0x80a8000) = 0x80a8000
[pid 5073] brk(0x80aa000) = 0x80aa000
[pid 5073] getcwd("/a/akbar/home/gvwilson/p2", 4095) = 26
[pid 5073] getpid() = 5073
[pid 5073] getppid() = 5071
[pid 5073] getpgrp() = 5070
[pid 5073] fcntl(-1, F_SETFD, FD_CLOEXEC) = -1 EBADF (Bad file descriptor)
[pid 5073] rt_sigaction(SIGCHLD, {0x806059c, [], 0x4000000}, {SIG_DFL}, 8) = 0
[pid 5073] brk(0x80ab000) = 0x80ab000
[pid 5073] stat(".", {st_mode=S_IFDIR|0777, st_size=1024, ...}) = 0
[pid 5073] stat("/home/gvwilson/bin/python", {st_mode=S_IFREG|0755, st_size=1407749, ...}) = 0
[pid 5073] brk(0x80ac000) = 0x80ac000
[pid 5073] rt_sigaction(SIGHUP, {SIG_DFL}, NULL, 8) = 0
[... SIGILL, SIGTRAP, SIGABRT, SIGFPE, SIGBUS, SIGSEGV, SIGALRM, SIGTERM, SIGXCPU, SIGXFSZ, SIGVTALRM, SIGPROF, SIGUSR1 and SIGUSR2 are reset to {SIG_DFL} the same way ...]
[pid 5073] rt_sigaction(SIGPIPE, {SIG_IGN}, NULL, 8) = 0
[pid 5073] rt_sigaction(SIGINT, {SIG_DFL}, {0x804bb38, [HUP INT ILL TRAP ABRT BUS FPE USR1 SEGV USR2 PIPE ALRM TERM XCPU XFSZ VTALRM PROF], 0x4000000}, 8) = 0
[pid 5073] rt_sigaction(SIGQUIT, {SIG_DFL}, {SIG_IGN}, 8) = 0
[pid 5073] rt_sigaction(SIGCHLD, {SIG_DFL}, {0x806059c, [], 0x4000000}, 8) = 0
[pid 5073] execve("/home/gvwilson/bin/python", ["python", "tryout.py"], [/* 29 vars */]) = 0
[pid 5073] brk(0) = 0x80bf6dc
[pid 5073] open("/etc/ld.so.preload", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/usr/local/ace/ace/i686/mmx/libpthread.so.0", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] stat("/usr/local/ace/ace/i686/mmx", 0xbffff550) = -1 ENOENT (No such file or directory)
[pid 5073] open("/usr/local/ace/ace/i686/libpthread.so.0", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] stat("/usr/local/ace/ace/i686", 0xbffff550) = -1 ENOENT (No such file or directory)
[pid 5073] open("/usr/local/ace/ace/mmx/libpthread.so.0", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] stat("/usr/local/ace/ace/mmx", 0xbffff550) = -1 ENOENT (No such file or directory)
[pid 5073] open("/usr/local/ace/ace/libpthread.so.0", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] stat("/usr/local/ace/ace", {st_mode=S_IFDIR|0775, st_size=19456, ...}) = 0
[pid 5073] open("/etc/ld.so.cache", O_RDONLY) = 3
[pid 5073] fstat(3, {st_mode=S_IFREG|0644, st_size=25676, ...}) = 0
[pid 5073] mmap(0, 25676, PROT_READ, MAP_PRIVATE, 3, 0) = 0x40013000
[pid 5073] close(3) = 0
[pid 5073] open("/lib/libpthread.so.0", O_RDONLY) = 3
[pid 5073] fstat(3, {st_mode=S_IFREG|0755, st_size=247381, ...}) = 0
[pid 5073] read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\3407\0"..., 4096) = 4096
[pid 5073] mmap(0, 69188, PROT_READ|PROT_EXEC, MAP_PRIVATE, 3, 0) = 0x4001a000
[pid 5073] mprotect(0x40024000, 28228, PROT_NONE) = 0
[pid 5073] mmap(0x40024000, 28672, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0x9000) = 0x40024000
[pid 5073] close(3) = 0
[pid 5073] open("/usr/local/ace/ace/libdl.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/lib/libdl.so.2", O_RDONLY) = 3
[pid 5073] fstat(3, {st_mode=S_IFREG|0755, st_size=74663, ...}) = 0
[pid 5073] read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0p\31\0\000"..., 4096) = 4096
[pid 5073] mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x4002b000
[pid 5073] mmap(0, 11532, PROT_READ|PROT_EXEC, MAP_PRIVATE, 3, 0) = 0x4002c000
[pid 5073] mprotect(0x4002e000, 3340, PROT_NONE) = 0
[pid 5073] mmap(0x4002e000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0x1000) = 0x4002e000
[pid 5073] close(3) = 0
[pid 5073] open("/usr/local/ace/ace/libutil.so.1", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/lib/libutil.so.1", O_RDONLY) = 3
[pid 5073] fstat(3, {st_mode=S_IFREG|0755, st_size=46504, ...}) = 0
[pid 5073] read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0P\f\0\000"..., 4096) = 4096
[pid 5073] mmap(0, 10104, PROT_READ|PROT_EXEC, MAP_PRIVATE, 3, 0) = 0x4002f000
[pid 5073] mprotect(0x40031000, 1912, PROT_NONE) = 0
[pid 5073] mmap(0x40031000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0x1000) = 0x40031000
[pid 5073] close(3) = 0
[pid 5073] open("/usr/local/ace/ace/libm.so.6", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/lib/libm.so.6", O_RDONLY) = 3
[pid 5073] fstat(3, {st_mode=S_IFREG|0755, st_size=540120, ...}) = 0
[pid 5073] read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\320=\0"..., 4096) = 4096
[pid 5073] mmap(0, 114648, PROT_READ|PROT_EXEC, MAP_PRIVATE, 3, 0) = 0x40032000
[pid 5073] mprotect(0x4004d000, 4056, PROT_NONE) = 0
[pid 5073] mmap(0x4004d000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0x1a000) = 0x4004d000
[pid 5073] close(3) = 0
[pid 5073] open("/usr/local/ace/ace/libc.so.6", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/lib/libc.so.6", O_RDONLY) = 3
[pid 5073] fstat(3, {st_mode=S_IFREG|0755, st_size=4118299, ...}) = 0
[pid 5073] read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\250\202"..., 4096) = 4096
[pid 5073] mmap(0, 993500, PROT_READ|PROT_EXEC, MAP_PRIVATE, 3, 0) = 0x4004e000
[pid 5073] mprotect(0x40139000, 30940, PROT_NONE) = 0
[pid 5073] mmap(0x40139000, 16384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0xea000) = 0x40139000
[pid 5073] mmap(0x4013d000, 14556, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x4013d000
[pid 5073] close(3) = 0
[pid 5073] mprotect(0x4004e000, 962560, PROT_READ|PROT_WRITE) = 0
[pid 5073] mprotect(0x4004e000, 962560, PROT_READ|PROT_EXEC) = 0
[pid 5073] munmap(0x40013000, 25676) = 0
[pid 5073] personality(0 /* PER_??? */) = 0
[pid 5073] getpid() = 5073
[pid 5073] getrlimit(RLIMIT_STACK, {rlim_cur=2040*1024, rlim_max=RLIM_INFINITY}) = 0
[pid 5073] getpid() = 5073
[pid 5073] uname({sys="Linux", node="akbar.nevex.com", ...}) = 0
[pid 5073] rt_sigaction(SIGRT_0, {0x40020e10, [], 0x4000000}, NULL, 8) = 0
[pid 5073] rt_sigaction(SIGRT_1, {0x400207ac, [], 0x4000000}, NULL, 8) = 0
[pid 5073] rt_sigaction(SIGRT_2, {0x40020e9c, [], 0x4000000}, NULL, 8) = 0
[pid 5073] rt_sigprocmask(SIG_BLOCK, [RT_0], NULL, 8) = 0
[pid 5073] brk(0) = 0x80bf6dc
[pid 5073] brk(0x80bf70c) = 0x80bf70c
[pid 5073] brk(0x80c0000) = 0x80c0000
[pid 5073] open("tryout.py", O_RDONLY) = 3
[pid 5073] ioctl(0, TCGETS, 0xbffffabc) = -1 EINVAL (Invalid argument)
[pid 5073] brk(0x80c1000) = 0x80c1000
[pid 5073] brk(0x80c2000) = 0x80c2000
[pid 5073] brk(0x80c3000) = 0x80c3000
[pid 5073] brk(0x80c4000) = 0x80c4000
[pid 5073] stat("/home/gvwilson/bin/python", {st_mode=S_IFREG|0755, st_size=1407749, ...}) = 0
[pid 5073] readlink("/home/gvwilson/bin/python", "python2.1", 1024) = 9
[pid 5073] readlink("/home/gvwilson/bin/python2.1", 0xbffff30c, 1024) = -1 EINVAL (Invalid argument)
[pid 5073] stat("/home/gvwilson/bin/Modules/Setup", 0xbffff1f4) = -1 ENOENT (No such file or directory)
[pid 5073] stat("/home/gvwilson/bin/lib/python2.1/os.py", 0xbffff1d4) = -1 ENOENT (No such file or directory)
[pid 5073] stat("/home/gvwilson/bin/lib/python2.1/os.pyc", 0xbffff1cc) = -1 ENOENT (No such file or directory)
[pid 5073] stat("/home/gvwilson/lib/python2.1/os.py", {st_mode=S_IFREG|0644, st_size=16300, ...}) = 0
[pid 5073] stat("/home/gvwilson/bin/Modules/Setup", 0xbffff1f8) = -1 ENOENT (No such file or directory)
[pid 5073] stat("/home/gvwilson/bin/lib/python2.1/lib-dynload", 0xbffff1f0) = -1 ENOENT (No such file or directory)
[pid 5073] stat("/home/gvwilson/lib/python2.1/lib-dynload", {st_mode=S_IFDIR|0777, st_size=1024, ...}) = 0
[pid 5073] brk(0x80c5000) = 0x80c5000
[pid 5073] brk(0x80c6000) = 0x80c6000
[pid 5073] brk(0x80c7000) = 0x80c7000
[pid 5073] brk(0x80c8000) = 0x80c8000
[pid 5073] brk(0x80c9000) = 0x80c9000
[pid 5073] brk(0x80ca000) = 0x80ca000
[pid 5073] rt_sigaction(SIGPIPE, {SIG_IGN}, {SIG_IGN}, 8) = 0
[pid 5073] getpid() = 5073
[pid 5073] brk(0x80cb000) = 0x80cb000
[pid 5073] rt_sigaction(SIGHUP, NULL, {SIG_DFL}, 8) = 0
[... the dispositions of all the remaining signals (SIGINT through SIGUNUSED and SIGRT_3 through SIGRT_31) are queried the same way; every one reports {SIG_DFL} except SIGPIPE, which reports {SIG_IGN} ...]
[pid 5073] rt_sigaction(SIGINT, NULL, {SIG_DFL}, 8) = 0
[pid 5073] rt_sigaction(SIGINT, {0x40021460, [], 0x4000000}, NULL, 8) = 0
[pid 5073] brk(0x80cd000) = 0x80cd000
[pid 5073] stat("/home/gvwilson/lib/python2.1/site", 0xbfffec9c) = -1 ENOENT (No such file or directory)
[pid 5073] open("/home/gvwilson/lib/python2.1/site.so", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/home/gvwilson/lib/python2.1/sitemodule.so", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/home/gvwilson/lib/python2.1/site.py", O_RDONLY) = 4
[pid 5073] open("/dev/null", O_RDONLY|O_NONBLOCK|O_DIRECTORY) = -1 ENOTDIR (Not a directory)
[pid 5073] open("/home/gvwilson/lib/python2.1", O_RDONLY|O_NONBLOCK|O_DIRECTORY) = 5
[pid 5073] fstat(5, {st_mode=S_IFDIR|0755, st_size=10240, ...}) = 0
[pid 5073] fcntl(5, F_SETFD, FD_CLOEXEC) = 0
[pid 5073] getdents(5, /* 53 entries */, 3933) = 1168
[pid 5073] getdents(5, /* 52 entries */, 3933) = 1156
[pid 5073] getdents(5, /* 53 entries */, 3933) = 1172
[pid 5073] close(5) = 0
[pid 5073] fstat(4, {st_mode=S_IFREG|0644, st_size=8778, ...}) = 0
[pid 5073] open("/home/gvwilson/lib/python2.1/site.pyc", O_RDONLY) = 5
[pid 5073] fstat(5, {st_mode=S_IFREG|0666, st_size=9529, ...}) = 0
[pid 5073] mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40013000
[pid 5073] read(5, "*\353\r\n\310\274\232:c\0\0\0\0\t\0\0\0s\343\4\0\0\177"..., 4096) = 4096
[pid 5073] fstat(5, {st_mode=S_IFREG|0666, st_size=9529, ...}) = 0
[pid 5073] read(5, "\203\1\0}\1\0Wn \0\177a\0\4t\5\0i\10\0j\n\0o\16\0\1\1\1"..., 4096) = 4096
[pid 5073] read(5, "\0LICENSE.txts\7\0\0\0LICENSEs\5\0\0\0asc"..., 4096) = 1337
[pid 5073] read(5, "", 4096) = 0
[pid 5073] brk(0x80ce000) = 0x80ce000
[pid 5073] brk(0x80cf000) = 0x80cf000
[pid 5073] brk(0x80d0000) = 0x80d0000
[pid 5073] close(5) = 0
[pid 5073] munmap(0x40013000, 4096) = 0
[pid 5073] stat("/home/gvwilson/lib/python2.1/os", 0xbfffde20) = -1 ENOENT (No such file or directory)
[pid 5073] open("/home/gvwilson/lib/python2.1/os.so", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/home/gvwilson/lib/python2.1/osmodule.so", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/home/gvwilson/lib/python2.1/os.py", O_RDONLY) = 5
[pid 5073] open("/home/gvwilson/lib/python2.1", O_RDONLY|O_NONBLOCK|O_DIRECTORY) = 6
[pid 5073] fstat(6, {st_mode=S_IFDIR|0755, st_size=10240, ...}) = 0
[pid 5073] fcntl(6, F_SETFD, FD_CLOEXEC) = 0
[pid 5073] brk(0x80d2000) = 0x80d2000
[pid 5073] getdents(6, /* 53 entries */, 3933) = 1168
[pid 5073] getdents(6, /* 52 entries */, 3933) = 1156
[pid 5073] close(6) = 0
[pid 5073] fstat(5, {st_mode=S_IFREG|0644, st_size=16300, ...}) = 0
[pid 5073] open("/home/gvwilson/lib/python2.1/os.pyc", O_RDONLY) = 6
[pid 5073] fstat(6, {st_mode=S_IFREG|0666, st_size=21279, ...}) = 0
[pid 5073] mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40013000
[pid 5073] read(6, "*\353\r\n\307\274\232:c\0\0\0\0\v\0\0\0s\336\10\0\0\177"..., 4096) = 4096
[pid 5073] fstat(6, {st_mode=S_IFREG|0666, st_size=21279, ...}) = 0
[pid 5073] brk(0x80d8000) = 0x80d8000
[pid 5073] read(6, "\0|\2\0\203\1\0\\\2\0}\2\0}\3\0n\1\0\1\177\257\0|\2\0o"..., 16384) = 16384
[pid 5073] read(6, "\0\0_spawnvefs\4\0\0\0paths\6\0\0\0spawnls"..., 4096) = 799
[pid 5073] read(6, "", 4096) = 0
[pid 5073] brk(0x80d9000) = 0x80d9000
[pid 5073] brk(0x80da000) = 0x80da000
[pid 5073] brk(0x80db000) = 0x80db000
[pid 5073] brk(0x80dc000) = 0x80dc000
[pid 5073] brk(0x80dd000) = 0x80dd000
[pid 5073] brk(0x80de000) = 0x80de000
[pid 5073] brk(0x80e2000) = 0x80e2000
[pid 5073] close(6) = 0
[pid 5073] munmap(0x40013000, 4096) = 0
[pid 5073] brk(0x80e3000) = 0x80e3000
[pid 5073] stat("/home/gvwilson/lib/python2.1/posixpath", 0xbfffcfa4) = -1 ENOENT (No such file or directory)
[pid 5073] open("/home/gvwilson/lib/python2.1/posixpath.so", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/home/gvwilson/lib/python2.1/posixpathmodule.so", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/home/gvwilson/lib/python2.1/posixpath.py", O_RDONLY) = 6
[pid 5073] open("/home/gvwilson/lib/python2.1", O_RDONLY|O_NONBLOCK|O_DIRECTORY) = 7
[pid 5073] fstat(7, {st_mode=S_IFDIR|0755, st_size=10240, ...}) = 0
[pid 5073] fcntl(7, F_SETFD, FD_CLOEXEC) = 0
[pid 5073] brk(0x80e5000) = 0x80e5000
[pid 5073] getdents(7, /* 53 entries */, 3933) = 1168
[pid 5073] getdents(7, /* 52 entries */, 3933) = 1156
[pid 5073] close(7) = 0
[pid 5073] fstat(6, {st_mode=S_IFREG|0644, st_size=11111, ...}) = 0
[pid 5073] open("/home/gvwilson/lib/python2.1/posixpath.pyc", O_RDONLY) = 7
[pid 5073] fstat(7, {st_mode=S_IFREG|0666, st_size=12385, ...}) = 0
[pid 5073] mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40013000
[pid 5073] read(7, "*\353\r\n\307\274\232:c\0\0\0\0\31\0\0\0s\261\1\0\0\177"..., 4096) = 4096
[pid 5073] fstat(7, {st_mode=S_IFREG|0666, st_size=12385, ...}) = 0
[pid 5073] read(7, "ponents\0\0\0\0i\0\0\0\0i\1\0\0\0N(\6\0\0\0s\1\0\0\0"..., 8192) = 8192
[pid 5073] read(7, "lib/python2.1/posixpath.pys\1\0\0\0?"..., 4096) = 97
[pid 5073] read(7, "", 4096) = 0
[pid 5073] brk(0x80e6000) = 0x80e6000
[pid 5073] brk(0x80e7000) = 0x80e7000
[pid 5073] brk(0x80ee000) = 0x80ee000
[pid 5073] close(7) = 0
[pid 5073] munmap(0x40013000, 4096) = 0
[pid 5073] stat("/home/gvwilson/lib/python2.1/stat", 0xbfffc128) = -1 ENOENT (No such file or directory)
[pid 5073] open("/home/gvwilson/lib/python2.1/stat.so", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/home/gvwilson/lib/python2.1/statmodule.so", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/home/gvwilson/lib/python2.1/stat.py", O_RDONLY) = 7
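(Aside: each of the .pyc reads above starts by pulling in the first few bytes of the file, which are a 4-byte magic number followed by the timestamp of the source file it was compiled from; that is also why the interpreter fstat()s the matching .py. A small illustrative check, assuming a stock Python 2.x imp module; the os.pyc path is copied from the trace and will differ on other machines:

import imp, struct

pyc = open("/home/gvwilson/lib/python2.1/os.pyc", "rb")   # path taken from the trace above
magic = pyc.read(4)                            # compiled-file magic number
mtime = struct.unpack("<l", pyc.read(4))[0]    # mtime of the os.py it was built from
pyc.close()

print magic == imp.get_magic(), mtime

If the recorded mtime does not match the .py file's stat() result, the interpreter recompiles instead of trusting the .pyc.)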
[pid 5073] open("/home/gvwilson/lib/python2.1", O_RDONLY|O_NONBLOCK|O_DIRECTORY) = 8
[pid 5073] fstat(8, {st_mode=S_IFDIR|0755, st_size=10240, ...}) = 0
[pid 5073] fcntl(8, F_SETFD, FD_CLOEXEC) = 0
[pid 5073] getdents(8, /* 53 entries */, 3933) = 1168
[pid 5073] getdents(8, /* 52 entries */, 3933) = 1156
[pid 5073] getdents(8, /* 53 entries */, 3933) = 1172
[pid 5073] close(8) = 0
[pid 5073] fstat(7, {st_mode=S_IFREG|0644, st_size=1667, ...}) = 0
[pid 5073] open("/home/gvwilson/lib/python2.1/stat.pyc", O_RDONLY) = 8
[pid 5073] fstat(8, {st_mode=S_IFREG|0666, st_size=3460, ...}) = 0
[pid 5073] mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40013000
[pid 5073] read(8, "*\353\r\n\311\274\232:c\0\0\0\0\1\0\0\0s\300\1\0\0\177"..., 4096) = 3460
[pid 5073] fstat(8, {st_mode=S_IFREG|0666, st_size=3460, ...}) = 0
[pid 5073] read(8, "", 4096) = 0
[pid 5073] close(8) = 0
[pid 5073] munmap(0x40013000, 4096) = 0
[pid 5073] close(7) = 0
[pid 5073] close(6) = 0
[pid 5073] stat("/home/gvwilson/lib/python2.1/UserDict", 0xbfffcfa4) = -1 ENOENT (No such file or directory)
[pid 5073] open("/home/gvwilson/lib/python2.1/UserDict.so", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/home/gvwilson/lib/python2.1/UserDictmodule.so", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/home/gvwilson/lib/python2.1/UserDict.py", O_RDONLY) = 6
[pid 5073] open("/home/gvwilson/lib/python2.1", O_RDONLY|O_NONBLOCK|O_DIRECTORY) = 7
[pid 5073] fstat(7, {st_mode=S_IFDIR|0755, st_size=10240, ...}) = 0
[pid 5073] fcntl(7, F_SETFD, FD_CLOEXEC) = 0
[pid 5073] getdents(7, /* 53 entries */, 3933) = 1168
[pid 5073] close(7) = 0
[pid 5073] fstat(6, {st_mode=S_IFREG|0644, st_size=1573, ...}) = 0
[pid 5073] open("/home/gvwilson/lib/python2.1/UserDict.pyc", O_RDONLY) = 7
[pid 5073] fstat(7, {st_mode=S_IFREG|0666, st_size=4341, ...}) = 0
[pid 5073] mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40013000
[pid 5073] read(7, "*\353\r\n\302\274\232:c\0\0\0\0\3\0\0\0s&\0\0\0\177\0\0"..., 4096) = 4096
[pid 5073] fstat(7, {st_mode=S_IFREG|0666, st_size=4341, ...}) = 0
[pid 5073] read(7, "s\7\0\0\0popitem(\0\0\0\0(\0\0\0\0(\0\0\0\0s(\0\0\0"..., 4096) = 245
[pid 5073] read(7, "", 4096) = 0
[pid 5073] brk(0x80ef000) = 0x80ef000
[pid 5073] close(7) = 0
[pid 5073] munmap(0x40013000, 4096) = 0
[pid 5073] close(6) = 0
[pid 5073] brk(0x80f0000) = 0x80f0000
[... brk() keeps growing the heap one page at a time through 0x80fc000 ...]
[pid 5073] brk(0x80fe000) = 0x80fe000
[pid 5073] close(5) = 0
[pid 5073] stat("/home/gvwilson/lib/python2.1/site-packages", {st_mode=S_IFDIR|0755, st_size=1024, ...}) = 0
[pid 5073] open("/home/gvwilson/lib/python2.1/site-packages", O_RDONLY|O_NONBLOCK|O_DIRECTORY) = 5
[pid 5073] fstat(5, {st_mode=S_IFDIR|0755, st_size=1024, ...}) = 0
[pid 5073] fcntl(5, F_SETFD, FD_CLOEXEC) = 0
[pid 5073] getdents(5, /* 4 entries */, 3933) = 68
[pid 5073] getdents(5, /* 0 entries */, 3933) = 0
[pid 5073] close(5) = 0
[pid 5073] stat("/home/gvwilson/lib/site-python", 0xbfffe9ec) = -1 ENOENT (No such file or directory)
[pid 5073] open("/usr/share/locale/locale.alias", O_RDONLY) = 5
[pid 5073] fstat(5, {st_mode=S_IFREG|0644, st_size=2174, ...}) = 0
[pid 5073] mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40013000
[pid 5073] read(5, "# Locale name alias data base.\n#"..., 4096) = 2174
[pid 5073] read(5, "", 4096) = 0
[pid 5073] close(5) = 0
[pid 5073] munmap(0x40013000, 4096) = 0
[pid 5073] open("/usr/share/i18n/locale.alias", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/usr/share/locale/en_US/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/usr/share/locale/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] stat("/home/gvwilson/lib/python2.1/sitecustomize", 0xbfffde20) = -1 ENOENT (No such file or directory)
[pid 5073] open("/home/gvwilson/lib/python2.1/sitecustomize.so", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/home/gvwilson/lib/python2.1/sitecustomizemodule.so", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/home/gvwilson/lib/python2.1/sitecustomize.py", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 5073] open("/home/gvwilson/lib/python2.1/sitecustomize.pyc", O_RDONLY) = -1 ENOENT (No such file or directory)
[... the same sitecustomize probes (stat, then .so / module.so / .py / .pyc opens) repeat under lib-dynload, plat-linux2, lib-tk and site-packages, each returning -1 ENOENT (No such file or directory) ...]
[pid 5073] close(4) = 0
[pid 5073] readlink("tryout.py", 0xbffff748, 1024) = -1 EINVAL (Invalid argument)
[pid 5073] ioctl(3, TCGETS, 0xbffffa9c) = -1 ENOTTY (Inappropriate ioctl for device)
[pid 5073] fstat(3, {st_mode=S_IFREG|0666, st_size=20, ...}) = 0
[pid 5073] mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40013000
[pid 5073] _llseek(3, 0, [0], SEEK_CUR) = 0
[pid 5073] read(3, "print \"We made it!\"\n", 4096) = 20
[pid 5073] _llseek(3, 20, [20], SEEK_SET) = 0
[pid 5073] brk(0x8101000) = 0x8101000
[pid 5073] read(3, "", 4096) = 0
[pid 5073] close(3) = 0
[pid 5073] munmap(0x40013000, 4096) = 0
[pid 5073] fstat(1, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
[pid 5073] mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40013000
[pid 5073] rt_sigaction(SIGINT, NULL, {0x40021460, [], 0x4000000}, 8) = 0
[pid 5073] rt_sigaction(SIGINT, {SIG_DFL}, NULL, 8) = 0
[pid 5073] write(1, "We made it!\n", 12) = 12
[pid 5071] <... read resumed> "We made it!\n", 8192) = 12
[pid 5071] read(6,
[pid 5073] munmap(0x40013000, 4096) = 0
[pid 5073] _exit(0) = ?
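(Aside: the round trip the trace shows for each child -- pipe(), fork(), dup2() of the pipe ends onto fds 0 and 1, a close() sweep over every descriptor up to 255, then execve("/bin/sh", ["-c", "python tryout.py"]) -- is the classic popen-style spawn sequence. A rough sketch of that pattern, assuming the parent is using something along the lines of the popen2 module's fork/exec helper; the helper name and the MAXFD value below are illustrative, not taken from the trace:

import os

MAXFD = 256          # matches the close(3)..close(255) sweep in the trace

def run_through_shell(cmd):
    child_stdin_r, child_stdin_w = os.pipe()     # pipe([3, 4])
    child_stdout_r, child_stdout_w = os.pipe()   # pipe([6, 7])
    pid = os.fork()                              # fork() = 5073
    if pid == 0:
        # child: wire the pipe ends onto stdin/stdout, as dup2(3, 0) / dup2(7, 1)
        os.dup2(child_stdin_r, 0)
        os.dup2(child_stdout_w, 1)
        # close everything else; descriptors that were never open fail with EBADF
        for fd in range(3, MAXFD):
            try:
                os.close(fd)
            except OSError:
                pass
        os.execv("/bin/sh", ["/bin/sh", "-c", cmd])  # execve("/bin/sh", ...)
    # parent: keep the write end of the child's stdin and the read end of its stdout
    os.close(child_stdin_r)
    os.close(child_stdout_w)
    output = os.fdopen(child_stdout_r).read()    # read() returns "We made it!\n"
    os.waitpid(pid, os.WNOHANG)                  # wait4(pid, ..., WNOHANG, NULL)
    return output

print run_through_shell("python tryout.py")

The long run of EBADF results earlier in the trace is just that close() loop hitting descriptors that were never open.)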
<... read resumed> "", 4096) = 0
--- SIGCHLD (Child exited) ---
read(6, "", 8192) = 0
open("printing.html", O_RDONLY) = 3
stat("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/drivers2", {st_mode=S_IFDIR|0777, st_size=1024, ...}) = 0
stat("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/drivers2/__init__.py", {st_mode=S_IFREG|0666, st_size=39, ...}) = 0
stat("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/drivers2/__init__", 0xbfffe354) = -1 ENOENT (No such file or directory)
open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/drivers2/__init__.so", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/drivers2/__init__module.so", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/drivers2/__init__.py", O_RDONLY) = 4
open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/drivers2", O_RDONLY|O_NONBLOCK|O_DIRECTORY) = 7
fstat(7, {st_mode=S_IFDIR|0777, st_size=1024, ...}) = 0
fcntl(7, F_SETFD, FD_CLOEXEC) = 0
brk(0x8132000) = 0x8132000
getdents(7, /* 8 entries */, 3933) = 188
close(7) = 0
fstat(4, {st_mode=S_IFREG|0666, st_size=39, ...}) = 0
open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/drivers2/__init__.pyc", O_RDONLY) = 7
fstat(7, {st_mode=S_IFREG|0666, st_size=211, ...}) = 0
mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40142000
read(7, "*\353\r\n\341\221\3109c\0\0\0\0\1\0\0\0s\20\0\0\0\177\0"..., 4096) = 211
fstat(7, {st_mode=S_IFREG|0666, st_size=211, ...}) = 0
read(7, "", 4096) = 0
close(7) = 0
munmap(0x40142000, 4096) = 0
close(4) = 0
stat("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/drivers2/drv_pyexpat", 0xbfffe79c) = -1 ENOENT (No such file or directory)
open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/drivers2/drv_pyexpat.so", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/drivers2/drv_pyexpatmodule.so", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/drivers2/drv_pyexpat.py", O_RDONLY) = 4
open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/drivers2", O_RDONLY|O_NONBLOCK|O_DIRECTORY) = 7
fstat(7, {st_mode=S_IFDIR|0777, st_size=1024, ...}) = 0
fcntl(7, F_SETFD, FD_CLOEXEC) = 0
getdents(7, /* 8 entries */, 3933) = 188
close(7) = 0
fstat(4, {st_mode=S_IFREG|0666, st_size=638, ...}) = 0
open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/drivers2/drv_pyexpat.pyc", O_RDONLY) = 7
fstat(7, {st_mode=S_IFREG|0666, st_size=407, ...}) = 0
mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40142000
read(7, "*\353\r\nC\377\3209c\0\0\0\0\2\0\0\0s \0\0\0\177\0\0d\0"..., 4096) = 407
fstat(7, {st_mode=S_IFREG|0666, st_size=407, ...}) = 0
read(7, "", 4096) = 0
close(7) = 0
munmap(0x40142000, 4096) = 0
stat("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/drivers2/xml", 0xbfffd920) = -1 ENOENT (No such file or directory)
open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/drivers2/xml.so", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/drivers2/x= mlmodule.so", O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/drivers2/x= ml.py", O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/drivers2/x= ml.pyc", O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= stat("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/expatreade= r", 0xbfffd920) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/expatreade= r.so", O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/expatreade= rmodule.so", O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/expatreade= r.py", O_RDONLY) =3D 7=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax", = O_RDONLY|O_NONBLOCK|O_DIRECTORY) =3D 8=0A= fstat(8, {st_mode=3DS_IFDIR|0777, st_size=3D1024, ...}) =3D 0=0A= fcntl(8, F_SETFD, FD_CLOEXEC) =3D 0=0A= getdents(8, /* 24 entries */, 3933) =3D 556=0A= close(8) =3D 0=0A= fstat(7, {st_mode=3DS_IFREG|0666, st_size=3D8257, ...}) =3D 0=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/expatreade= r.pyc", O_RDONLY) =3D 8=0A= fstat(8, {st_mode=3DS_IFREG|0666, st_size=3D12826, ...}) =3D 0=0A= mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = =3D 0x40142000=0A= read(8, "*\353\r\n|\216r:c\0\0\0\0\5\0\0\0sA\1\0\0\177\0\0d\0\0"..., = 4096) =3D 4096=0A= fstat(8, {st_mode=3DS_IFREG|0666, st_size=3D12826, ...}) =3D 0=0A= read(8, "\0\0\0resets\r\0\0\0_cont_handlers\r\0\0\0s"..., 8192) =3D = 8192=0A= read(8, "\16\0\0\0AttributesImpls\20\0\0\0Attribute"..., 4096) =3D = 538=0A= read(8, "", 4096) =3D 0=0A= close(8) =3D 0=0A= munmap(0x40142000, 4096) =3D 0=0A= stat("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/xml", = 0xbfffcaa4) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/xml.so", = O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/xmlmodule.= so", O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/xml.py", = O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/xml.pyc", = O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= stat("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/parsers", = {st_mode=3DS_IFDIR|0777, st_size=3D1024, ...}) =3D 0=0A= stat("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/parsers/__init= __.py", {st_mode=3DS_IFREG|0666, st_size=3D43, ...}) =3D 0=0A= stat("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/parsers/__init= __", 0xbfffc65c) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/parsers/__init= __.so", O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/parsers/__init= __module.so", O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/parsers/__init= __.py", O_RDONLY) =3D 8=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/parsers", = O_RDONLY|O_NONBLOCK|O_DIRECTORY) =3D 9=0A= fstat(9, {st_mode=3DS_IFDIR|0777, st_size=3D1024, ...}) =3D 
0=0A= fcntl(9, F_SETFD, FD_CLOEXEC) =3D 0=0A= brk(0x8134000) =3D 0x8134000=0A= getdents(9, /* 11 entries */, 3933) =3D 228=0A= close(9) =3D 0=0A= fstat(8, {st_mode=3DS_IFREG|0666, st_size=3D43, ...}) =3D 0=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/parsers/__init= __.pyc", O_RDONLY) =3D 9=0A= fstat(9, {st_mode=3DS_IFREG|0666, st_size=3D221, ...}) =3D 0=0A= mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = =3D 0x40142000=0A= read(9, "*\353\r\nn\200\221:c\0\0\0\0\3\0\0\0s\31\0\0\0\177\0\0"..., = 4096) =3D 221=0A= fstat(9, {st_mode=3DS_IFREG|0666, st_size=3D221, ...}) =3D 0=0A= read(9, "", 4096) =3D 0=0A= close(9) =3D 0=0A= munmap(0x40142000, 4096) =3D 0=0A= close(8) =3D 0=0A= stat("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/parsers/expat"= , 0xbfffcaa4) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/parsers/expat.= so", O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/parsers/expatm= odule.so", O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/parsers/expat.= py", O_RDONLY) =3D 8=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/parsers", = O_RDONLY|O_NONBLOCK|O_DIRECTORY) =3D 9=0A= fstat(9, {st_mode=3DS_IFDIR|0777, st_size=3D1024, ...}) =3D 0=0A= fcntl(9, F_SETFD, FD_CLOEXEC) =3D 0=0A= getdents(9, /* 11 entries */, 3933) =3D 228=0A= close(9) =3D 0=0A= fstat(8, {st_mode=3DS_IFREG|0666, st_size=3D112, ...}) =3D 0=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/parsers/expat.= pyc", O_RDONLY) =3D 9=0A= fstat(9, {st_mode=3DS_IFREG|0666, st_size=3D315, ...}) =3D 0=0A= mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = =3D 0x40142000=0A= read(9, "*\353\r\n\37s\3179c\0\0\0\0\1\0\0\0s#\0\0\0\177\0\0d\0"..., = 4096) =3D 315=0A= fstat(9, {st_mode=3DS_IFREG|0666, st_size=3D315, ...}) =3D 0=0A= read(9, "", 4096) =3D 0=0A= close(9) =3D 0=0A= munmap(0x40142000, 4096) =3D 0=0A= stat("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/parsers/pyexpa= t", 0xbfffbc28) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/parsers/pyexpa= t.so", O_RDONLY) =3D 9=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/parsers", = O_RDONLY|O_NONBLOCK|O_DIRECTORY) =3D 10=0A= fstat(10, {st_mode=3DS_IFDIR|0777, st_size=3D1024, ...}) =3D 0=0A= fcntl(10, F_SETFD, FD_CLOEXEC) =3D 0=0A= getdents(10, /* 11 entries */, 3933) =3D 228=0A= close(10) =3D 0=0A= fstat(9, {st_mode=3DS_IFREG|0777, st_size=3D380343, ...}) =3D 0=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/parsers/pyexpa= t.so", O_RDONLY) =3D 10=0A= fstat(10, {st_mode=3DS_IFREG|0777, st_size=3D380343, ...}) =3D 0=0A= read(10, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\0008\0"..., = 4096) =3D 4096=0A= mmap(0, 141796, PROT_READ|PROT_EXEC, MAP_PRIVATE, 10, 0) =3D = 0x40144000=0A= mprotect(0x40164000, 10724, PROT_NONE) =3D 0=0A= mmap(0x40164000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, = 10, 0x1f000) =3D 0x40164000=0A= close(10) =3D 0=0A= close(9) =3D 0=0A= close(8) =3D 0=0A= stat("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/saxutils",= 0xbfffcaa4) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/saxutils.s= o", O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= 
open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/saxutilsmo= dule.so", O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/saxutils.p= y", O_RDONLY) =3D 8=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax", = O_RDONLY|O_NONBLOCK|O_DIRECTORY) =3D 9=0A= fstat(9, {st_mode=3DS_IFDIR|0777, st_size=3D1024, ...}) =3D 0=0A= fcntl(9, F_SETFD, FD_CLOEXEC) =3D 0=0A= brk(0x8136000) =3D 0x8136000=0A= getdents(9, /* 24 entries */, 3933) =3D 556=0A= close(9) =3D 0=0A= fstat(8, {st_mode=3DS_IFREG|0666, st_size=3D19814, ...}) =3D 0=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/saxutils.p= yc", O_RDONLY) =3D 9=0A= fstat(9, {st_mode=3DS_IFREG|0666, st_size=3D41178, ...}) =3D 0=0A= mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = =3D 0x40142000=0A= read(9, "*\353\r\n\370\216r:c\0\0\0\0\17\0\0\0s\211\2\0\0\177\0"..., = 4096) =3D 4096=0A= fstat(9, {st_mode=3DS_IFREG|0666, st_size=3D41178, ...}) =3D 0=0A= brk(0x8141000) =3D 0x8141000=0A= read(9, "\3\1\f\1c\2\0\2\0\4\0\3\0sJ\0\0\0\177I\0\177J\0|\0\0i\1"..., = 36864) =3D 36864=0A= read(9, "s\f\0\0\0_StringTypess\v\0\0\0ErrorRaise"..., 4096) =3D 218=0A= read(9, "", 4096) =3D 0=0A= brk(0x8142000) =3D 0x8142000=0A= brk(0x8143000) =3D 0x8143000=0A= brk(0x8144000) =3D 0x8144000=0A= brk(0x8145000) =3D 0x8145000=0A= brk(0x8146000) =3D 0x8146000=0A= brk(0x8147000) =3D 0x8147000=0A= brk(0x8148000) =3D 0x8148000=0A= brk(0x8149000) =3D 0x8149000=0A= brk(0x814a000) =3D 0x814a000=0A= brk(0x814b000) =3D 0x814b000=0A= brk(0x814c000) =3D 0x814c000=0A= brk(0x814d000) =3D 0x814d000=0A= brk(0x814e000) =3D 0x814e000=0A= brk(0x814f000) =3D 0x814f000=0A= close(9) =3D 0=0A= munmap(0x40142000, 4096) =3D 0=0A= stat("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/urllib", = 0xbfffbc28) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/urllib.so"= , O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/urllibmodu= le.so", O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/urllib.py"= , O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/urllib.pyc= ", O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= stat("urllib", 0xbfffbc28) =3D -1 ENOENT (No such file or = directory)=0A= open("urllib.so", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= open("urllibmodule.so", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= open("urllib.py", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= open("urllib.pyc", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= stat("/home/gvwilson/lib/python2.1/urllib", 0xbfffbc28) =3D -1 ENOENT = (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/urllib.so", O_RDONLY) =3D -1 ENOENT = (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/urllibmodule.so", O_RDONLY) =3D -1 = ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/urllib.py", O_RDONLY) =3D 9=0A= open("/home/gvwilson/lib/python2.1", O_RDONLY|O_NONBLOCK|O_DIRECTORY) = =3D 10=0A= fstat(10, {st_mode=3DS_IFDIR|0755, st_size=3D10240, ...}) =3D 0=0A= fcntl(10, F_SETFD, FD_CLOEXEC) =3D 0=0A= getdents(10, /* 53 entries */, 3933) =3D 1168=0A= getdents(10, /* 52 entries */, 3933) =3D 1156=0A= getdents(10, /* 53 entries */, 3933) =3D 
1172=0A= close(10) =3D 0=0A= fstat(9, {st_mode=3DS_IFREG|0644, st_size=3D46705, ...}) =3D 0=0A= open("/home/gvwilson/lib/python2.1/urllib.pyc", O_RDONLY) =3D 10=0A= fstat(10, {st_mode=3DS_IFREG|0666, st_size=3D54647, ...}) =3D 0=0A= mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = =3D 0x40142000=0A= read(10, "*\353\r\n\312\274\232:c\0\0\0\0\t\0\0\0s1\4\0\0\177\0\0"..., = 4096) =3D 4096=0A= fstat(10, {st_mode=3DS_IFREG|0666, st_size=3D54647, ...}) =3D 0=0A= brk(0x815d000) =3D 0x815d000=0A= read(10, "\n\0\177_\0d\5\0|\0\0i\v\0f\2\0g\1\0|\0\0_\f\0\177`\0g"..., = 49152) =3D 49152=0A= read(10, "itports\n\0\0\0_nportprogs\n\0\0\0splitn"..., 4096) =3D = 1399=0A= read(10, "", 4096) =3D 0=0A= brk(0x816a000) =3D 0x816a000=0A= brk(0x816b000) =3D 0x816b000=0A= brk(0x816c000) =3D 0x816c000=0A= brk(0x816d000) =3D 0x816d000=0A= close(10) =3D 0=0A= munmap(0x40142000, 4096) =3D 0=0A= stat("socket", 0xbfffadac) =3D -1 ENOENT (No such file or = directory)=0A= open("socket.so", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= open("socketmodule.so", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= open("socket.py", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= open("socket.pyc", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= stat("/home/gvwilson/lib/python2.1/socket", 0xbfffadac) =3D -1 ENOENT = (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/socket.so", O_RDONLY) =3D -1 ENOENT = (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/socketmodule.so", O_RDONLY) =3D -1 = ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/socket.py", O_RDONLY) =3D 10=0A= open("/home/gvwilson/lib/python2.1", O_RDONLY|O_NONBLOCK|O_DIRECTORY) = =3D 11=0A= fstat(11, {st_mode=3DS_IFDIR|0755, st_size=3D10240, ...}) =3D 0=0A= fcntl(11, F_SETFD, FD_CLOEXEC) =3D 0=0A= getdents(11, /* 53 entries */, 3933) =3D 1168=0A= getdents(11, /* 52 entries */, 3933) =3D 1156=0A= getdents(11, /* 53 entries */, 3933) =3D 1172=0A= close(11) =3D 0=0A= fstat(10, {st_mode=3DS_IFREG|0644, st_size=3D7402, ...}) =3D 0=0A= open("/home/gvwilson/lib/python2.1/socket.pyc", O_RDONLY) =3D 11=0A= fstat(11, {st_mode=3DS_IFREG|0666, st_size=3D10086, ...}) =3D 0=0A= mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = =3D 0x40142000=0A= read(11, "*\353\r\n\310\274\232:c\0\0\0\0\5\0\0\0s\34\2\0\0\177\0"..., = 4096) =3D 4096=0A= fstat(11, {st_mode=3DS_IFREG|0666, st_size=3D10086, ...}) =3D 0=0A= read(11, "\0d\0\0S(\2\0\0\0Ni\0\0\0\0(\2\0\0\0s\4\0\0\0selfs\5"..., = 4096) =3D 4096=0A= read(11, "\1!\1\v\0\10\1\17\1\20\1\24\1\20\1\10\1\20\1\22\1\24\1"..., = 4096) =3D 1894=0A= read(11, "", 4096) =3D 0=0A= close(11) =3D 0=0A= munmap(0x40142000, 4096) =3D 0=0A= stat("_socket", 0xbfff9f30) =3D -1 ENOENT (No such file or = directory)=0A= open("_socket.so", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= open("_socketmodule.so", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= open("_socket.py", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= open("_socket.pyc", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= stat("/home/gvwilson/lib/python2.1/_socket", 0xbfff9f30) =3D -1 ENOENT = (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/_socket.so", O_RDONLY) =3D -1 ENOENT = (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/_socketmodule.so", O_RDONLY) =3D -1 = ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/_socket.py", O_RDONLY) =3D -1 ENOENT = (No such 
file or directory)=0A= open("/home/gvwilson/lib/python2.1/_socket.pyc", O_RDONLY) =3D -1 = ENOENT (No such file or directory)=0A= stat("/home/gvwilson/lib/python2.1/lib-dynload/_socket", 0xbfff9f30) = =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/lib-dynload/_socket.so", O_RDONLY) = =3D 11=0A= open("/home/gvwilson/lib/python2.1/lib-dynload", = O_RDONLY|O_NONBLOCK|O_DIRECTORY) =3D 12=0A= fstat(12, {st_mode=3DS_IFDIR|0777, st_size=3D1024, ...}) =3D 0=0A= fcntl(12, F_SETFD, FD_CLOEXEC) =3D 0=0A= getdents(12, /* 54 entries */, 3933) =3D 1160=0A= close(12) =3D 0=0A= fstat(11, {st_mode=3DS_IFREG|0777, st_size=3D154739, ...}) =3D 0=0A= open("/home/gvwilson/lib/python2.1/lib-dynload/_socket.so", O_RDONLY) = =3D 12=0A= fstat(12, {st_mode=3DS_IFREG|0777, st_size=3D154739, ...}) =3D 0=0A= read(12, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\20\36\0"..., = 4096) =3D 4096=0A= mmap(0, 33140, PROT_READ|PROT_EXEC, MAP_PRIVATE, 12, 0) =3D = 0x40167000=0A= mprotect(0x4016d000, 8564, PROT_NONE) =3D 0=0A= mmap(0x4016d000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, = 12, 0x5000) =3D 0x4016d000=0A= close(12) =3D 0=0A= open("/usr/local/ace/ace/libssl.so.0", O_RDONLY) =3D -1 ENOENT (No such = file or directory)=0A= open("/etc/ld.so.cache", O_RDONLY) =3D 12=0A= fstat(12, {st_mode=3DS_IFREG|0644, st_size=3D25676, ...}) =3D 0=0A= mmap(0, 25676, PROT_READ, MAP_PRIVATE, 12, 0) =3D 0x40170000=0A= close(12) =3D 0=0A= open("/usr/lib/libssl.so.0", O_RDONLY) =3D 12=0A= fstat(12, {st_mode=3DS_IFREG|0755, st_size=3D181336, ...}) =3D 0=0A= read(12, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\340\227"..., = 4096) =3D 4096=0A= mmap(0, 182788, PROT_READ|PROT_EXEC, MAP_PRIVATE, 12, 0) =3D = 0x40177000=0A= mprotect(0x401a1000, 10756, PROT_NONE) =3D 0=0A= mmap(0x401a1000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, = 12, 0x29000) =3D 0x401a1000=0A= close(12) =3D 0=0A= open("/usr/local/ace/ace/libcrypto.so.0", O_RDONLY) =3D -1 ENOENT (No = such file or directory)=0A= open("/usr/lib/libcrypto.so.0", O_RDONLY) =3D 12=0A= fstat(12, {st_mode=3DS_IFREG|0755, st_size=3D788568, ...}) =3D 0=0A= read(12, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0`r\2\000"..., = 4096) =3D 4096=0A= mmap(0, 767656, PROT_READ|PROT_EXEC, MAP_PRIVATE, 12, 0) =3D = 0x401a4000=0A= mprotect(0x40256000, 38568, PROT_NONE) =3D 0=0A= mmap(0x40256000, 32768, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, = 12, 0xb1000) =3D 0x40256000=0A= mmap(0x4025e000, 5800, PROT_READ|PROT_WRITE, = MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) =3D 0x4025e000=0A= close(12) =3D 0=0A= mprotect(0x401a4000, 729088, PROT_READ|PROT_WRITE) =3D 0=0A= mprotect(0x401a4000, 729088, PROT_READ|PROT_EXEC) =3D 0=0A= munmap(0x40170000, 25676) =3D 0=0A= close(11) =3D 0=0A= uname({sys=3D"Linux", node=3D"akbar.nevex.com", ...}) =3D 0=0A= brk(0x816f000) =3D 0x816f000=0A= brk(0x8171000) =3D 0x8171000=0A= brk(0x8173000) =3D 0x8173000=0A= close(10) =3D 0=0A= close(9) =3D 0=0A= stat("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/urlparse",= 0xbfffbc28) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/urlparse.s= o", O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/urlparsemo= dule.so", O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/urlparse.p= y", O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= 
open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/urlparse.p= yc", O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= stat("urlparse", 0xbfffbc28) =3D -1 ENOENT (No such file or = directory)=0A= open("urlparse.so", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= open("urlparsemodule.so", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= open("urlparse.py", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= open("urlparse.pyc", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= stat("/home/gvwilson/lib/python2.1/urlparse", 0xbfffbc28) =3D -1 ENOENT = (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/urlparse.so", O_RDONLY) =3D -1 = ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/urlparsemodule.so", O_RDONLY) =3D -1 = ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/urlparse.py", O_RDONLY) =3D 9=0A= open("/home/gvwilson/lib/python2.1", O_RDONLY|O_NONBLOCK|O_DIRECTORY) = =3D 10=0A= fstat(10, {st_mode=3DS_IFDIR|0755, st_size=3D10240, ...}) =3D 0=0A= fcntl(10, F_SETFD, FD_CLOEXEC) =3D 0=0A= getdents(10, /* 53 entries */, 3933) =3D 1168=0A= getdents(10, /* 52 entries */, 3933) =3D 1156=0A= getdents(10, /* 53 entries */, 3933) =3D 1172=0A= close(10) =3D 0=0A= fstat(9, {st_mode=3DS_IFREG|0644, st_size=3D8619, ...}) =3D 0=0A= open("/home/gvwilson/lib/python2.1/urlparse.pyc", O_RDONLY) =3D 10=0A= fstat(10, {st_mode=3DS_IFREG|0666, st_size=3D9090, ...}) =3D 0=0A= mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = =3D 0x40142000=0A= read(10, "*\353\r\n\312\274\232:c\0\0\0\0\16\0\0\0s\237\1\0\0\177"..., = 4096) =3D 4096=0A= fstat(10, {st_mode=3DS_IFREG|0666, st_size=3D9090, ...}) =3D 0=0A= read(10, "\0\0\fo\v\0\1\177\205\0|\1\0Sn\1\0\1\177\206\0|\1\0\fo"..., = 4096) =3D 4096=0A= read(10, "\3\0\0\0abss\7\0\0\0wrappeds\3\0\0\0len(\v\0\0\0"..., 4096) = =3D 898=0A= read(10, "", 4096) =3D 0=0A= close(10) =3D 0=0A= munmap(0x40142000, 4096) =3D 0=0A= close(9) =3D 0=0A= stat("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/codecs", = 0xbfffbc28) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/codecs.so"= , O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/codecsmodu= le.so", O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/codecs.py"= , O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/site-packages/_xmlplus/sax/codecs.pyc= ", O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A= stat("codecs", 0xbfffbc28) =3D -1 ENOENT (No such file or = directory)=0A= open("codecs.so", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= open("codecsmodule.so", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= open("codecs.py", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= open("codecs.pyc", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= stat("/home/gvwilson/lib/python2.1/codecs", 0xbfffbc28) =3D -1 ENOENT = (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/codecs.so", O_RDONLY) =3D -1 ENOENT = (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/codecsmodule.so", O_RDONLY) =3D -1 = ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/codecs.py", O_RDONLY) =3D 9=0A= open("/home/gvwilson/lib/python2.1", O_RDONLY|O_NONBLOCK|O_DIRECTORY) = =3D 10=0A= fstat(10, 
{st_mode=3DS_IFDIR|0755, st_size=3D10240, ...}) =3D 0=0A= fcntl(10, F_SETFD, FD_CLOEXEC) =3D 0=0A= getdents(10, /* 53 entries */, 3933) =3D 1168=0A= close(10) =3D 0=0A= fstat(9, {st_mode=3DS_IFREG|0644, st_size=3D17775, ...}) =3D 0=0A= open("/home/gvwilson/lib/python2.1/codecs.pyc", O_RDONLY) =3D 10=0A= fstat(10, {st_mode=3DS_IFREG|0666, st_size=3D22453, ...}) =3D 0=0A= mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = =3D 0x40142000=0A= read(10, "*\353\r\n\304\274\232:c\0\0\0\0\v\0\0\0s\325\1\0\0\177"..., = 4096) =3D 4096=0A= fstat(10, {st_mode=3DS_IFREG|0666, st_size=3D22453, ...}) =3D 0=0A= brk(0x8179000) =3D 0x8179000=0A= read(10, "ors(\0\0\0\0(\0\0\0\0s&\0\0\0/home/gvwilson"..., 16384) =3D = 16384=0A= read(10, "e given to define the error hand"..., 4096) =3D 1973=0A= read(10, "", 4096) =3D 0=0A= brk(0x817a000) =3D 0x817a000=0A= brk(0x817b000) =3D 0x817b000=0A= brk(0x817c000) =3D 0x817c000=0A= brk(0x817d000) =3D 0x817d000=0A= brk(0x817e000) =3D 0x817e000=0A= brk(0x817f000) =3D 0x817f000=0A= close(10) =3D 0=0A= munmap(0x40142000, 4096) =3D 0=0A= stat("struct", 0xbfffadac) =3D -1 ENOENT (No such file or = directory)=0A= open("struct.so", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= open("structmodule.so", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= open("struct.py", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= open("struct.pyc", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= stat("/home/gvwilson/lib/python2.1/struct", 0xbfffadac) =3D -1 ENOENT = (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/struct.so", O_RDONLY) =3D -1 ENOENT = (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/structmodule.so", O_RDONLY) =3D -1 = ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/struct.py", O_RDONLY) =3D -1 ENOENT = (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/struct.pyc", O_RDONLY) =3D -1 ENOENT = (No such file or directory)=0A= stat("/home/gvwilson/lib/python2.1/lib-dynload/struct", 0xbfffadac) =3D = -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/lib-dynload/struct.so", O_RDONLY) = =3D 10=0A= open("/home/gvwilson/lib/python2.1/lib-dynload", = O_RDONLY|O_NONBLOCK|O_DIRECTORY) =3D 11=0A= fstat(11, {st_mode=3DS_IFDIR|0777, st_size=3D1024, ...}) =3D 0=0A= fcntl(11, F_SETFD, FD_CLOEXEC) =3D 0=0A= getdents(11, /* 54 entries */, 3933) =3D 1160=0A= close(11) =3D 0=0A= fstat(10, {st_mode=3DS_IFREG|0777, st_size=3D66988, ...}) =3D 0=0A= open("/home/gvwilson/lib/python2.1/lib-dynload/struct.so", O_RDONLY) = =3D 11=0A= fstat(11, {st_mode=3DS_IFREG|0777, st_size=3D66988, ...}) =3D 0=0A= read(11, = "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0P\17\0\000"..., 4096) = =3D 4096=0A= mmap(0, 19000, PROT_READ|PROT_EXEC, MAP_PRIVATE, 11, 0) =3D = 0x40170000=0A= mprotect(0x40173000, 6712, PROT_NONE) =3D 0=0A= mmap(0x40173000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 11, = 0x2000) =3D 0x40173000=0A= close(11) =3D 0=0A= close(10) =3D 0=0A= stat("_codecs", 0xbfffadac) =3D -1 ENOENT (No such file or = directory)=0A= open("_codecs.so", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= open("_codecsmodule.so", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= open("_codecs.py", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= open("_codecs.pyc", O_RDONLY) =3D -1 ENOENT (No such file or = directory)=0A= stat("/home/gvwilson/lib/python2.1/_codecs", 0xbfffadac) =3D -1 ENOENT = (No such file or directory)=0A= 
open("/home/gvwilson/lib/python2.1/_codecs.so", O_RDONLY) =3D -1 ENOENT = (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/_codecsmodule.so", O_RDONLY) =3D -1 = ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/_codecs.py", O_RDONLY) =3D -1 ENOENT = (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/_codecs.pyc", O_RDONLY) =3D -1 = ENOENT (No such file or directory)=0A= stat("/home/gvwilson/lib/python2.1/lib-dynload/_codecs", 0xbfffadac) = =3D -1 ENOENT (No such file or directory)=0A= open("/home/gvwilson/lib/python2.1/lib-dynload/_codecs.so", O_RDONLY) = =3D 10=0A= open("/home/gvwilson/lib/python2.1/lib-dynload", = O_RDONLY|O_NONBLOCK|O_DIRECTORY) =3D 11=0A= fstat(11, {st_mode=3DS_IFDIR|0777, st_size=3D1024, ...}) =3D 0=0A= fcntl(11, F_SETFD, FD_CLOEXEC) =3D 0=0A= getdents(11, /* 54 entries */, 3933) =3D 1160=0A= close(11) =3D 0=0A= fstat(10, {st_mode=3DS_IFREG|0777, st_size=3D49229, ...}) =3D 0=0A= open("/home/gvwilson/lib/python2.1/lib-dynload/_codecs.so", O_RDONLY) = =3D 11=0A= fstat(11, {st_mode=3DS_IFREG|0777, st_size=3D49229, ...}) =3D 0=0A= read(11, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0p\r\0\000"..., = 4096) =3D 4096=0A= mmap(0, 12844, PROT_READ|PROT_EXEC, MAP_PRIVATE, 11, 0) =3D = 0x40260000=0A= mprotect(0x40262000, 4652, PROT_NONE) =3D 0=0A= mmap(0x40262000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 11, = 0x1000) =3D 0x40262000=0A= close(11) =3D 0=0A= close(10) =3D 0=0A= close(9) =3D 0=0A= close(8) =3D 0=0A= close(7) =3D 0=0A= close(4) =3D 0=0A= brk(0x8190000) =3D 0x8190000=0A= read(3, "\n\n
=0A=
[pid  5071] read(7,  =0A=
[pid  5074] <... close resumed> )       =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(125)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(126)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(127)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(128)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(129)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(130)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(131)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(132)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(133)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(134)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(135)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(136)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(137)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(138)                  =3D -1 EBADF (Bad file descripto=
r)=0A=
[pid  5074] close(139)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(140)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(141)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(142)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(143)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(144)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(145)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(146)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(147)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(148)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(149)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(150)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(151)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(152)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(153)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(154)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(155)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(156)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(157)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(158)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(159)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(160)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(161)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(162)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(163)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(164)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(165)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(166)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(167)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(168)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(169)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(170)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(171)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(172)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(173)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(174)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(175)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(176)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(177)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(178)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(179)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(180)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(181)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(182)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(183)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(184)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(185)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(186)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(187)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(188)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(189)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(190)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(191)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(192)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(193)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(194)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(195)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(196)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(197)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(198)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(199)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(200)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(201)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(202)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(203)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(204)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(205)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(206)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(207)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(208)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(209)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(210)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(211)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(212)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(213)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(214)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(215)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(216)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(217)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(218)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(219)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(220)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(221)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(222)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(223)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(224)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(225)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(226)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(227)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(228)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(229)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(230)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(231)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(232)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(233)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(234)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(235)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(236)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(237)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(238)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(239)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(240)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(241)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(242)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(243)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(244)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(245)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(246)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(247)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(248)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(249)                  =3D -1 EBADF (Bad file descripto=
r)=0A=
[pid  5074] close(250)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(251)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(252)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(253)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(254)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] close(255)                  =3D -1 EBADF (Bad file =
descriptor)=0A=
[pid  5074] stat("tempfile", 0xbfffdbb0) =3D -1 ENOENT (No such file or =
directory)=0A=
[pid  5074] open("tempfile.so", O_RDONLY) =3D -1 ENOENT (No such file =
or directory)=0A=
[pid  5074] open("tempfilemodule.so", O_RDONLY) =3D -1 ENOENT (No such =
file or directory)=0A=
[pid  5074] open("tempfile.py", O_RDONLY) =3D -1 ENOENT (No such file =
or directory)=0A=
[pid  5074] open("tempfile.pyc", O_RDONLY) =3D -1 ENOENT (No such file =
or directory)=0A=
[pid  5074] stat("/home/gvwilson/lib/python2.1/tempfile", 0xbfffdbb0) =
=3D -1 ENOENT (No such file or directory)=0A=
[pid  5074] open("/home/gvwilson/lib/python2.1/tempfile.so", O_RDONLY) =
=3D -1 ENOENT (No such file or directory)=0A=
[pid  5074] open("/home/gvwilson/lib/python2.1/tempfilemodule.so", =
O_RDONLY) =3D -1 ENOENT (No such file or directory)=0A=
[pid  5074] open("/home/gvwilson/lib/python2.1/tempfile.py", O_RDONLY) =
=3D 3=0A=
[pid  5074] open("/home/gvwilson/lib/python2.1", =
O_RDONLY|O_NONBLOCK|O_DIRECTORY) =3D 4=0A=
[pid  5074] fstat(4, {st_mode=3DS_IFDIR|0755, st_size=3D10240, ...}) =
=3D 0=0A=
[pid  5074] fcntl(4, F_SETFD, FD_CLOEXEC) =3D 0=0A=
[pid  5074] getdents(4, /* 53 entries */, 3933) =3D 1168=0A=
[pid  5074] getdents(4, /* 52 entries */, 3933) =3D 1156=0A=
[pid  5074] getdents(4, /* 53 entries */, 3933) =3D 1172=0A=
[pid  5074] close(4)                    =3D 0=0A=
[pid  5074] fstat(3, {st_mode=3DS_IFREG|0644, st_size=3D6279, ...}) =3D =
0=0A=
[pid  5074] open("/home/gvwilson/lib/python2.1/tempfile.pyc", O_RDONLY) =
=3D 4=0A=
[pid  5074] fstat(4, {st_mode=3DS_IFREG|0666, st_size=3D7208, ...}) =3D =
0=0A=
[pid  5074] mmap(0, 4096, PROT_READ|PROT_WRITE, =
MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =3D 0x40142000=0A=
[pid  5074] read(4, =
"*\353\r\n\311\274\232:c\0\0\0\0\7\0\0\0s\212\1\0\0\177"..., 4096) =3D =
4096=0A=
[pid  5074] fstat(4, {st_mode=3DS_IFREG|0666, st_size=3D7208, ...}) =3D =
0=0A=
[pid  5074] read(4, =
"(\1\0\0\0s\4\0\0\0self(\0\0\0\0(\0\0\0\0s(\0\0\0/ho"..., 4096) =3D =
3112=0A=
[pid  5074] read(4, "", 4096)           =3D 0=0A=
[pid  5074] close(4)                    =3D 0=0A=
[pid  5074] munmap(0x40142000, 4096)    =3D 0=0A=
[pid  5074] close(3)                    =3D 0=0A=
[pid  5074] getcwd("/a/akbar/home/gvwilson/p2", 1026) =3D 26=0A=
[pid  5074] getpid()                    =3D 5074=0A=
[pid  5074] open("/var/tmp/@5074.test", O_RDWR|O_CREAT|O_EXCL, 0700) =
=3D 3=0A=
[pid  5074] fcntl(3, F_GETFL)           =3D 0x2 (flags O_RDWR)=0A=
[pid  5074] fstat(3, {st_mode=3DS_IFREG|0700, st_size=3D0, ...}) =3D =
0=0A=
[pid  5074] mmap(0, 4096, PROT_READ|PROT_WRITE, =
MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =3D 0x40142000=0A=
[pid  5074] _llseek(3, 0, [0], SEEK_CUR) =3D 0=0A=
[pid  5074] write(3, "blat", 4)         =3D 4=0A=
[pid  5074] close(3)                    =3D 0=0A=
[pid  5074] munmap(0x40142000, 4096)    =3D 0=0A=
[pid  5074] unlink("/var/tmp/@5074.test") =3D 0=0A=
[pid  5074] getpid()                    =3D 5074=0A=
[pid  5074] stat("/var/tmp/@5074.0", 0xbfffe67c) =3D -1 ENOENT (No such =
file or directory)=0A=
[pid  5074] getpid()                    =3D 5074=0A=
[pid  5074] rt_sigaction(SIGRT_0, {SIG_DFL}, NULL, 8) =3D 0=0A=
[pid  5074] rt_sigaction(SIGRT_1, {SIG_DFL}, NULL, 8) =3D 0=0A=
[pid  5074] rt_sigaction(SIGRT_2, {SIG_DFL}, NULL, 8) =3D 0=0A=
[pid  5074] execve("/var/tmp/@5074.0", ["blah"], [/* 30 vars */]) =3D =
-1 ENOENT (No such file or directory)=0A=
[pid  5074] _exit(1)                    =3D ?=0A=
<... read resumed> "", 8192)            =3D 0=0A=
--- SIGCHLD (Child exited) ---=0A=
read(3, "", 65516)                      =3D 0=0A=
close(3)                                =3D 0=0A=
rt_sigaction(SIGINT, NULL, {0x40021460, [], 0x4000000}, 8) =3D 0=0A=
rt_sigaction(SIGINT, {SIG_DFL}, NULL, 8) =3D 0=0A=
close(5)                                =3D 0=0A=
munmap(0x40141000, 4096)                =3D 0=0A=
close(7)                                =3D 0=0A=
munmap(0x40143000, 4096)                =3D 0=0A=
write(1, "Running popen2 directly, result "..., 228Running popen2 =
directly, result is ['We made it!\n']=0A=
using just a class, shell command is 'python tryout.py'=0A=
using just a class, result is ['We made it!\n']=0A=
using SAX, shell command is 'python tryout.py'=0A=
using SAX, result is []=0A=
) =3D 228=0A=
munmap(0x40019000, 4096)                =3D 0=0A=
_exit(0)                                =3D ?=0A=

[Attachment: tryout.py]

print "We made it!"


From paulp@ActiveState.com  Mon Feb 26 23:42:38 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Mon, 26 Feb 2001 15:42:38 -0800
Subject: [Python-Dev] first correct explanation wins a beer...
References: <930BBCA4CEBBD411BE6500508BB3328F1ABF07@nsamcanms1.ca.baltimore.com>
Message-ID: <3A9AE9EE.EBB27F89@ActiveState.com>

My guess: Unicode. Try casting to an 8-bit string and see what happens.
-- 
Vote for Your Favorite Python & Perl Programming  
Accomplishments in the first Active Awards! 
http://www.ActiveState.com/Awards


From tim.one@home.com  Tue Feb 27 01:18:37 2001
From: tim.one@home.com (Tim Peters)
Date: Mon, 26 Feb 2001 20:18:37 -0500
Subject: [Python-Dev] PEP 236:  Back to the __future__
Message-ID: 

The text of this PEP can also be found online, at:

    http://python.sourceforge.net/peps/pep-0236.html


PEP: 236
Title: Back to the __future__
Version: $Revision: 1.2 $
Author: Tim Peters 
Python-Version: 2.1
Status: Active
Type: Standards Track
Created: 26-Feb-2001
Post-History: 26-Feb-2001


Motivation

    From time to time, Python makes an incompatible change to the
    advertised semantics of core language constructs, or changes their
    accidental (implementation-dependent) behavior in some way.  While this
    is never done capriciously, and is always done with the aim of
    improving the language over the long term, over the short term it's
    contentious and disrupting.

    The "Guidelines for Language Evolution" PEP [1] suggests ways to ease
    the pain, and this PEP introduces some machinery in support of that.

    The "Statically Nested Scopes" PEP [2] is the first application, and
    will be used as an example here.


Intent

    [Note:  This is policy, and so should eventually move into PEP 5[1]]

    When an incompatible change to core language syntax or semantics is
    being made:

    1. The release C that introduces the change does not change the
       syntax or semantics by default.

    2. A future release R is identified in which the new syntax or semantics
       will be enforced.

    3. The mechanisms described in the "Warning Framework" PEP [3] are used
       to generate warnings, whenever possible, about constructs or
       operations whose meaning may[4] change in release R.

    4. The new future_statement (see below) can be explicitly included in a
       module M to request that the code in module M use the new syntax or
       semantics in the current release C.

    So old code continues to work by default, for at least one release,
    although it may start to generate new warning messages.  Migration to
    the new syntax or semantics can proceed during that time, using the
    future_statement to make modules containing it act as if the new syntax
    or semantics were already being enforced.

    Note that there is no need to involve the future_statement machinery
    in new features unless they can break existing code; fully backward-
    compatible additions can-- and should --be introduced without a
    corresponding future_statement.


Syntax

    A future_statement is simply a from/import statement using the reserved
    module name __future__:

        future_statement: "from" "__future__" "import" feature ["as" name]
                          ("," feature ["as" name])*

        feature: identifier
        name: identifier

    In addition, all future_statements must appear near the top of the
    module.  The only lines that can appear before a future_statement are:

    + The module docstring (if any).
    + Comments.
    + Blank lines.
    + Other future_statements.

    Example:
        """This is a module docstring."""

        # This is a comment, preceded by a blank line and followed by
        # a future_statement.
        from __future__ import nested_scopes

        from math import sin
        from __future__ import alabaster_weenoblobs  # compile-time error!
        # That was an error because preceded by a non-future_statement.


Semantics

    A future_statement is recognized and treated specially at compile time:
    changes to the semantics of core constructs are often implemented by
    generating different code.  It may even be the case that a new feature
    introduces new incompatible syntax (such as a new reserved word), in
    which case the compiler may need to parse the module differently.  Such
    decisions cannot be pushed off until runtime.

    For any given release, the compiler knows which feature names have been
    defined, and raises a compile-time error if a future_statement contains
    a feature not known to it[5].

    The direct runtime semantics are the same as for any import statement:
    there is a standard module __future__.py, described later, and it will
    be imported in the usual way at the time the future_statement is
    executed.

    The *interesting* runtime semantics depend on the specific feature(s)
    "imported" by the future_statement(s) appearing in the module.

    Note that there is nothing special about the statement:

        import __future__ [as name]

    That is not a future_statement; it's an ordinary import statement, with
    no special semantics or syntax restrictions.
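
    For illustration, a minimal sketch of the difference, using the
    nested_scopes feature from [2]:

        # future_statement:  recognized by the compiler; must appear
        # near the top of the module.
        from __future__ import nested_scopes

        # Ordinary import:  binds the module object, with no
        # compile-time effect on this module.
        import __future__ as features
        print features.nested_scopes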


Example

    Consider this code, in file scope.py:

        x = 42
        def f():
            x = 666
            def g():
                print "x is", x
            g()
        f()

    Under 2.0, it prints:

        x is 42

    Nested scopes[2] are being introduced in 2.1.  But under 2.1, it still
    prints

        x is 42

    and also generates a warning.

    In 2.2, and also in 2.1 *if* "from __future__ import nested_scopes" is
    included at the top of scope.py, it prints

        x is 666


Standard Module __future__.py

    Lib/__future__.py is a real module, and serves three purposes:

    1. To avoid confusing existing tools that analyze import statements and
       expect to find the modules they're importing.

    2. To ensure that future_statements run under releases prior to 2.1
       at least yield runtime exceptions (the import of __future__ will
       fail, because there was no module of that name prior to 2.1).

    3. To document when incompatible changes were introduced, and when they
       will be-- or were --made mandatory.  This is a form of executable
       documentation, and can be inspected programmatically via importing
       __future__ and examining its contents.

    Each statement in __future__.py is of the form:

        FeatureName = ReleaseInfo

    ReleaseInfo is a pair of the form:

         (OptionalRelease, MandatoryRelease)

    where, normally, OptionalRelease <  MandatoryRelease, and both are
    5-tuples of the same form as sys.version_info:

    (PY_MAJOR_VERSION, # the 2 in 2.1.0a3; an int
     PY_MINOR_VERSION, # the 1; an int
     PY_MICRO_VERSION, # the 0; an int
     PY_RELEASE_LEVEL, # "alpha", "beta", "candidate" or "final"; string
     PY_RELEASE_SERIAL # the 3; an int
    )

    OptionalRelease records the first release in which

        from __future__ import FeatureName

    was accepted.

    In the case of MandatoryReleases that have not yet occurred,
    MandatoryRelease predicts the release in which the feature will become
    part of the language.

    Else MandatoryRelease records when the feature became part of the
    language; in releases at or after that, modules no longer need

        from __future__ import FeatureName

    to use the feature in question, but may continue to use such imports.

    MandatoryRelease may also be None, meaning that a planned feature got
    dropped.

    No line will ever be deleted from __future__.py.

    Example line:

        nested_scopes = (2, 1, 0, "beta", 1), (2, 2, 0, "final", 0)

    This means that

        from __future__ import nested_scopes

    will work in all releases at or after 2.1b1, and that nested_scopes are
    intended to be enforced starting in release 2.2.
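
    A minimal sketch of inspecting that information at runtime, assuming
    the simple tuple layout shown above:

        import __future__
        import sys

        optional, mandatory = __future__.nested_scopes
        if sys.version_info >= mandatory:
            print "nested_scopes are standard in this release"
        elif sys.version_info >= optional:
            print "nested_scopes need a future_statement here"
        else:
            print "nested_scopes are not available in this release"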


Unresolved Problems:  Runtime Compilation

    Several Python features can compile code during a module's runtime:

    1. The exec statement.
    2. The execfile() function.
    3. The compile() function.
    4. The eval() function.
    5. The input() function.

    Since a module M containing a future_statement naming feature F
    explicitly requests that the current release act like a future release
    with respect to F, any code compiled dynamically from text passed to
    one of these from within M should probably also use the new syntax or
    semantics associated with F.

    This isn't always desired, though.  For example, doctest.testmod(M)
    compiles examples taken from strings in M, and those examples should
    use M's choices, not necessarily doctest module's choices.

    It's unclear what to do about this.  The initial release (2.1b1) is
    likely to ignore these issues, saying that each dynamic compilation
    starts over from scratch (i.e., as if no future_statements had been
    specified).

    In any case, a future_statement appearing "near the top" (see Syntax
    above) of text compiled dynamically by an exec, execfile() or compile()
    applies to the code block generated, but has no further effect on the
    module that executes such an exec, execfile() or compile().  This
    can't be used to affect eval() or input(), however, because they only
    allow expression input, and a future_statement is not an expression.
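
    A minimal sketch of that rule for compile() (the filename here is
    arbitrary):

        src = "from __future__ import nested_scopes\ndef f():\n    pass\n"
        code = compile(src, "<dynamic>", "exec")
        exec code   # the future_statement applies to this code block only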


Unresolved Problems:  Interactive Shells

    An interactive shell can be seen as an extreme case of runtime
    compilation (see above):  in effect, each statement typed at an
    interactive shell prompt runs a new instance of exec, compile() or
    execfile().  The initial release (2.1b1) is likely to be such that
    future_statements typed at an interactive shell have no effect beyond
    their runtime meaning as ordinary import statements.

    It would make more sense if a future_statement typed at an interactive
    shell applied to the rest of the shell session's life, as if the
    future_statement had appeared at the top of a module.  Again, it's
    unclear what to do about this.


Questions and Answers

    Q:  What about a "from __past__" version, to get back *old* behavior?

    A:  Outside the scope of this PEP.  Seems unlikely to the author,
        though.  Write a PEP if you want to pursue it.

    Q:  What about incompatibilities due to changes in the Python virtual
        machine?

    A:  Outside the scope of this PEP, although PEP 5[1] suggests a grace
        period there too, and the future_statement may also have a role to
        play there.

    Q:  What about incompatibilities due to changes in Python's C API?

    A:  Outside the scope of this PEP.

    Q:  I want to wrap future_statements in try/except blocks, so I can
        use different code depending on which version of Python I'm running.
        Why can't I?

    A:  Sorry!  try/except is a runtime feature; future_statements are
        primarily compile-time gimmicks, and your try/except happens long
        after the compiler is done.  That is, by the time you do
        try/except, the semantics in effect for the module are already a
        done deal.  Since the try/except wouldn't accomplish what it
        *looks* like it should accomplish, it's simply not allowed.  We
        also want to keep these special statements very easy to find and to
        recognize.

        Note that you *can* import __future__ directly, and use the
        information in it, along with sys.version_info, to figure out where
        the release you're running under stands in relation to a given
        feature's status.

     Q: Going back to the nested_scopes example, what if release 2.2 comes
        along and I still haven't changed my code?  How can I keep the 2.1
        behavior then?

     A: By continuing to use 2.1, and not moving to 2.2 until you do change
        your code.  The purpose of future_statement is to make life easier
        for people who keep current with the latest release in a timely
        fashion.  We don't hate you if you don't, but your problems are
        much harder to solve, and somebody with those problems will need to
        write a PEP addressing them.  future_statement is aimed at a
        different audience.


Copyright

    This document has been placed in the public domain.


References and Footnotes

    [1] http://python.sourceforge.net/peps/pep-0005.html

    [2] http://python.sourceforge.net/peps/pep-0227.html

    [3] http://python.sourceforge.net/peps/pep-0230.html

    [4] Note that this is "may" and not "will":  better safe than sorry.  Of
        course spurious warnings won't be generated when avoidable with
        reasonable cost.

    [5] This ensures that a future_statement run under a release prior to
        the first one in which a given feature is known (but >= 2.1) will
        raise a compile-time error rather than silently do a wrong thing.
        If transported to a release prior to 2.1, a runtime error will be
        raised because of the failure to import __future__ (no such module
        existed in the standard distribution before the 2.1 release, and
        the double underscores make it a reserved name).


Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:



From martin@loewis.home.cs.tu-berlin.de  Tue Feb 27 06:52:27 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 27 Feb 2001 07:52:27 +0100
Subject: [Python-Dev] first correct explanation wins a beer...
Message-ID: <200102270652.f1R6qRA00896@mira.informatik.hu-berlin.de>

> My guess: Unicode. Try casting to an 8-bit string and see what happens.

Paul is right, so I guess you owe him a beer...

To see this in more detail, compare

popen2.Popen3("/bin/ls").fromchild.readlines()

to

popen2.Popen3(u"/bin/ls").fromchild.readlines()

Specifically, it seems the problem is 

    def _run_child(self, cmd):
        if type(cmd) == type(''):
            cmd = ['/bin/sh', '-c', cmd]

in popen2. I still think there should be a types.isstring function, and
then this fragment should read

    def _run_child(self, cmd):
        if types.isstring(cmd):
            cmd = ['/bin/sh', '-c', cmd]

Now, if somebody put "funny characters" into cmd, it would still
give an error, which is then almost silently ignored, due to the 

        try:
            os.execvp(cmd[0], cmd)
        finally:
            os._exit(1)

fragment. Perhaps it would be better to put 

       if type(cmd) == types.UnicodeType:
          cmd = cmd.encode("ascii")

into Popen3.__init__, so you'd get an error if you pass those funny
characters.
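
For illustration, a minimal sketch of that kind of guard (the function
name here is made up; it's not the actual popen2 code):

    import types

    def _coerce_cmd(cmd):
        # Fail loudly on non-ASCII Unicode commands instead of letting
        # the error be swallowed by the child's os._exit(1).
        if type(cmd) == types.UnicodeType:
            cmd = cmd.encode("ascii")
        if type(cmd) == types.StringType:
            cmd = ['/bin/sh', '-c', cmd]
        return cmd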

Regards,
Martin


From ping@lfw.org  Tue Feb 27 07:52:28 2001
From: ping@lfw.org (Ka-Ping Yee)
Date: Mon, 26 Feb 2001 23:52:28 -0800 (PST)
Subject: [Python-Dev] pydoc for 2.1b1?
Message-ID: 

Hi!

It's my birthday today, and i think it would be a really awesome
present if pydoc.py were to be accepted into the distribution. :)

(Not just because it's my birthday, though.  I would hope it is
worth accepting based on its own merits.)

The most recent version of pydoc is just a single file, for the
easiest possible setup -- zero installation effort.  You only need
the "inspect" module to run it.  You can find it under the CVS tree
at nondist/sandbox/help/pydoc.py or download it from

    http://www.lfw.org/python/pydoc.py
    http://www.lfw.org/python/inspect.py

Among other things, it now handles a few corner cases better, the
formatting is a bit improved, and you can now tell it to write out
the documentation to files on disk if that's your fancy (it can
still display the documentation interactively in your shell or your
web browser).


-- ?!ng
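
For readers who want to try it, the interface is roughly as follows; the
exact flags of the 2001 script may differ from what later shipped as the
standard pydoc module:

    # Interactive use (later wired up as the built-in help()):
    import pydoc
    pydoc.help(pydoc)      # show documentation for an object in the console

    # Command-line use:
    #   python pydoc.py sys        # print documentation for the sys module
    #   python pydoc.py -w sys     # write sys.html to the current directory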




From ping@lfw.org  Tue Feb 27 11:53:08 2001
From: ping@lfw.org (Ka-Ping Yee)
Date: Tue, 27 Feb 2001 03:53:08 -0800 (PST)
Subject: [Python-Dev] A few small issues
Message-ID: 

Hi.  Here are some things i noticed tonight.


1.  The error message for UnboundLocalError isn't really accurate.

    >>> def f():
    ...     x = 1
    ...     del x
    ...     print x
    ... 
    >>> f()
    Traceback (most recent call last):
      File "
             ", line 1, in ? File "
             
              ", line 4, in f UnboundLocalError: local variable 'x' referenced before assignment >>> It's not a question of the variable being referenced "before assignment" -- it's just that the variable is undefined. Better would be a straightforward message such as UnboundLocalError: local name 'x' is not defined This message would be consistent with the others: NameError: name 'x' is not defined NameError: global name 'x' is not defined 2. Why does imp.find_module('') succeed? >>> import imp >>> imp.find_module('') (None, '/home/ping/python/', ('', '', 5)) I think it should fail with "empty module name" or something similar. 3. Normally when a script is run, it looks like '' gets prepended to sys.path so that the current directory will be searched. But if the script being run is a symlink, the symlink is resolved first to an actual file, and the directory containing that file is prepended to sys.path. This leads to strange behaviour: localhost[1004]% cat > spam.py bacon = 5 localhost[1005]% cat > /tmp/eggs.py import spam localhost[1006]% ln -s /tmp/eggs.py . localhost[1007]% python eggs.py Traceback (most recent call last): File "eggs.py", line 1, in ? import spam ImportError: No module named spam localhost[1008]% python Python 2.1a2 (#23, Feb 11 2001, 16:26:17) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> import spam >>> (whereupon the confused programmer says, "Huh? If *i* could import spam, why couldn't eggs?"). Was this a design decision? Should it be changed to always prepend ''? 4. As far as i can tell, the curses.wrapper package is inaccessible. It's obscured by a curses.wrapper() function in the curses package. >>> import curses.wrapper >>> curses.wrapper 
              
               >>> import sys >>> sys.modules['curses.wrapper'] 
               
                I don't see any way around this other than renaming curses.wrapper. -- ?!ng "If I have not seen as far as others, it is because giants were standing on my shoulders." -- Hal Abelson From thomas@xs4all.net Tue Feb 27 13:10:20 2001 From: thomas@xs4all.net (Thomas Wouters) Date: Tue, 27 Feb 2001 14:10:20 +0100 Subject: [Python-Dev] pydoc for 2.1b1? In-Reply-To: 
                
                 ; from ping@lfw.org on Mon, Feb 26, 2001 at 11:52:28PM -0800 References: 
                 
                  Message-ID: <20010227141020.B9678@xs4all.nl> On Mon, Feb 26, 2001 at 11:52:28PM -0800, Ka-Ping Yee wrote: > It's my birthday today, and i think it would be a really awesome > present if pydoc.py were to be accepted into the distribution. :) It has my vote ;) I think pydoc serves two purposes: it's a useful tool, especially if we can get it accepted by the larger community (get it mentioned on python-list by non-dev'ers, get it mentioned in books, etc.) And it serves as a great example on how to do things like introspection. -- Thomas Wouters 
                  
                   Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From guido@digicool.com Tue Feb 27 02:08:36 2001 From: guido@digicool.com (Guido van Rossum) Date: Mon, 26 Feb 2001 21:08:36 -0500 Subject: [Python-Dev] pydoc for 2.1b1? In-Reply-To: Your message of "Mon, 26 Feb 2001 23:52:28 PST." 
                   
                    References: 
                    
                     Message-ID: <200102270208.VAA01410@cj20424-a.reston1.va.home.com> > It's my birthday today, and i think it would be a really awesome > present if pydoc.py were to be accepted into the distribution. :) Congratulations, Ping. > (Not just because it's my birthday, though. I would hope it is > worth accepting based on its own merits.) No, it's being accepted because your name is Ping. I just read the first few pages of the script for Monty Python's Meaning of Life, which figures a "machine that goes 'Ping'". That makes your name sufficiently Pythonic. > The most recent version of pydoc is just a single file, for the > easiest possible setup -- zero installation effort. You only need > the "inspect" module to run it. You can find it under the CVS tree > at nondist/sandbox/help/pydoc.py or download it from > > http://www.lfw.org/python/pydoc.py > http://www.lfw.org/python/inspect.py > > Among other things, it now handles a few corner cases better, the > formatting is a bit improved, and you can now tell it to write out > the documentation to files on disk if that's your fancy (it can > still display the documentation interactively in your shell or your > web browser). You can check these into the regular tree. I guess they both go into the Lib directory, right? Make sure pydoc.py is checked in with +x permissions. I'll see if we can import pydoc.help into __builtin__ in interactive mode. Now let's paaaartaaaay! --Guido van Rossum (home page: http://www.python.org/~guido/) From akuchlin@mems-exchange.org Tue Feb 27 15:02:28 2001 From: akuchlin@mems-exchange.org (Andrew Kuchling) Date: Tue, 27 Feb 2001 10:02:28 -0500 Subject: [Python-Dev] A few small issues In-Reply-To: 
                     
                      ; from ping@lfw.org on Tue, Feb 27, 2001 at 03:53:08AM -0800 References: 
                      
                       Message-ID: <20010227100228.A17362@ute.cnri.reston.va.us> On Tue, Feb 27, 2001 at 03:53:08AM -0800, Ka-Ping Yee wrote: >4. As far as i can tell, the curses.wrapper package is inaccessible. > It's obscured by a curses.wrapper() function in the curses package. The function in the packages results from 'from curses.wrapper import wrapper', so there's really no need to import curses.wrapper directly. Hmmm... but the module is documented in the library reference. I could move the definition of wrapper() into the __init__.py and change the docs, if that's desired. --amk From skip@mojam.com (Skip Montanaro) Tue Feb 27 15:48:14 2001 From: skip@mojam.com (Skip Montanaro) (Skip Montanaro) Date: Tue, 27 Feb 2001 09:48:14 -0600 (CST) Subject: [Python-Dev] pydoc for 2.1b1? In-Reply-To: <20010227141020.B9678@xs4all.nl> References: 
                       
                        <20010227141020.B9678@xs4all.nl> Message-ID: <15003.52286.800752.317549@beluga.mojam.com> Thomas> [pydoc] has my vote ;) Mine too. S From akuchlin@mems-exchange.org Tue Feb 27 15:59:32 2001 From: akuchlin@mems-exchange.org (Andrew Kuchling) Date: Tue, 27 Feb 2001 10:59:32 -0500 Subject: [Python-Dev] pydoc for 2.1b1? In-Reply-To: <200102270208.VAA01410@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Feb 26, 2001 at 09:08:36PM -0500 References: 
                        
                         <200102270208.VAA01410@cj20424-a.reston1.va.home.com> Message-ID: <20010227105932.C17362@ute.cnri.reston.va.us> On Mon, Feb 26, 2001 at 09:08:36PM -0500, Guido van Rossum wrote: >You can check these into the regular tree. I guess they both go into >the Lib directory, right? Make sure pydoc.py is checked in with +x >permissions. I'll see if we can import pydoc.help into __builtin__ in >interactive mode. What about installing it as a script, into 
                         
                          /bin, so it's also available at the command line? The setup.py script could do it, or the Makefile could handle it. --amk From skip@mojam.com (Skip Montanaro) Tue Feb 27 16:00:12 2001 From: skip@mojam.com (Skip Montanaro) (Skip Montanaro) Date: Tue, 27 Feb 2001 10:00:12 -0600 (CST) Subject: [Python-Dev] editing FAQ? In-Reply-To: 
                          
                           References: <15002.48386.689975.913306@beluga.mojam.com> 
                           
                            Message-ID: <15003.53004.840361.997254@beluga.mojam.com> Tim> [Skip Montanaro] >> Seems like maybe the FAQ needs some touchup. Is it still under the >> control of the FAQ wizard (what's the password)? Tim> The password is Tim> Spam Alas, http://www.python.org/cgi-bin/faqwiz just times out. Am I barking up the wrong virtual tree? Skip From tim.one@home.com Tue Feb 27 16:23:23 2001 From: tim.one@home.com (Tim Peters) Date: Tue, 27 Feb 2001 11:23:23 -0500 Subject: [Python-Dev] editing FAQ? In-Reply-To: <15003.53004.840361.997254@beluga.mojam.com> Message-ID: 
                            
                             [Skip Montanaro] > Alas, http://www.python.org/cgi-bin/faqwiz just times out. Am I barking up > the wrong virtual tree? Should work; agree it doesn't; have reported it to webmaster. From tim.one@home.com Tue Feb 27 16:46:21 2001 From: tim.one@home.com (Tim Peters) Date: Tue, 27 Feb 2001 11:46:21 -0500 Subject: [Python-Dev] A few small issues In-Reply-To: 
                             
                              Message-ID: 
                              
                              [Ka-Ping Yee] > Hi. Here are some things i noticed tonight. Ping (& everyone else), please submit bugs on SourceForge. Python-Dev is a black hole for this kind of thing: if nobody addresses your reports RIGHT NOW (unlikely in a release week), they'll be lost forever. From guido@digicool.com Tue Feb 27 05:04:28 2001 From: guido@digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 00:04:28 -0500 Subject: [Python-Dev] pydoc for 2.1b1? In-Reply-To: Your message of "Tue, 27 Feb 2001 10:59:32 EST." <20010227105932.C17362@ute.cnri.reston.va.us> References: 
                              
                              <200102270208.VAA01410@cj20424-a.reston1.va.home.com> <20010227105932.C17362@ute.cnri.reston.va.us> Message-ID: <200102270504.AAA02105@cj20424-a.reston1.va.home.com> > On Mon, Feb 26, 2001 at 09:08:36PM -0500, Guido van Rossum wrote: > >You can check these into the regular tree. I guess they both go into > >the Lib directory, right? Make sure pydoc.py is checked in with +x > >permissions. I'll see if we can import pydoc.help into __builtin__ in > >interactive mode. > > What about installing it as a script, into 
                              
                              /bin, so it's also > available at the command line? The setup.py script could do it, or > the Makefile could handle it. Sounds like a good idea. (Maybe idle can also be installed if Tk is found.) Go for it. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@digicool.com Tue Feb 27 05:05:03 2001 From: guido@digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 00:05:03 -0500 Subject: [Python-Dev] editing FAQ? In-Reply-To: Your message of "Tue, 27 Feb 2001 10:00:12 CST." <15003.53004.840361.997254@beluga.mojam.com> References: <15002.48386.689975.913306@beluga.mojam.com> 
                              
                              <15003.53004.840361.997254@beluga.mojam.com> Message-ID: <200102270505.AAA02119@cj20424-a.reston1.va.home.com> > Tim> The password is > > Tim> Spam > > Alas, http://www.python.org/cgi-bin/faqwiz just times out. Am I barking up > the wrong virtual tree? Try again. I've rebooted the server. --Guido van Rossum (home page: http://www.python.org/~guido/) From skip@mojam.com (Skip Montanaro) Tue Feb 27 17:10:43 2001 From: skip@mojam.com (Skip Montanaro) (Skip Montanaro) Date: Tue, 27 Feb 2001 11:10:43 -0600 (CST) Subject: [Python-Dev] The more I think about __all__ ... Message-ID: <15003.57235.144454.826610@beluga.mojam.com> ... the more I think I should just yank out all those definitions. I've already been bitten by an incomplete __all__ list. I think the only people who can realistically create them are the authors of the modules. In addition, maintaining them is going to be as difficult as keeping any other piece of documentation up-to-date. Any other thoughts? BDFL - would you care to pronounce? Skip From skip@mojam.com (Skip Montanaro) Tue Feb 27 17:19:23 2001 From: skip@mojam.com (Skip Montanaro) (Skip Montanaro) Date: Tue, 27 Feb 2001 11:19:23 -0600 (CST) Subject: [Python-Dev] editing FAQ? In-Reply-To: <200102270505.AAA02119@cj20424-a.reston1.va.home.com> References: <15002.48386.689975.913306@beluga.mojam.com> 
                              
                              <15003.53004.840361.997254@beluga.mojam.com> <200102270505.AAA02119@cj20424-a.reston1.va.home.com> Message-ID: <15003.57755.361084.441490@beluga.mojam.com> >> Alas, http://www.python.org/cgi-bin/faqwiz just times out. Am I >> barking up the wrong virtual tree? Guido> Try again. I've rebooted the server. Okay, progress has been made. The above URL yielded a 404 error. Obviously I guessed the wrong URL for the faqwiz interface. I did eventually find it at http://www.python.org/cgi-bin/faqw.py Thanks, Skip From guido@digicool.com Tue Feb 27 05:31:02 2001 From: guido@digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 00:31:02 -0500 Subject: [Python-Dev] The more I think about __all__ ... In-Reply-To: Your message of "Tue, 27 Feb 2001 11:10:43 CST." <15003.57235.144454.826610@beluga.mojam.com> References: <15003.57235.144454.826610@beluga.mojam.com> Message-ID: <200102270531.AAA02301@cj20424-a.reston1.va.home.com> > ... the more I think I should just yank out all those definitions. I've > already been bitten by an incomplete __all__ list. I think the only people > who can realistically create them are the authors of the modules. In > addition, maintaining them is going to be as difficult as keeping any other > piece of documentation up-to-date. > > Any other thoughts? BDFL - would you care to pronounce? I've always been lukewarm about the desire to add __all__ to every module under the sun. But i'm also lukewarm about ripping it all out now that it's done. So, no pronouncement from me unless I get more feedback on how harmful it's been so far. Sorry... --Guido van Rossum (home page: http://www.python.org/~guido/) From jeremy@alum.mit.edu Tue Feb 27 17:26:34 2001 From: jeremy@alum.mit.edu (Jeremy Hylton) Date: Tue, 27 Feb 2001 12:26:34 -0500 (EST) Subject: [Python-Dev] The more I think about __all__ ... In-Reply-To: <200102270531.AAA02301@cj20424-a.reston1.va.home.com> References: <15003.57235.144454.826610@beluga.mojam.com> <200102270531.AAA02301@cj20424-a.reston1.va.home.com> Message-ID: <15003.58186.586724.972984@w221.z064000254.bwi-md.dsl.cnc.net> It seems to be to be a compatibility issue. If a module has an __all__, then from module import * may behave differently in Python 2.1 than it did in Python 2.0. The only problem of this sort I have encountered is with pickle, but I seldom use import *. The problem ends up being obscure to debug because you get a NameError. Then you hunt around in the middle and see that the name is never bound. Then you see that there is an import * -- and hopefully only one! Then you think, "Didn't Python grow __all__ enforcement in 2.1?" And you start hunting around for that name in the import module and check to see if it's in __all__. Jeremy From guido@digicool.com Tue Feb 27 05:48:05 2001 From: guido@digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 00:48:05 -0500 Subject: [Python-Dev] The more I think about __all__ ... In-Reply-To: Your message of "Tue, 27 Feb 2001 12:26:34 EST." <15003.58186.586724.972984@w221.z064000254.bwi-md.dsl.cnc.net> References: <15003.57235.144454.826610@beluga.mojam.com> <200102270531.AAA02301@cj20424-a.reston1.va.home.com> <15003.58186.586724.972984@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <200102270548.AAA02442@cj20424-a.reston1.va.home.com> > It seems to be to be a compatibility issue. If a module has an > __all__, then from module import * may behave differently in Python > 2.1 than it did in Python 2.0. 
The only problem of this sort I have > encountered is with pickle, but I seldom use import *. This suggests a compatibility test that Skip can easily write. For each module that has an __all__ in 2.1, invoke python 2.0 to see what names are imported by import * for that module in 2.0, and see if there are differences. Then look carefully at the differences and see if they are acceptable. --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one@home.com Tue Feb 27 18:56:24 2001 From: tim.one@home.com (Tim Peters) Date: Tue, 27 Feb 2001 13:56:24 -0500 Subject: [Python-Dev] The more I think about __all__ ... In-Reply-To: <200102270531.AAA02301@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              [Guido van Rossum] > ... > So, no pronouncement from me unless I get more feedback on how harmful > it's been so far. Sorry... Doesn't matter much to me. There are still spurious regrtest.py failures due to it under Windows when using -r; this has to do with that importing modules that don't exist on Windows leave behind incomplete module objects that fool test___all__.py. E.g., "regrtest test_pty test___all__" on Windows yields a bizarre failure. Tried fixing that last night, but it somehow caused test_sre(!) to fail instead, and I gave up for the night. From tim.one@home.com Tue Feb 27 19:27:12 2001 From: tim.one@home.com (Tim Peters) Date: Tue, 27 Feb 2001 14:27:12 -0500 Subject: [Python-Dev] Case-sensitive import Message-ID: 
                              
                              I'm still trying to sort this out. Some concerns and questions: I don't like the new MatchFilename, because it triggers on *all* platforms that #define HAVE_DIRENT_H. Anyone, doesn't that trigger on straight Linux systems too (all I know is that it's part of the Single UNIX Specification)? I don't like it because it implements a woefully inefficient algorithm: it cycles through the entire directory looking for a case-sensitive match. But there can be hundreds of .py files in a directory, and on average it will need to look at half of them, while if this triggers on straight Linux there's no need to look at *any* of them there. I also don't like it because it apparently triggers on Cygwin too but the code that calls it doesn't cater to that Cygwin possibly *should* be defining ALTSEP as well as SEP. Would rather dump MatchFilename and rewrite in terms of the old check_case (which should run much quicker, and already comes in several appropriate platform-aware versions -- and I clearly minimize the chance of breakage if I stick to that time-tested code). Steven, there is a "#ifdef macintosh" version of check_case already. Will that or won't that work correctly on your variant of Mac? If not, would you please supply a version that does (along with the #ifdef'ery needed to recognize your Mac variant)? Jason, I *assume* that the existing "#if defined(MS_WIN32) || defined(__CYGWIN__)" version of check_case works already for you. Scream if that's wrong. Steven and Jack, does getenv() work on both your flavors of Mac? I want to make PYTHONCASEOK work for you too. From tim.one@home.com Tue Feb 27 19:34:28 2001 From: tim.one@home.com (Tim Peters) Date: Tue, 27 Feb 2001 14:34:28 -0500 Subject: [Python-Dev] editing FAQ? In-Reply-To: 
                              
                              Message-ID: 
                              
                              http://www.python.org/cgi-bin/faqw.py is working again. Password is Spam. The http://www.python.org/cgi-bin/faqwiz you mentioned now yields a 404 (File Not Found). > [Skip Montanaro] >> Alas, http://www.python.org/cgi-bin/faqwiz just times out. Am I >> barking up the wrong virtual tree? > > Should work; agree it doesn't; have reported it to webmaster. > From akuchlin@mems-exchange.org Tue Feb 27 19:50:44 2001 From: akuchlin@mems-exchange.org (Andrew Kuchling) Date: Tue, 27 Feb 2001 14:50:44 -0500 Subject: [Python-Dev] pydoc for 2.1b1? In-Reply-To: <200102270504.AAA02105@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Tue, Feb 27, 2001 at 12:04:28AM -0500 References: 
                              
                              <200102270208.VAA01410@cj20424-a.reston1.va.home.com> <20010227105932.C17362@ute.cnri.reston.va.us> <200102270504.AAA02105@cj20424-a.reston1.va.home.com> Message-ID: <20010227145044.B29979@ute.cnri.reston.va.us> On Tue, Feb 27, 2001 at 12:04:28AM -0500, Guido van Rossum wrote: >Sounds like a good idea. (Maybe idle can also be installed if Tk is >found.) Go for it. Will do. Is there anything else in Tools/ or Lib/ that could be usefully installed, such as tabnanny or whatever? I can't think of anything that would be really burningly important, so I'll just take care of pydoc. Re: IDLE: Martin already contributed a Tools/idle/setup.py, but I'm not sure how to trigger it recursively. Perhaps a configure option --install-idle, which controls an idleinstall target in the Makefile. Making it only install if Tkinter is compiled seems icky; I don't see how to do that cleanly. Martin, any suggestions? --amk From guido@digicool.com Tue Feb 27 08:08:13 2001 From: guido@digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 03:08:13 -0500 Subject: [Python-Dev] pydoc for 2.1b1? In-Reply-To: Your message of "Tue, 27 Feb 2001 14:50:44 EST." <20010227145044.B29979@ute.cnri.reston.va.us> References: 
                              
                              <200102270208.VAA01410@cj20424-a.reston1.va.home.com> <20010227105932.C17362@ute.cnri.reston.va.us> <200102270504.AAA02105@cj20424-a.reston1.va.home.com> <20010227145044.B29979@ute.cnri.reston.va.us> Message-ID: <200102270808.DAA16485@cj20424-a.reston1.va.home.com> > On Tue, Feb 27, 2001 at 12:04:28AM -0500, Guido van Rossum wrote: > >Sounds like a good idea. (Maybe idle can also be installed if Tk is > >found.) Go for it. > > Will do. Is there anything else in Tools/ or Lib/ that could be > usefully installed, such as tabnanny or whatever? I can't think of > anything that would be really burningly important, so I'll just take > care of pydoc. Offhand, not -- idle and pydoc seem to be overwhelmingly more important to me than anything else... > Re: IDLE: Martin already contributed a Tools/idle/setup.py, but I'm > not sure how to trigger it recursively. Perhaps a configure option > --install-idle, which controls an idleinstall target in the Makefile. > Making it only install if Tkinter is compiled seems icky; I don't see > how to do that cleanly. Martin, any suggestions? I have to admit that I don't know what IDLE's setup.py does... :-( --Guido van Rossum (home page: http://www.python.org/~guido/) From akuchlin@mems-exchange.org Tue Feb 27 20:55:45 2001 From: akuchlin@mems-exchange.org (Andrew Kuchling) Date: Tue, 27 Feb 2001 15:55:45 -0500 Subject: [Python-Dev] Patch uploads broken Message-ID: 
                              
                              Uploading of patches seems to be broken on SourceForge at the moment; even if you fill in the file upload form, its contents seem to just be ignored. Reported to SF as support req #404688: http://sourceforge.net/tracker/?func=detail&aid=404688&group_id=1&atid=200001 --amk From tim.one@home.com Tue Feb 27 21:15:53 2001 From: tim.one@home.com (Tim Peters) Date: Tue, 27 Feb 2001 16:15:53 -0500 Subject: [Python-Dev] New test_inspect fails under -O Message-ID: 
                              
I assume this is an x-platform failure.  Don't have time to look into it
myself right now.

C:\Code\python\dist\src\PCbuild>python -O ../lib/test/test_inspect.py
Traceback (most recent call last):
  File "../lib/test/test_inspect.py", line 172, in ?
    'trace() row 1')
  File "../lib/test/test_inspect.py", line 70, in test
    raise TestFailed, message % args
test_support.TestFailed: trace() row 1

C:\Code\python\dist\src\PCbuild>


From jeremy@alum.mit.edu  Tue Feb 27 21:38:27 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Tue, 27 Feb 2001 16:38:27 -0500 (EST)
Subject: [Python-Dev] one more restriction for from __future__ import ...
Message-ID: <15004.7763.994510.90567@w221.z064000254.bwi-md.dsl.cnc.net>

> In addition, all future_statments must appear near the top of the
> module.  The only lines that can appear before a future_statement are:
>
>     + The module docstring (if any).
>     + Comments.
>     + Blank lines.
>     + Other future_statements.

I would like to add another restriction:

    A future_statement must appear on a line by itself.  It is not
    legal to combine a future_statement without any other statement
    using a semicolon.

It would be a bear to implement error handling for cases like this:

    from __future__ import a; import b; from __future__ import c

Jeremy


From Samuele Pedroni
                              
                              Tue Feb 27 21:54:43 2001 From: Samuele Pedroni 
                              
                              (Samuele Pedroni) Date: Tue, 27 Feb 2001 22:54:43 +0100 (MET) Subject: [Python-Dev] one more restriction for from __future__ import ... Message-ID: <200102272154.WAA25543@core.inf.ethz.ch> Hi. > > In addition, all future_statments must appear near the top of the > > module. The only lines that can appear before a future_statement are: > > > > + The module docstring (if any). > > + Comments. > > + Blank lines. > > + Other future_statements. > > I would like to add another restriction: > > A future_statement must appear on a line by itself. It is not > legal to combine a future_statement without any other statement > using a semicolon. > > It would be a bear to implement error handling for cases like this: > > from __future__ import a; import b; from __future__ import c Will the error be unclear for the user or there's another problem? In jython I get from parser an abstract syntax tree, so it is difficult to distringuish the ; from true newlines ;) regards, Samuele From guido@digicool.com Tue Feb 27 10:06:18 2001 From: guido@digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 05:06:18 -0500 Subject: [Python-Dev] one more restriction for from __future__ import ... In-Reply-To: Your message of "Tue, 27 Feb 2001 16:38:27 EST." <15004.7763.994510.90567@w221.z064000254.bwi-md.dsl.cnc.net> References: <15004.7763.994510.90567@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <200102271006.FAA18760@cj20424-a.reston1.va.home.com> > I would like to add another restriction: > > A future_statement must appear on a line by itself. It is not > legal to combine a future_statement without any other statement > using a semicolon. > > It would be a bear to implement error handling for cases like this: > > from __future__ import a; import b; from __future__ import c Really?!? Why? Isn't it straightforward to check that everything you encounter in a left-to-right leaf scan of the parse tree is either a future statement or a docstring until you encounter a non-future? --Guido van Rossum (home page: http://www.python.org/~guido/) From akuchlin@mems-exchange.org Tue Feb 27 22:34:06 2001 From: akuchlin@mems-exchange.org (Andrew Kuchling) Date: Tue, 27 Feb 2001 17:34:06 -0500 Subject: [Python-Dev] Re: Patch uploads broken Message-ID: 
                              
                              The SourceForge admins couldn't replicate the patch upload problem, and managed to attach a file to the Python bug report in question, yet when I try it, it still fails for me. So, a question for this list: has uploading patches or other files been working for you recently, particularly today? Maybe with more data, we can see a pattern (browser version, SSL/non-SSL, cluefulness of user, ...). If you want to try it, feel free to try attaching a file to bug #404680: https://sourceforge.net/tracker/index.php?func=detail&aid=404680&group_id=5470&atid=305470 ) The SF admin request for this problem is at http://sourceforge.net/tracker/?func=detail&atid=100001&aid=404688&group_id=1, but it's better if I collect the results and summarize them in a single comment. --amk From michel@digicool.com Tue Feb 27 22:58:56 2001 From: michel@digicool.com (Michel Pelletier) Date: Tue, 27 Feb 2001 14:58:56 -0800 (PST) Subject: [Python-Dev] Re: Patch uploads broken In-Reply-To: 
                              
                              Message-ID: 
                              
                              Andrew, FYI, we have seen the same problem on the SF zope-book patch tracker. I have a user who can reproduce it, like you. Would you like me to get you more info? -Michel On Tue, 27 Feb 2001, Andrew Kuchling wrote: > The SourceForge admins couldn't replicate the patch upload problem, > and managed to attach a file to the Python bug report in question, yet > when I try it, it still fails for me. So, a question for this list: > has uploading patches or other files been working for you recently, > particularly today? Maybe with more data, we can see a pattern > (browser version, SSL/non-SSL, cluefulness of user, ...). > > If you want to try it, feel free to try attaching a file to bug #404680: > https://sourceforge.net/tracker/index.php?func=detail&aid=404680&group_id=5470&atid=305470 > ) > > The SF admin request for this problem is at > http://sourceforge.net/tracker/?func=detail&atid=100001&aid=404688&group_id=1, > but it's better if I collect the results and summarize them in a > single comment. > > --amk > > > _______________________________________________ > Python-Dev mailing list > Python-Dev@python.org > http://mail.python.org/mailman/listinfo/python-dev > From tim.one@home.com Tue Feb 27 23:06:59 2001 From: tim.one@home.com (Tim Peters) Date: Tue, 27 Feb 2001 18:06:59 -0500 Subject: [Python-Dev] More std test breakage Message-ID: 
                              
                              test_inspect.py still failing under -O; probably all platforms. New failure in test___all__.py, *possibly* specific to Windows, but I don't see any "termios.py" anywhere so hard to believe it could be working anywhere else either: C:\Code\python\dist\src\PCbuild>python ../lib/test/test___all__.py Traceback (most recent call last): File "../lib/test/test___all__.py", line 78, in ? check_all("getpass") File "../lib/test/test___all__.py", line 10, in check_all exec "import %s" % modname in names File "
                              
                              ", line 1, in ? File "c:\code\python\dist\src\lib\getpass.py", line 106, in ? import termios NameError: Case mismatch for module name termios (filename c:\code\python\dist\src\lib\TERMIOS.py) C:\Code\python\dist\src\PCbuild> From tommy@ilm.com Tue Feb 27 23:22:16 2001 From: tommy@ilm.com (Flying Cougar Burnette) Date: Tue, 27 Feb 2001 15:22:16 -0800 (PST) Subject: [Python-Dev] to whoever made the termios changes... Message-ID: <15004.13862.351574.668648@mace.lucasdigital.com> I've already deleted the check-in mail and forgot who it was! Hopefully you're listening... (Fred, maybe?) I just did a cvs update and am no getting this when compiling on irix65: cc -O -OPT:Olimit=0 -I. -I/usr/u0/tommy/pycvs/python/dist/src/./Include -IInclude/ -I/usr/local/include -c /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c -o build/temp.irix-6.5-2.1/termios.o cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 320 The identifier "B230400" is undefined. {"B230400", B230400}, ^ cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 321 The identifier "CBAUDEX" is undefined. {"CBAUDEX", CBAUDEX}, ^ cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 399 The identifier "CRTSCTS" is undefined. {"CRTSCTS", CRTSCTS}, ^ cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 432 The identifier "VSWTC" is undefined. {"VSWTC", VSWTC}, ^ 4 errors detected in the compilation of "/usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c". time for an #ifdef? From jeremy@alum.mit.edu Tue Feb 27 23:27:30 2001 From: jeremy@alum.mit.edu (Jeremy Hylton) Date: Tue, 27 Feb 2001 18:27:30 -0500 (EST) Subject: [Python-Dev] one more restriction for from __future__ import ... In-Reply-To: <200102271006.FAA18760@cj20424-a.reston1.va.home.com> References: <15004.7763.994510.90567@w221.z064000254.bwi-md.dsl.cnc.net> <200102271006.FAA18760@cj20424-a.reston1.va.home.com> Message-ID: <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "GvR" == Guido van Rossum 
                              
                              writes: >> I would like to add another restriction: >> >> A future_statement must appear on a line by itself. It is not >> legal to combine a future_statement without any other statement >> using a semicolon. >> >> It would be a bear to implement error handling for cases like >> this: >> >> from __future__ import a; import b; from __future__ import c GvR> Really?!? Why? Isn't it straightforward to check that GvR> everything you encounter in a left-to-right leaf scan of the GvR> parse tree is either a future statement or a docstring until GvR> you encounter a non-future? It's not hard to find legal future statements. It's hard to find illegal ones. The pass to find future statements exits as soon as it finds something that isn't a doc string or a future. The symbol table pass detects illegal future statements by comparing the current line number against the line number of the last legal futre statement. If a mixture of legal and illegal future statements occurs on the same line, that test fails. If I want to be more precise, I can think of a couple of ways to figure out if a particular future statement occurs after the first non-import statement. Neither is particularly pretty because the parse tree is so deep by the time you get to the import statement. One possibility is to record the index of each small_stmt that occurs as a child of a simple_stmt in the symbol table. The future statement pass can record the offset of the first non-legal small_stmt when it occurs as part of an extend simple_stmt. The symbol table would also need to record the current index of each small_stmt. To implement this, I've got to touch a lot of code. The other possibility is to record the address for the first statement following the last legal future statement. The symbol table pass could test each node it visits and set a flag when this node is visited a second time. Any future statement found when the flag is set is an error. I'm concerned that it will be difficult to guarantee that this node is always checked, because the loop that walks the tree frequently dispatches to helper functions. I think each helper function would need to test. Do you have any other ideas? I haven't though about this for more than 20 minutes and was hoping to avoid more time invested on the matter. If it's a problem for Jython, though, we'll need to figure something out. Perhaps the effect of multiple future statements on a single line could be undefined, which would allow Python to raise an error and Jython to ignore the error. Not ideal, but expedient. Jeremy From ping@lfw.org Tue Feb 27 23:34:17 2001 From: ping@lfw.org (Ka-Ping Yee) Date: Tue, 27 Feb 2001 15:34:17 -0800 (PST) Subject: [Python-Dev] pydoc for 2.1b1? In-Reply-To: <200102270208.VAA01410@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              On Mon, 26 Feb 2001, Guido van Rossum wrote: > > No, it's being accepted because your name is Ping. Hooray! Thank you, Guido. :) > Now let's paaaartaaaay! You said it, brother. Welcome to the Year of the Snake. -- ?!ng From skip@mojam.com (Skip Montanaro) Tue Feb 27 23:39:02 2001 From: skip@mojam.com (Skip Montanaro) (Skip Montanaro) Date: Tue, 27 Feb 2001 17:39:02 -0600 (CST) Subject: [Python-Dev] More std test breakage In-Reply-To: 
                              
                              References: 
                              
                              Message-ID: <15004.14998.720791.657513@beluga.mojam.com> Tim> test_inspect.py still failing under -O; probably all platforms. Tim> New failure in test___all__.py, *possibly* specific to Windows, but Tim> I don't see any "termios.py" anywhere so hard to believe it could Tim> be working anywhere else either: ... NameError: Case mismatch for module name termios (filename c:\code\python\dist\src\lib\TERMIOS.py) Try cvs update. Lib/getpass.py shouldn't be trying to import TERMIOS anymore. The case mismatch you're seeing is because it can find the now defunct TERMIOS.py module but you obviously don't have the termios module. Skip From skip@mojam.com (Skip Montanaro) Tue Feb 27 23:48:04 2001 From: skip@mojam.com (Skip Montanaro) (Skip Montanaro) Date: Tue, 27 Feb 2001 17:48:04 -0600 (CST) Subject: [Python-Dev] one more restriction for from __future__ import ... In-Reply-To: <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> References: <15004.7763.994510.90567@w221.z064000254.bwi-md.dsl.cnc.net> <200102271006.FAA18760@cj20424-a.reston1.va.home.com> <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <15004.15540.643665.504819@beluga.mojam.com> Jeremy> The symbol table pass detects illegal future statements by Jeremy> comparing the current line number against the line number of the Jeremy> last legal futre statement. Why not just add a flag (default false at the start of the compilation) to the compiling struct that tells you if you've seen a future-killer statement already? Then if you see a future statement the compiler can whine. Skip From skip@mojam.com (Skip Montanaro) Tue Feb 27 23:56:47 2001 From: skip@mojam.com (Skip Montanaro) (Skip Montanaro) Date: Tue, 27 Feb 2001 17:56:47 -0600 (CST) Subject: [Python-Dev] test_symtable failing on Linux Message-ID: <15004.16063.325105.836576@beluga.mojam.com> test_symtable is failing for me: % ./python ../Lib/test/test_symtable.py Traceback (most recent call last): File "../Lib/test/test_symtable.py", line 7, in ? verify(symbols[0].name == "global") TypeError: unsubscriptable object Just cvs up'd about ten minutes ago. Skip From jeremy@alum.mit.edu Tue Feb 27 23:50:30 2001 From: jeremy@alum.mit.edu (Jeremy Hylton) Date: Tue, 27 Feb 2001 18:50:30 -0500 (EST) Subject: [Python-Dev] one more restriction for from __future__ import ... In-Reply-To: <15004.15540.643665.504819@beluga.mojam.com> References: <15004.7763.994510.90567@w221.z064000254.bwi-md.dsl.cnc.net> <200102271006.FAA18760@cj20424-a.reston1.va.home.com> <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> <15004.15540.643665.504819@beluga.mojam.com> Message-ID: <15004.15686.104843.418585@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "SM" == Skip Montanaro 
                              
                              writes: Jeremy> The symbol table pass detects illegal future statements by Jeremy> comparing the current line number against the line number of Jeremy> the last legal futre statement. SM> Why not just add a flag (default false at the start of the SM> compilation) to the compiling struct that tells you if you've SM> seen a future-killer statement already? Then if you see a SM> future statement the compiler can whine. Almost everything is a future-killer statement, only doc strings and other future statements are allowed. I would have to add a st->st_future_killed = 1 for almost every node type. There are also a number of nodes (about ten) that can contain future statements or doc strings or future killers. As a result, I'd have to add special cases for them, too. Jeremy From jeremy@alum.mit.edu Tue Feb 27 23:51:37 2001 From: jeremy@alum.mit.edu (Jeremy Hylton) Date: Tue, 27 Feb 2001 18:51:37 -0500 (EST) Subject: [Python-Dev] test_symtable failing on Linux In-Reply-To: <15004.16063.325105.836576@beluga.mojam.com> References: <15004.16063.325105.836576@beluga.mojam.com> Message-ID: <15004.15753.795849.695997@w221.z064000254.bwi-md.dsl.cnc.net> This is a problem I don't know how to resolve; perhaps Andrew or Neil can. _symtablemodule.so depends on symtable.h, but setup.py doesn't know that. If you rebuild the .so, it should work. third-person-to-hit-this-problem-ly y'rs, Jeremy From greg@cosc.canterbury.ac.nz Wed Feb 28 00:01:53 2001 From: greg@cosc.canterbury.ac.nz (Greg Ewing) Date: Wed, 28 Feb 2001 13:01:53 +1300 (NZDT) Subject: [Python-Dev] one more restriction for from __future__ import ... In-Reply-To: <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <200102280001.NAA02075@s454.cosc.canterbury.ac.nz> > The pass to find future statements exits as soon as it > finds something that isn't a doc string or a future. Well, don't do that, then. Have the find_future_statements pass keep going and look for *illegal* future statements as well. Then, subsequent passes can just ignore any import that looks like a future statement, because it will already have been either processed or reported as an error. Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg@cosc.canterbury.ac.nz +--------------------------------------+ From sdm7g@virginia.edu Wed Feb 28 00:03:56 2001 From: sdm7g@virginia.edu (Steven D. Majewski) Date: Tue, 27 Feb 2001 19:03:56 -0500 (EST) Subject: [Python-Dev] Case-sensitive import In-Reply-To: 
                              
                              Message-ID: 
                              
                              On Tue, 27 Feb 2001, Tim Peters wrote: > I don't like the new MatchFilename, because it triggers on *all* platforms > that #define HAVE_DIRENT_H. I mentioned this when I originally submitted the patch. The intent was that it be *able* to compile on any unix-like platform -- in particular, I was thinking LinuxPPC was the other case I could think of where someone might want to use a HFS+ filesystem - but that Darwin/MacOSX was likely to be the only system in which that was the default. > Anyone, doesn't that trigger on straight Linux systems too (all I know is > that it's part of the Single UNIX Specification)? Yes. Barry added the _DIRENT_HAVE_D_NAMELINE to handle a difference in the linux dirent structs. ( I'm not sure if he caught my initial statement of intent either, but then the discussion veered into whether the patch should have been accepted at all, and then into the discussion of a general solution... ) I'm not happy with the ineffeciency either, but, as I noted, I didn't expect that it would be enabled by default elsewhere when I submitted it. ( And my goal for OSX was just to have a version that builds and doesn't crash much, so searching for a more effecient solution was going to be the next project. ) > Would rather dump MatchFilename and rewrite in terms of the old check_case > (which should run much quicker, and already comes in several appropriate > platform-aware versions -- and I clearly minimize the chance of breakage if I > stick to that time-tested code). The reason I started from scratch, you might recall, was that before I understood that the Windows semantics was intended to be different, I tried adding a Mac version of check_case, and it didn't do what I wanted. But that wasn't a problem with any of the existing check_case functions, but was due to how check_case was used. > Steven, there is a "#ifdef macintosh" version of check_case already. Will > that or won't that work correctly on your variant of Mac? If not, would you > please supply a version that does (along with the #ifdef'ery needed to > recognize your Mac variant)? One problem is that I'm aiming for a version that would work on both the open source Darwin distribution ( which is mach + BSD + some Apple extensions: Objective-C, CoreFoundation, Frameworks, ... but not most of the macosx Carbon and Cocoa libraries. ) and the full MacOSX. Thus the reason for a unix only implementation -- the info may be more easily available via MacOS FSSpec's but that's not available on vanilla Darwin. ( And I can't, for the life of me, thing of an effecient unix implementation -- UNIX file system API's + HFS+ filesystem semantics may be an unfortunate mixture! ) In other words: I can rename the current version to check_case and fix the args to match. (Although, I recall that the args to check_case were rather more awkward to handle, but I'll have to look again. ) It also probably shouldn't be "#ifdef macintosh" either, but that's a thread in itself! > Steven and Jack, does getenv() work on both your flavors of Mac? I want to > make PYTHONCASEOK work for you too. getenv() works on OSX (it's the BSD unix implementation). ( I *think* that Jack has the MacPython get the variables from Pythoprefs file settings. ) -- Steve Majewski From guido@digicool.com Tue Feb 27 12:12:18 2001 From: guido@digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 07:12:18 -0500 Subject: [Python-Dev] Re: Patch uploads broken In-Reply-To: Your message of "Tue, 27 Feb 2001 17:34:06 EST." 
                              
                              References: 
                              
                              Message-ID: <200102271212.HAA19298@cj20424-a.reston1.va.home.com> > If you want to try it, feel free to try attaching a file to bug #404680: > https://sourceforge.net/tracker/index.php?func=detail&aid=404680&group_id=5470&atid=305470 > ) > > The SF admin request for this problem is at > http://sourceforge.net/tracker/?func=detail&atid=100001&aid=404688&group_id=1, > but it's better if I collect the results and summarize them in a > single comment. My conclusion: the file upload is refused iff the comment is empty -- in other words the complaint about an empty comment is coded wrongly and should only occur when the comment is empty *and* no file is uploaded. Or maybe they want you to add a comment with your file -- that's fine too, but the error isn't very clear. http or https made no difference. I used NS 4.72 on Linux; Tim used IE and had the same results. --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one@home.com Wed Feb 28 00:06:55 2001 From: tim.one@home.com (Tim Peters) Date: Tue, 27 Feb 2001 19:06:55 -0500 Subject: [Python-Dev] More std test breakage In-Reply-To: <15004.14998.720791.657513@beluga.mojam.com> Message-ID: 
                              
                              > Try cvs update. Already had. > Lib/getpass.py shouldn't be trying to import TERMIOS anymore. It isn't. It's trying to import (lowercase) termios. > The case mismatch you're seeing is because it can find the now defunct > TERMIOS.py module but you obviously don't have the termios module. Indeed I do not. Ah. But it *used* to import (uppercase) TERMIOS. That makes this a Windows thing: when it tries to import termios, it still *finds* TERMIOS.py, and on Windows that raises a NameError (instead of the ImportError you'd hope to get, if you *had* to get any error at all out of mismatching case). So this should go away, and get replaced by an ImportError, when I check in the "case-sensitive import" patch for Windows. Thanks for the nudge! From guido@digicool.com Tue Feb 27 12:21:11 2001 From: guido@digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 07:21:11 -0500 Subject: [Python-Dev] More std test breakage In-Reply-To: Your message of "Tue, 27 Feb 2001 18:06:59 EST." 
                              
                              References: 
                              
                              Message-ID: <200102271221.HAA19394@cj20424-a.reston1.va.home.com> > New failure in test___all__.py, *possibly* specific to Windows, but I don't > see any "termios.py" anywhere so hard to believe it could be working anywhere > else either: > > C:\Code\python\dist\src\PCbuild>python ../lib/test/test___all__.py > Traceback (most recent call last): > File "../lib/test/test___all__.py", line 78, in ? > check_all("getpass") > File "../lib/test/test___all__.py", line 10, in check_all > exec "import %s" % modname in names > File "
                              
                              ", line 1, in ? > File "c:\code\python\dist\src\lib\getpass.py", line 106, in ? > import termios > NameError: Case mismatch for module name termios > (filename c:\code\python\dist\src\lib\TERMIOS.py) > > C:\Code\python\dist\src\PCbuild> Easy. There used to be a built-in termios on Unix only, and 12 different platform-specific copies of TERMIOS.py, on Unix only. We're phasing TERMIOS.py out, mocing all the symbols into termios, and as part of that we chose to remove all the platform-dependent TERMIOS.py files with a single one, in Lib, that imports the symbols from termios, for b/w compatibility. But the code that tries to see if termios exists only catches ImportError, not NameError. You can add NameError to the except clause in getpass.py, or you can proceed with your fix to the case-sensitive imports. :-) --Guido van Rossum (home page: http://www.python.org/~guido/) From jeremy@alum.mit.edu Wed Feb 28 00:13:42 2001 From: jeremy@alum.mit.edu (Jeremy Hylton) Date: Tue, 27 Feb 2001 19:13:42 -0500 (EST) Subject: [Python-Dev] one more restriction for from __future__ import ... In-Reply-To: <200102280001.NAA02075@s454.cosc.canterbury.ac.nz> References: <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> <200102280001.NAA02075@s454.cosc.canterbury.ac.nz> Message-ID: <15004.17078.793539.226783@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "GE" == Greg Ewing 
                              
                              writes: >> The pass to find future statements exits as soon as it finds >> something that isn't a doc string or a future. GE> Well, don't do that, then. Have the find_future_statements pass GE> keep going and look for *illegal* future statements as well. GE> Then, subsequent passes can just ignore any import that looks GE> like a future statement, because it will already have been GE> either processed or reported as an error. I like this idea best so far. Jeremy From guido@digicool.com Wed Feb 28 00:24:47 2001 From: guido@digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 19:24:47 -0500 Subject: [Python-Dev] to whoever made the termios changes... In-Reply-To: Your message of "Tue, 27 Feb 2001 15:22:16 PST." <15004.13862.351574.668648@mace.lucasdigital.com> References: <15004.13862.351574.668648@mace.lucasdigital.com> Message-ID: <200102280024.TAA19492@cj20424-a.reston1.va.home.com> > I've already deleted the check-in mail and forgot who it was! > Hopefully you're listening... (Fred, maybe?) Yes, Fred. > I just did a cvs update and am no getting this when compiling on > irix65: > > cc -O -OPT:Olimit=0 -I. -I/usr/u0/tommy/pycvs/python/dist/src/./Include -IInclude/ -I/usr/local/include -c /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c -o build/temp.irix-6.5-2.1/termios.o > cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 320 > The identifier "B230400" is undefined. > > {"B230400", B230400}, > ^ > > cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 321 > The identifier "CBAUDEX" is undefined. > > {"CBAUDEX", CBAUDEX}, > ^ > > cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 399 > The identifier "CRTSCTS" is undefined. > > {"CRTSCTS", CRTSCTS}, > ^ > > cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 432 > The identifier "VSWTC" is undefined. > > {"VSWTC", VSWTC}, > ^ > > 4 errors detected in the compilation of "/usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c". > > > > time for an #ifdef? Definitely. At least these 4; maybe for every stupid symbol we're adding... --Guido van Rossum (home page: http://www.python.org/~guido/) From guido@digicool.com Wed Feb 28 00:29:44 2001 From: guido@digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 19:29:44 -0500 Subject: [Python-Dev] one more restriction for from __future__ import ... In-Reply-To: Your message of "Tue, 27 Feb 2001 18:27:30 EST." <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> References: <15004.7763.994510.90567@w221.z064000254.bwi-md.dsl.cnc.net> <200102271006.FAA18760@cj20424-a.reston1.va.home.com> <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <200102280029.TAA19538@cj20424-a.reston1.va.home.com> > >> It would be a bear to implement error handling for cases like > >> this: > >> > >> from __future__ import a; import b; from __future__ import c > > GvR> Really?!? Why? Isn't it straightforward to check that > GvR> everything you encounter in a left-to-right leaf scan of the > GvR> parse tree is either a future statement or a docstring until > GvR> you encounter a non-future? > > It's not hard to find legal future statements. It's hard to find > illegal ones. The pass to find future statements exits as soon as it > finds something that isn't a doc string or a future. 
The symbol table > pass detects illegal future statements by comparing the current line > number against the line number of the last legal futre statement. Aha. That's what I missed -- comparison by line number. One thing you could do would simply be check the entire current simple_statement, which would catch the above example; the possibilities are limited at that level (no blocks can start on the same line after an import). --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one@home.com Wed Feb 28 00:34:32 2001 From: tim.one@home.com (Tim Peters) Date: Tue, 27 Feb 2001 19:34:32 -0500 Subject: [Python-Dev] Case-sensitive import In-Reply-To: 
                              
                              Message-ID: 
                              
                              [Steven D. Majewski] > ... > The intent was that it be *able* to compile on any unix-like platform -- > in particular, I was thinking LinuxPPC was the other case I could > think of where someone might want to use a HFS+ filesystem - but > that Darwin/MacOSX was likely to be the only system in which that was > the default. I don't care about LinuxPPC right now. When someone steps up to champion that platform, they can deal with it then. What I am interested in is supporting the platforms we *do* have warm bodies looking at, and not regressing on any of them. I'm surprised nobody on Linux already screamed. >> Anyone, doesn't that trigger on straight Linux systems too (all I know is >> that it's part of the Single UNIX Specification)? > Yes. Barry added the _DIRENT_HAVE_D_NAMELINE to handle a difference in > the linux dirent structs. ( I'm not sure if he caught my initial > statement of intent either, but then the discussion veered into whether > the patch should have been accepted at all, and then into the discussion > of a general solution... ) > > I'm not happy with the ineffeciency either, but, as I noted, I didn't > expect that it would be enabled by default elsewhere when I submitted > it. I expect it's enabled everywhere the #ifdef's in the patch enabled it 
                              
                              . But I don't care about the past either, I want to straighten it out *now*. > ( And my goal for OSX was just to have a version that builds and > doesn't crash much, so searching for a more effecient solution was > going to be the next project. ) Then this is the right time. Play along by pretending that OSX is the special case that it is <0.9 wink>. > ... > The reason I started from scratch, you might recall, was that before I > understood that the Windows semantics was intended to be different, I > tried adding a Mac version of check_case, and it didn't do what I wanted. > But that wasn't a problem with any of the existing check_case functions, > but was due to how check_case was used. check_case will be used differently now. > ... > One problem is that I'm aiming for a version that would work on both > the open source Darwin distribution ( which is mach + BSD + some Apple > extensions: Objective-C, CoreFoundation, Frameworks, ... but not most > of the macosx Carbon and Cocoa libraries. ) and the full MacOSX. > Thus the reason for a unix only implementation -- the info may be > more easily available via MacOS FSSpec's but that's not available > on vanilla Darwin. ( And I can't, for the life of me, thing of an > effecient unix implementation -- UNIX file system API's + HFS+ filesystem > semantics may be an unfortunate mixture! ) Please just solve the problem for the platforms you're actually running on -- case-insensitive filesystems are not "Unix only" in any meaningful sense of that phrase, and each not-really-Unix platform is likely to have its own stupid gimmicks for worming around this problem anyway. For example, Cygwin defers to the Windows API. Great! That solves the problem there. Generalization is premature. > In other words: I can rename the current version to check_case and > fix the args to match. (Although, I recall that the args to check_case > were rather more awkward to handle, but I'll have to look again. ) Good! I'm not going to wait for that, though. I desperately need a nap, but when I get up I'll check in changes that should be sufficient for the Windows and Cygwin parts of this, without regressing on other platforms. We'll then have to figure out whatever #ifdef'ery is needed for your platform(s). > getenv() works on OSX (it's the BSD unix implementation). So it's *kind* of like Unix after all 
                              
. > ( I *think* that Jack has the MacPython get the variables from Pythoprefs > file settings. ) Haven't heard from him, but getenv() is used freely in the Python codebase elsewhere, so I figure he's got *some* way to fake it. So I'm not worried about that anymore (until Jack screams about it). From guido@digicool.com Wed Feb 28 00:35:07 2001 From: guido@digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 19:35:07 -0500 Subject: [Python-Dev] test_symtable failing on Linux In-Reply-To: Your message of "Tue, 27 Feb 2001 18:51:37 EST." <15004.15753.795849.695997@w221.z064000254.bwi-md.dsl.cnc.net> References: <15004.16063.325105.836576@beluga.mojam.com> <15004.15753.795849.695997@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <200102280035.TAA19590@cj20424-a.reston1.va.home.com> > This is a problem I don't know how to resolve; perhaps Andrew or Neil > can. _symtablemodule.so depends on symtable.h, but setup.py doesn't > know that. If you rebuild the .so, it should work. Maybe this module shouldn't be built by setup.py; it could be added to Modules/Setup.dist (all the mechanism there still works, it just isn't used for most modules; but some are still there: posix, _sre). Then you can add a regular dependency for it to the regular Makefile. This is a general weakness of setup.py, but it rarely causes a problem because the standard Python headers are pretty stable. --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one@home.com Wed Feb 28 00:38:15 2001 From: tim.one@home.com (Tim Peters) Date: Tue, 27 Feb 2001 19:38:15 -0500 Subject: [Python-Dev] Re: Patch uploads broken In-Reply-To: <200102271212.HAA19298@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              [Guido] > My conclusion: the file upload is refused iff the comment is empty -- > in other words the complaint about an empty comment is coded wrongly > and should only occur when the comment is empty *and* no file is > uploaded. Or maybe they want you to add a comment with your file -- > that's fine too, but the error isn't very clear. > > http or https made no difference. I used NS 4.72 on Linux; Tim used > IE and had the same results. BTW, this may be more pervasive: I recall that in the wee hours, I kept getting "ERROR: nothing changed" rejections when I was just trying to clean up old reports via doing nothing but changing the assigned-to (for example) dropdown list value. Perhaps they want a comment with every change of any kind now? From guido@digicool.com Wed Feb 28 00:46:14 2001 From: guido@digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 19:46:14 -0500 Subject: [Python-Dev] Re: Patch uploads broken In-Reply-To: Your message of "Tue, 27 Feb 2001 19:38:15 EST." 
                              
                              References: 
                              
                              Message-ID: <200102280046.TAA19712@cj20424-a.reston1.va.home.com> > BTW, this may be more pervasive: I recall that in the wee hours, I kept > getting "ERROR: nothing changed" rejections when I was just trying to clean > up old reports via doing nothing but changing the assigned-to (for example) > dropdown list value. Perhaps they want a comment with every change of any > kind now? Which in itself is not a bad policy. But the error sucks. --Guido van Rossum (home page: http://www.python.org/~guido/) From sdm7g@virginia.edu Wed Feb 28 01:59:56 2001 From: sdm7g@virginia.edu (Steven D. Majewski) Date: Tue, 27 Feb 2001 20:59:56 -0500 (EST) Subject: [Python-Dev] Case-sensitive import In-Reply-To: 
                              
                              Message-ID: 
                              
                              On Tue, 27 Feb 2001, Tim Peters wrote: > Please just solve the problem for the platforms you're actually running on -- > case-insensitive filesystems are not "Unix only" in any meaningful sense of > that phrase, and each not-really-Unix platform is likely to have its own > stupid gimmicks for worming around this problem anyway. For example, Cygwin > defers to the Windows API. Great! That solves the problem there. > Generalization is premature. This isn't an attempt at abstract theorizing: I'm running Darwin with and without MacOSX on top, as well as MkLinux, LinuxPPC, and of course, various versions of "Classic" MacOS on various machines. I would gladly drop the others for MacOSX, but OSX won't run on all of the older machines. I'm hoping those machines will get replaced before I actually have to support all of those flavors, so I'm not trying to bend over backwards to be portable, but I'm also trying not to shoot myself in the foot by being overly un-general! It's not, for me, being any more premature than you wondering if the VMS users will scream at the changes. ( Although, in both cases, I think it's reasonable to say: "I thought about it -- now here's what we're going to do anyway!" I suspect that folks running Darwin on Intel are using UFS and don't want the overhead either, but I'm not even trying to generalize to them yet! ) > > In other words: I can rename the current version to check_case and > > fix the args to match. (Although, I recall that the args to check_case > > were rather more awkward to handle, but I'll have to look again. ) > > Good! I'm not going to wait for that, though. I desperately need a nap, but > when I get up I'll check in changes that should be sufficient for the Windows > and Cygwin parts of this, without regressing on other platforms. We'll then > have to figure out whatever #ifdef'ery is needed for your platform(s). __MACH__ is predefined, meaning mach system calls are supported, and __APPLE__ is predefined -- I think it means it's Apple's compiler. So: #if defined(__MACH__) && defined(__APPLE__) ought to uniquely identify Darwin, at least until Apple does another OS. ( Maybe it would be cleaner to have config add -DDarwin switches -- or if you want to get general -D$MACHDEP -- except that I don't think all the values of MACHDEP will parse as symbols. ) -- Steve Majewski From sdm7g@virginia.edu Wed Feb 28 02:16:36 2001 From: sdm7g@virginia.edu (Steven D. Majewski) Date: Tue, 27 Feb 2001 21:16:36 -0500 (EST) Subject: [Python-Dev] Case-sensitive import In-Reply-To: 
                              
                              Message-ID: 
                              
                              On Tue, 27 Feb 2001, Tim Peters wrote: > > check_case will be used differently now. > If check_case will be used differently, then why not just use "#ifdef CHECK_IMPORT_CASE" as the switch? -- Steve Majewski From Jason.Tishler@dothill.com Wed Feb 28 03:32:16 2001 From: Jason.Tishler@dothill.com (Jason Tishler) Date: Tue, 27 Feb 2001 22:32:16 -0500 Subject: [Python-Dev] Re: Case-sensitive import In-Reply-To: 
                              
                              ; from tim.one@home.com on Tue, Feb 27, 2001 at 02:27:12PM -0500 References: 
                              
                              Message-ID: <20010227223216.C252@dothill.com> Tim, On Tue, Feb 27, 2001 at 02:27:12PM -0500, Tim Peters wrote: > Jason, I *assume* that the existing "#if defined(MS_WIN32) || > defined(__CYGWIN__)" version of check_case works already for you. Scream if > that's wrong. I guess it depends on what you mean by "works." When I submitted my patch to enable case-sensitive imports for Cygwin, I mistakenly thought that I was solving import problems such as "import TERMIOS, termios". Unfortunately, I was only enabling the (old) Win32 "Case mismatch for module name foo" code for Cygwin too. Subsequently, there have been changes to Cygwin gcc that may make it difficult (i.e., require non-standard -I options) to find Win32 header files like "windows.h". So from an ease of building point of view, it would be better to stick with POSIX calls and avoid direct Win32 ones. Unfortunately, from an efficiency point of view, it sounds like this is unavoidable. I would like to test your patch with both Cygwin gcc 2.95.2-6 (i.e., Win32 friendly) and 2.95.2-7 (i.e., Unix bigot). Please let me know when it's ready. Thanks, Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler@dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From Jason.Tishler@dothill.com Wed Feb 28 04:01:51 2001 From: Jason.Tishler@dothill.com (Jason Tishler) Date: Tue, 27 Feb 2001 23:01:51 -0500 Subject: [Python-Dev] Re: Patch uploads broken In-Reply-To: 
                              
                              ; from akuchlin@mems-exchange.org on Tue, Feb 27, 2001 at 05:34:06PM -0500 References: 
                              
Message-ID: <20010227230151.D252@dothill.com> On Tue, Feb 27, 2001 at 05:34:06PM -0500, Andrew Kuchling wrote: > The SourceForge admins couldn't replicate the patch upload problem, > and managed to attach a file to the Python bug report in question, yet > when I try it, it still fails for me. So, a question for this list: > has uploading patches or other files been working for you recently, > particularly today? Maybe with more data, we can see a pattern > (browser version, SSL/non-SSL, cluefulness of user, ...). I still can't upload patch files (even though I always supply a comment). Specifically, I'm getting the following error message in red at the top of the page after pressing the "Submit Changes" button: ArtifactFile: File name, type, size, and data are RequiredSuccessfully Updated FWIW, I'm using Netscape 4.72 on Windows. Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler@dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From tim.one@home.com Wed Feb 28 04:08:05 2001 From: tim.one@home.com (Tim Peters) Date: Tue, 27 Feb 2001 23:08:05 -0500 Subject: [Python-Dev] Case-sensitive import In-Reply-To: 
                              
                              Message-ID: 
                              
                              >> check_case will be used differently now. [Steven] > If check_case will be used differently, then why not just use > "#ifdef CHECK_IMPORT_CASE" as the switch? Sorry, I don't understand what you have in mind. In my mind, CHECK_IMPORT_CASE goes away, since we're attempting to get the same semantics on all platforms, and a yes/no #define doesn't carry enough info to accomplish that. From tim.one@home.com Wed Feb 28 04:29:33 2001 From: tim.one@home.com (Tim Peters) Date: Tue, 27 Feb 2001 23:29:33 -0500 Subject: [Python-Dev] RE: Case-sensitive import In-Reply-To: <20010227223216.C252@dothill.com> Message-ID: 
                              
                              [Tim] >> Jason, I *assume* that the existing "#if defined(MS_WIN32) || >> defined(__CYGWIN__)" version of check_case works already for >> you. Scream if that's wrong. [Jason] > I guess it depends on what you mean by "works." I meant that independent of errors you don't want to see, and independent of the allcaps8x3 silliness, check_case returns 1 if there's a case-sensitive match and 0 if not. > When I submitted my patch to enable case-sensitive imports for Cygwin, > I mistakenly thought that I was solving import problems such as "import > TERMIOS, termios". Unfortunately, I was only enabling the (old) Win32 > "Case mismatch for module name foo" code for Cygwin too. Then if you succeeded in enabling that, "it works" in the sense I meant. My intent is to stop the errors, take away the allcaps8x3 stuff, and change the *calling* code to just keep going when check_case returns 0. > Subsequently, there have been changes to Cygwin gcc that may make it > difficult (i.e., require non-standard -I options) to find Win32 header > files like "windows.h". So from an ease of building point of view, it > would be better to stick with POSIX calls and avoid direct Win32 ones. > Unfortunately, from an efficiency point of view, it sounds like this is > unavoidable. > > I would like to test your patch with both Cygwin gcc 2.95.2-6 (i.e., > Win32 friendly) and 2.95.2-7 (i.e., Unix bigot). Please let me know > when it's ready. Not terribly long after I get to stop writing email <0.9 wink>. But since the only platform I can test here is plain Windows, and Cygwin and sundry Mac variations appear to be moving targets, once it works on Windows I'm just going to check it in. You and Steven will then have to figure out what you need to do on your platforms. OK by me if you two recreate the HAVE_DIRENT_H stuff, but (a) not if Linux takes that path too; and, (b) if Cygwin ends up using that, please get rid of the Cygwin-specific tricks in the plain Windows case (this module is already one of the hardest to maintain, and having random pieces of #ifdef'ed code in it that will never be used hurts). From barry@digicool.com Wed Feb 28 05:05:30 2001 From: barry@digicool.com (Barry A. Warsaw) Date: Wed, 28 Feb 2001 00:05:30 -0500 Subject: [Python-Dev] Case-sensitive import References: 
                              
                              
                              Message-ID: <15004.34586.744058.938851@anthem.wooz.org> >>>>> "SDM" == Steven D Majewski 
                              
                              writes: SDM> Yes. Barry added the _DIRENT_HAVE_D_NAMELINE to handle a SDM> difference in the linux dirent structs. Actually, my Linux distro's dirent.h has almost the same test on _DIRENT_HAVE_D_NAMLEN (sic) -- which looking again now at import.c it's obvious I misspelled it! Tim, if you clean this code up and decide to continue to use the d_namlen slot, please fix the macro test. -Barry From akuchlin@mems-exchange.org Wed Feb 28 05:21:54 2001 From: akuchlin@mems-exchange.org (Andrew Kuchling) Date: Wed, 28 Feb 2001 00:21:54 -0500 Subject: [Python-Dev] Re: Patch uploads broken In-Reply-To: <20010227230151.D252@dothill.com>; from Jason.Tishler@dothill.com on Tue, Feb 27, 2001 at 11:01:51PM -0500 References: 
                              
<20010227230151.D252@dothill.com> Message-ID: <20010228002154.A16737@newcnri.cnri.reston.va.us> On Tue, Feb 27, 2001 at 11:01:51PM -0500, Jason Tishler wrote: >I still can't upload patch files (even though I always supply a comment). >Specifically, I'm getting the following error message in red at the top >of the page after pressing the "Submit Changes" button: Same here. It's not from leaving the comment field empty (I got the error message too and figured out what it meant); instead I can fill in a comment, select a file, and upload it. The comment shows up; the attachment doesn't (using NS4.75 on Linux). Anyone got other failures to report? --amk From jeremy@alum.mit.edu Wed Feb 28 05:28:08 2001 From: jeremy@alum.mit.edu (Jeremy Hylton) Date: Wed, 28 Feb 2001 00:28:08 -0500 (EST) Subject: [Python-Dev] puzzled about old checkin to pythonrun.c Message-ID: <15004.35944.494314.814348@w221.z064000254.bwi-md.dsl.cnc.net> Fred, You made a change to the syntax error generation code last August. I don't understand what the code is doing. It appears that the code you added is redundant, but it's hard to tell for sure because responsibility for generating well-formed SyntaxErrors is spread across several files. The code you added in pythonrun.c, line 1084, in err_input(), starts with the test (v != NULL):

    w = Py_BuildValue("(sO)", msg, v);
    PyErr_SetObject(errtype, w);
    Py_XDECREF(w);
    if (v != NULL) {
        PyObject *exc, *tb;

        PyErr_Fetch(&errtype, &exc, &tb);
        PyErr_NormalizeException(&errtype, &exc, &tb);
        if (PyObject_SetAttrString(exc, "filename", PyTuple_GET_ITEM(v, 0)))
            PyErr_Clear();
        if (PyObject_SetAttrString(exc, "lineno", PyTuple_GET_ITEM(v, 1)))
            PyErr_Clear();
        if (PyObject_SetAttrString(exc, "offset", PyTuple_GET_ITEM(v, 2)))
            PyErr_Clear();
        Py_DECREF(v);
        PyErr_Restore(errtype, exc, tb);
    }

What's weird about this code is that the __init__ code for a SyntaxError (all errors will be SyntaxErrors at this point) sets filename, lineno, and offset. Each of the values is passed to the constructor as the tuple v; then the new code gets the items out of the tuple and sets them explicitly. You also made a bunch of changes to SyntaxError__str__ at the same time. I wonder if they were sufficient to fix the bug (which has tracker aid 210628 incidentally). Can you shed any light? Jeremy From tim.one@home.com Wed Feb 28 05:48:57 2001 From: tim.one@home.com (Tim Peters) Date: Wed, 28 Feb 2001 00:48:57 -0500 Subject: [Python-Dev] Case-sensitive import In-Reply-To: 
                              
                              Message-ID: 
                              
                              Here's the checkin comment for rev 2.163 of import.c: """ Implement PEP 235: Import on Case-Insensitive Platforms. http://python.sourceforge.net/peps/pep-0235.html Renamed check_case to case_ok. Substantial code rearrangement to get this stuff in one place in the file. Innermost loop of find_module() now much simpler and #ifdef-free, and I want to keep it that way (it's bad enough that the innermost loop is itself still in an #ifdef!). Windows semantics tested and are fine. Jason, Cygwin *should* be fine if and only if what you did for check_case() before still "works". Jack, the semantics on your flavor of Mac have definitely changed (see the PEP), and need to be tested. The intent is that your flavor of Mac now work the same as everything else in the "lower left" box, including respecting PYTHONCASEOK. There is a non-zero chance that I already changed the "#ifdef macintosh" code correctly to achieve that. Steven, sorry, you did the most work here so far but you got screwed the worst. Happy to work with you on repairing it, but I don't understand anything about all your Mac variants and don't have time to learn before the beta. We need to add another branch (or two, three, ...?) inside case_ok for you. But we should not need to change anything else. """ Someone please check Linux etc too, although everything that doesn't match one of these #ifdef's: #if defined(MS_WIN32) || defined(__CYGWIN__) #elif defined(DJGPP) #elif defined(macintosh) *should* act as if the platform filesystem were case-sensitive (i.e., that if fopen() succeeds, the case must match already and so there's no need for any more work to check that). Jason, if Cygwin is broken, please coordinate with Steven since you two appear to have similar problems then. [Steven] > __MACH__ is predefined, meaning mach system calls are supported, and > __APPLE__ is predefined -- I think it means it's Apple's compiler. So: > > #if defined(__MACH__) && defined(__APPLE__) > > ought to uniquely identify Darwin, at least until Apple does another OS. > > ( Maybe it would be cleaner to have config add -DDarwin switches -- or > if you want to get general -D$MACHDEP -- except that I don't think all > the values of MACHDEP will parse as symbols. ) This is up to you. I'm sorry to have broken your old code, but Barry should not have accepted it to begin with 
                              
                              . Speaking of which, [Barry] > SDM> Yes. Barry added the _DIRENT_HAVE_D_NAMELINE to handle a > SDM> difference in the linux dirent structs. > > Actually, my Linux distro's dirent.h has almost the same test on > _DIRENT_HAVE_D_NAMLEN (sic) -- which looking again now at import.c > it's obvious I misspelled it! > > Tim, if you clean this code up and decide to continue to use the > d_namlen slot, please fix the macro test. For now, I didn't change anything in the MatchFilename function, but put the entire thing in an "#if 0" block with an "XXX" comment, to make it easy for Steven and/or Jason to get at that source if one or both decide their platforms still need something like that. If they do, I'll double-check that this #define is spelled correctly when they check in their changes; else I'll delete that block before the release. Aren't release crunches great? Afraid they're infectious <0.5 wink>. From fdrake@acm.org Wed Feb 28 06:50:28 2001 From: fdrake@acm.org (Fred L. Drake, Jr.) Date: Wed, 28 Feb 2001 01:50:28 -0500 (EST) Subject: [Python-Dev] Re: puzzled about old checkin to pythonrun.c In-Reply-To: <15004.35944.494314.814348@w221.z064000254.bwi-md.dsl.cnc.net> References: <15004.35944.494314.814348@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <15004.40884.236605.266085@cj42289-a.reston1.va.home.com> Jeremy Hylton writes: > Can you shed any light? Not at this hour -- fading fast. I'll look at it in the morning. -Fred -- Fred L. Drake, Jr. 
                              
PythonLabs at Digital Creations From moshez@zadka.site.co.il Wed Feb 28 10:43:08 2001 From: moshez@zadka.site.co.il (Moshe Zadka) Date: Wed, 28 Feb 2001 12:43:08 +0200 (IST) Subject: [Python-Dev] urllib2 and urllib Message-ID: <20010228104308.BAB5BAA6A@darjeeling.zadka.site.co.il> (Full disclosure: I've been paid to hack on urllib2) For a long time I've been feeling that urllib is a bit hackish, and not really suited to conveniently script web sites. The classic example is the interface to passwords, whose default behaviour is to stop and ask the user(!). Jeremy had urllib2 out for about a year and a half, and now that I've finally managed to have a look at it, I'm very impressed with the architecture, and I think it's superior to urllib. From the "outside" it's not that different from urllib, in that it has mainly a "urlopen" function (no urlretrieve, which I always felt was misplaced). Its configurability is much different, though, and IMHO much more pleasant. The code, however, was a bit stale, and a bit too "play-groundish". Fortunately, I've been paid to add some features to the code, and I have already added most features from urllib which weren't there, and some features that are not in urllib (for example, proxy authentication). It will still need some work to be an industrial-strength client library (e.g., client-side cookie support, referer support in redirections, support for 303 redirection), but most of these are much easier to do based on what is currently urllib2. A major misfeature of urllib2 up to now was that it was not documented. Fortunately, my client saw it as a problem too, so I have a rough sketch of a library reference chapter, and I will write a Python HOWTO before finishing with this project. There are several problems with adopting urllib2 as the new standard library for client-side writing: 1. The extension interface is not backwards compatible with urllib -- that's a real problem, because the current interface was *designed* to be different 2. The name: urllib2 is just an awful name for anything. It should be changed, with a compatibility module named "urllib2" that does a "from ... import *" from the new module. I don't have any strong feelings about the new name, as long as there are no numbers inside (<0.9 wink>) 3. Too close to beta: that's a valid concern, and it should be possible to say "newurl" is still experimental in 2.1, and make it the official module only in 2.2 This all has to do with the libraries-voting-procedure (PEP-0002), which Eric has been neglecting lately..
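
[Editor's sketch -- not part of the original thread: to illustrate the handler/opener architecture Moshe describes, where credentials and proxies are configured up front instead of urllib's interactive password prompt, here is a minimal example against the Python 2.x urllib2 API. The host, proxy, realm, and credentials are invented for illustration; only the standard urllib2 names build_opener, ProxyHandler, HTTPBasicAuthHandler, HTTPPasswordMgrWithDefaultRealm, and install_opener are assumed.]

    import urllib2

    # Invented server/proxy details, for illustration only.
    password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, "http://www.example.com/", "user", "secret")

    auth_handler = urllib2.HTTPBasicAuthHandler(password_mgr)
    proxy_handler = urllib2.ProxyHandler({"http": "http://proxy.example.com:8080/"})

    # Chain the handlers into an opener; this is the configurability
    # that urllib's plain urlopen() does not offer.
    opener = urllib2.build_opener(proxy_handler, auth_handler)
    f = opener.open("http://www.example.com/private/page.html")
    print f.read()

    # Optionally make the configured opener the default, so that later
    # urllib2.urlopen() calls pick up the same handlers.
    urllib2.install_opener(opener)
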
                              
                              
                              
                              
                              (patch number 404826) -- "I'll be ex-DPL soon anyway so I'm |LUKE: Is Perl better than Python? looking for someplace else to grab power."|YODA: No...no... no. Quicker, -- Wichert Akkerman (on debian-private)| easier, more seductive. For public key, finger moshez@debian.org |http://www.{python,debian,gnu}.org From Samuele Pedroni 
                              
                              Wed Feb 28 14:21:35 2001 From: Samuele Pedroni 
                              
(Samuele Pedroni) Date: Wed, 28 Feb 2001 15:21:35 +0100 (MET) Subject: [Python-Dev] pdb and nested scopes Message-ID: <200102281421.PAA17150@core.inf.ethz.ch> Hi. Sorry if everybody is already aware of this. I have checked the code for pdb in CVS, especially for the p cmd; maybe I'm wrong, but given the actual implementation it gives no access to the value of free or cell variables. Should that be fixed? AFAIK pdb as it is works with jython too. So when fixing that, it would be nice if this were preserved. regards, Samuele Pedroni. From jack@oratrix.nl Wed Feb 28 14:30:37 2001 From: jack@oratrix.nl (Jack Jansen) Date: Wed, 28 Feb 2001 15:30:37 +0100 Subject: [Python-Dev] Case-sensitive import In-Reply-To: Message by barry@digicool.com (Barry A. Warsaw) , Wed, 28 Feb 2001 00:05:30 -0500 , <15004.34586.744058.938851@anthem.wooz.org> Message-ID: <20010228143037.8F32D371690@snelboot.oratrix.nl> Why don't we handle this the same way as, say, PyOS_CheckStack()? I.e. if USE_CHECK_IMPORT_CASE is defined it is necessary to check the case of the imported file (i.e. it's not defined on vanilla unix, defined on most other platforms) and if it is defined we call PyOS_CheckCase(filename, modulename). All these routines can be in different files, for all I care, similar to the dynload_*.c files. -- Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++ Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++ www.oratrix.nl/~jack | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm From guido@digicool.com Wed Feb 28 15:34:52 2001 From: guido@digicool.com (Guido van Rossum) Date: Wed, 28 Feb 2001 10:34:52 -0500 Subject: [Python-Dev] pdb and nested scopes In-Reply-To: Your message of "Wed, 28 Feb 2001 15:21:35 +0100." <200102281421.PAA17150@core.inf.ethz.ch> References: <200102281421.PAA17150@core.inf.ethz.ch> Message-ID: <200102281534.KAA28532@cj20424-a.reston1.va.home.com> > Hi. > > Sorry if everybody is already aware of this. No, it's new to me. > I have checked the code for pdb in CVS, especially for the p cmd; > maybe I'm wrong, but given the actual implementation it gives no > access to the value of free or cell variables. Should that > be fixed? I think so. I've noted that the locals() function also doesn't see cell variables:

    from __future__ import nested_scopes
    import pdb

    def f():
        a = 12
        print locals()
        def g():
            print a
        g()
        a = 100
        g()
        #pdb.set_trace()

    f()

This prints

    {}
    12
    100

When I enable the pdb.set_trace() call, indeed the variable a is not found. > AFAIK pdb as it is works with jython too. So when fixing that, it would > be nice if this were preserved. Yes! --Guido van Rossum (home page: http://www.python.org/~guido/) From Jason.Tishler@dothill.com Wed Feb 28 17:02:29 2001 From: Jason.Tishler@dothill.com (Jason Tishler) Date: Wed, 28 Feb 2001 12:02:29 -0500 Subject: [Python-Dev] Re: Case-sensitive import In-Reply-To: 
                              
                              ; from tim.one@home.com on Tue, Feb 27, 2001 at 11:29:33PM -0500 References: <20010227223216.C252@dothill.com> 
                              
Message-ID: <20010228120229.M449@dothill.com> Tim, On Tue, Feb 27, 2001 at 11:29:33PM -0500, Tim Peters wrote: > Not terribly long after I get to stop writing email <0.9 wink>. But since > the only platform I can test here is plain Windows, and Cygwin and sundry Mac > variations appear to be moving targets, once it works on Windows I'm just > going to check it in. You and Steven will then have to figure out what you > need to do on your platforms. I tested your changes on Cygwin and they work correctly. Thanks very much. Unfortunately, my concerns about building due to your implementation using direct Win32 APIs were realized. This delayed my response. The current Python CVS still builds OOTB (with the exception of termios) with the current Cygwin gcc (i.e., 2.95.2-6). However, using the next Cygwin gcc (i.e., 2.95.2-8 or later) will require one to configure with: CC='gcc -mwin32' configure ... and the following minor patch be accepted: http://sourceforge.net/tracker/index.php?func=detail&aid=404928&group_id=5470&atid=305470 Thanks, Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler@dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From guido@digicool.com Wed Feb 28 17:12:05 2001 From: guido@digicool.com (Guido van Rossum) Date: Wed, 28 Feb 2001 12:12:05 -0500 Subject: [Python-Dev] Re: Case-sensitive import In-Reply-To: Your message of "Wed, 28 Feb 2001 12:02:29 EST." <20010228120229.M449@dothill.com> References: <20010227223216.C252@dothill.com> 
                              
                              <20010228120229.M449@dothill.com> Message-ID: <200102281712.MAA29568@cj20424-a.reston1.va.home.com> > and the following minor patch be accepted: > > http://sourceforge.net/tracker/index.php?func=detail&aid=404928&group_id=5470&atid=305470 That patch seems fine -- except that I'd like /F to have a quick look since it changes _sre.c. --Guido van Rossum (home page: http://www.python.org/~guido/) From fredrik@pythonware.com Wed Feb 28 17:36:09 2001 From: fredrik@pythonware.com (Fredrik Lundh) Date: Wed, 28 Feb 2001 18:36:09 +0100 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules _sre.c References: 
                              
                              Message-ID: <048b01c0a1ac$f10cf920$e46940d5@hagrid> tim indirectly wrote: > *** _sre.c 2001/01/16 07:37:30 2.52 > --- _sre.c 2001/02/28 16:44:18 2.53 > *************** > *** 2370,2377 **** > }; > > ! void > ! #if defined(WIN32) > ! __declspec(dllexport) > ! #endif > init_sre(void) > { > --- 2370,2374 ---- > }; > > ! DL_EXPORT(void) > init_sre(void) > { after this change, the separate makefile I use to build _sre on Windows no longer works (init_sre isn't exported). I don't really understand the code in config.h, but I've tried defining USE_DL_EXPORT (gives linking problems) and USE_DL_IMPORT (macro redefinition). any ideas? Cheers /F From tim.one@home.com Wed Feb 28 17:36:45 2001 From: tim.one@home.com (Tim Peters) Date: Wed, 28 Feb 2001 12:36:45 -0500 Subject: [Python-Dev] Re: Case-sensitive import In-Reply-To: <20010228120229.M449@dothill.com> Message-ID: 
                              
[Jason] > I tested your changes on Cygwin and they work correctly. Thanks very much. Good! I guess that just leaves poor Steven hanging (although I've got ~200 emails I haven't gotten to yet, so maybe he's already pulled himself up). > Unfortunately, my concerns about building due to your implementation using > direct Win32 APIs were realized. This delayed my response. > > The current Python CVS still builds OOTB (with the exception of termios) > with the current Cygwin gcc (i.e., 2.95.2-6). However, using the next > Cygwin gcc (i.e., 2.95.2-8 or later) will require one to configure with: > > CC='gcc -mwin32' configure ... > > and the following minor patch be accepted: > > http://sourceforge.net/tracker/index.php?func=detail&aid=404928&gro > up_id=5470&atid=305470 I checked that patch in already, about 15 minutes after you uploaded it. Is this service, or what?! 
                              
                              [Guido] > That patch seems fine -- except that I'd like /F to have a quick look > since it changes _sre.c. Too late and no need. What Jason did to _sre.c is *undo* some Cygwin special-casing; /F will like that. It's trivial anyway. Jason, about this: However, using the next Cygwin gcc (i.e., 2.95.2-8 or later) will require one to configure with: CC='gcc -mwin32' configure ... How can we make that info *useful* to people? The target audience for the Cygwin port probably doesn't search Python-Dev or the Python patches database. So it would be good if you thought about uploading an informational patch to README and Misc/NEWS briefly telling Cygwin folks what they need to know. If you do, I'll look for it and check it in. From tim.one@home.com Wed Feb 28 17:42:12 2001 From: tim.one@home.com (Tim Peters) Date: Wed, 28 Feb 2001 12:42:12 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules _sre.c In-Reply-To: <048b01c0a1ac$f10cf920$e46940d5@hagrid> Message-ID: 
                              
                              >> *** _sre.c 2001/01/16 07:37:30 2.52 >> --- _sre.c 2001/02/28 16:44:18 2.53 >> *************** >> *** 2370,2377 **** >> }; >> >> ! void >> ! #if defined(WIN32) >> ! __declspec(dllexport) >> ! #endif >> init_sre(void) >> { >> --- 2370,2374 ---- >> }; >> >> ! DL_EXPORT(void) >> init_sre(void) >> { [/F] > after this change, the separate makefile I use to build _sre > on Windows no longer works (init_sre isn't exported). Oops! I tested it on Windows, so it works OK with the std build. > I don't really understand the code in config.h, Nobody does, alas. Mark Hammond and I have a delayed date to rework that. > but I've tried defining USE_DL_EXPORT (gives linking problems) and > USE_DL_IMPORT (macro redefinition). Sounds par for the course. > any ideas? Ya: the great thing about all these macros is that they're usually worse than useless (you try them, they break something). The _sre project has /export:init_sre buried in its link options. DL_EXPORT(void) expands to void. Not pretty, but it's the way everything else (outside the pythoncore project) works too. From jeremy@alum.mit.edu Wed Feb 28 17:58:58 2001 From: jeremy@alum.mit.edu (Jeremy Hylton) Date: Wed, 28 Feb 2001 12:58:58 -0500 (EST) Subject: [Python-Dev] PEP 227 (was Re: Nested scopes resolution -- you can breathe again!) In-Reply-To: <200102230259.VAA19238@cj20424-a.reston1.va.home.com> References: 
                              
                              <200102230259.VAA19238@cj20424-a.reston1.va.home.com> Message-ID: <15005.15458.703037.373890@w221.z064000254.bwi-md.dsl.cnc.net> Last week Guido sent a message about our decisions to make nested scopes an optional feature for 2.1 in advance of their mandatory introduction in Python 2.2. I've included an ndiff of the PEP for reference. The beta release on Friday will contain the features as described in the PEP. Jeremy -: old-pep-0227.txt +: pep-0227.txt PEP: 227 Title: Statically Nested Scopes - Version: $Revision: 1.6 $ ? ^ + Version: $Revision: 1.7 $ ? ^ Author: jeremy@digicool.com (Jeremy Hylton) Status: Draft Type: Standards Track Python-Version: 2.1 Created: 01-Nov-2000 Post-History: Abstract This PEP proposes the addition of statically nested scoping (lexical scoping) for Python 2.1. The current language definition defines exactly three namespaces that are used to resolve names -- the local, global, and built-in namespaces. The addition of nested scopes would allow resolution of unbound local names in enclosing functions' namespaces. One consequence of this change that will be most visible to Python programs is that lambda statements could reference variables in the namespaces where the lambda is defined. Currently, a lambda statement uses default arguments to explicitly creating bindings in the lambda's namespace. Introduction This proposal changes the rules for resolving free variables in - Python functions. The Python 2.0 definition specifies exactly - three namespaces to check for each name -- the local namespace, - the global namespace, and the builtin namespace. According to - this defintion, if a function A is defined within a function B, - the names bound in B are not visible in A. The proposal changes - the rules so that names bound in B are visible in A (unless A + Python functions. The new name resolution semantics will take + effect with Python 2.2. These semantics will also be available in + Python 2.1 by adding "from __future__ import nested_scopes" to the + top of a module. + + The Python 2.0 definition specifies exactly three namespaces to + check for each name -- the local namespace, the global namespace, + and the builtin namespace. According to this definition, if a + function A is defined within a function B, the names bound in B + are not visible in A. The proposal changes the rules so that + names bound in B are visible in A (unless A contains a name - contains a name binding that hides the binding in B). ? ---------------- + binding that hides the binding in B). The specification introduces rules for lexical scoping that are common in Algol-like languages. The combination of lexical scoping and existing support for first-class functions is reminiscent of Scheme. The changed scoping rules address two problems -- the limited - utility of lambda statements and the frequent confusion of new + utility of lagmbda statements and the frequent confusion of new ? + users familiar with other languages that support lexical scoping, e.g. the inability to define recursive functions except at the module level. + + XXX Konrad Hinsen suggests that this section be expanded The lambda statement introduces an unnamed function that contains a single statement. It is often used for callback functions. In the example below (written using the Python 2.0 rules), any name used in the body of the lambda must be explicitly passed as a default argument to the lambda. 
from Tkinter import * root = Tk() Button(root, text="Click here", command=lambda root=root: root.test.configure(text="...")) This approach is cumbersome, particularly when there are several names used in the body of the lambda. The long list of default arguments obscure the purpose of the code. The proposed solution, in crude terms, implements the default argument approach automatically. The "root=root" argument can be omitted. + The new name resolution semantics will cause some programs to + behave differently than they did under Python 2.0. In some cases, + programs will fail to compile. In other cases, names that were + previously resolved using the global namespace will be resolved + using the local namespace of an enclosing function. In Python + 2.1, warnings will be issued for all program statement that will + behave differently. + Specification Python is a statically scoped language with block structure, in the traditional of Algol. A code block or region, such as a - module, class defintion, or function body, is the basic unit of a + module, class definition, or function body, is the basic unit of a ? + program. Names refer to objects. Names are introduced by name binding operations. Each occurrence of a name in the program text refers to the binding of that name established in the innermost function block containing the use. The name binding operations are assignment, class and function definition, and import statements. Each assignment or import statement occurs within a block defined by a class or function definition or at the module level (the top-level code block). If a name binding operation occurs anywhere within a code block, all uses of the name within the block are treated as references to the current block. (Note: This can lead to errors when a name is used within a block before it is bound.) If the global statement occurs within a block, all uses of the name specified in the statement refer to the binding of that name in the top-level namespace. Names are resolved in the top-level namespace by searching the global namespace, the namespace of the module containing the code block, and the builtin namespace, the namespace of the module __builtin__. The global namespace is searched first. If the name is not found there, the builtin - namespace is searched. + namespace is searched. The global statement must precede all uses + of the name. If a name is used within a code block, but it is not bound there and is not declared global, the use is treated as a reference to the nearest enclosing function region. (Note: If a region is contained within a class definition, the name bindings that occur in the class block are not visible to enclosed functions.) A class definition is an executable statement that may uses and definitions of names. These references follow the normal rules for name resolution. The namespace of the class definition becomes the attribute dictionary of the class. The following operations are name binding operations. If they occur within a block, they introduce new local names in the current block unless there is also a global declaration. - Function defintion: def name ... + Function definition: def name ... ? + Class definition: class name ... Assignment statement: name = ... Import statement: import name, import module as name, from module import name Implicit assignment: names are bound by for statements and except clauses The arguments of a function are also local. 
There are several cases where Python statements are illegal when used in conjunction with nested scopes that contain free variables. If a variable is referenced in an enclosing scope, it is an error to delete the name. The compiler will raise a SyntaxError for 'del name'. - If the wildcard form of import (import *) is used in a function + If the wild card form of import (import *) is used in a function ? + and the function contains a nested block with free variables, the compiler will raise a SyntaxError. If exec is used in a function and the function contains a nested block with free variables, the compiler will raise a SyntaxError - unless the exec explicit specifies the local namespace for the + unless the exec explicitly specifies the local namespace for the ? ++ exec. (In other words, "exec obj" would be illegal, but "exec obj in ns" would be legal.) + If a name bound in a function scope is also the name of a module + global name or a standard builtin name and the function contains a + nested function scope that references the name, the compiler will + issue a warning. The name resolution rules will result in + different bindings under Python 2.0 than under Python 2.2. The + warning indicates that the program may not run correctly with all + versions of Python. + Discussion The specified rules allow names defined in a function to be referenced in any nested function defined with that function. The name resolution rules are typical for statically scoped languages, with three primary exceptions: - Names in class scope are not accessible. - The global statement short-circuits the normal rules. - Variables are not declared. Names in class scope are not accessible. Names are resolved in - the innermost enclosing function scope. If a class defintion + the innermost enclosing function scope. If a class definition ? + occurs in a chain of nested scopes, the resolution process skips class definitions. This rule prevents odd interactions between class attributes and local variable access. If a name binding - operation occurs in a class defintion, it creates an attribute on + operation occurs in a class definition, it creates an attribute on ? + the resulting class object. To access this variable in a method, or in a function nested within a method, an attribute reference must be used, either via self or via the class name. An alternative would have been to allow name binding in class scope to behave exactly like name binding in function scope. This rule would allow class attributes to be referenced either via attribute reference or simple name. This option was ruled out because it would have been inconsistent with all other forms of class and instance attribute access, which always use attribute references. Code that used simple names would have been obscure. The global statement short-circuits the normal rules. Under the proposal, the global statement has exactly the same effect that it - does for Python 2.0. It's behavior is preserved for backwards ? - + does for Python 2.0. Its behavior is preserved for backwards compatibility. It is also noteworthy because it allows name binding operations performed in one block to change bindings in another block (the module). Variables are not declared. If a name binding operation occurs anywhere in a function, then that name is treated as local to the function and all references refer to the local binding. If a reference occurs before the name is bound, a NameError is raised. 
The only kind of declaration is the global statement, which allows programs to be written using mutable global variables. As a consequence, it is not possible to rebind a name defined in an enclosing scope. An assignment operation can only bind a name in the current scope or in the global scope. The lack of declarations and the inability to rebind names in enclosing scopes are unusual for lexically scoped languages; there is typically a mechanism to create name bindings (e.g. lambda and let in Scheme) and a mechanism to change the bindings (set! in Scheme). XXX Alex Martelli suggests comparison with Java, which does not allow name bindings to hide earlier bindings. Examples A few examples are included to illustrate the way the rules work. XXX Explain the examples >>> def make_adder(base): ... def adder(x): ... return base + x ... return adder >>> add5 = make_adder(5) >>> add5(6) 11 >>> def make_fact(): ... def fact(n): ... if n == 1: ... return 1L ... else: ... return n * fact(n - 1) ... return fact >>> fact = make_fact() >>> fact(7) 5040L >>> def make_wrapper(obj): ... class Wrapper: ... def __getattr__(self, attr): ... if attr[0] != '_': ... return getattr(obj, attr) ... else: ... raise AttributeError, attr ... return Wrapper() >>> class Test: ... public = 2 ... _private = 3 >>> w = make_wrapper(Test()) >>> w.public 2 >>> w._private Traceback (most recent call last): File "
                              
                              ", line 1, in ? AttributeError: _private - An example from Tim Peters of the potential pitfalls of nested scopes ? ^ -------------- + An example from Tim Peters demonstrates the potential pitfalls of ? +++ ^^^^^^^^ - in the absence of declarations: + nested scopes in the absence of declarations: ? ++++++++++++++ i = 6 def f(x): def g(): print i # ... # skip to the next page # ... for i in x: # ah, i *is* local to f, so this is what g sees pass g() The call to g() will refer to the variable i bound in f() by the for loop. If g() is called before the loop is executed, a NameError will be raised. XXX need some counterexamples Backwards compatibility There are two kinds of compatibility problems caused by nested scopes. In one case, code that behaved one way in earlier - versions, behaves differently because of nested scopes. In the ? - + versions behaves differently because of nested scopes. In the other cases, certain constructs interact badly with nested scopes and will trigger SyntaxErrors at compile time. The following example from Skip Montanaro illustrates the first kind of problem: x = 1 def f1(): x = 2 def inner(): print x inner() Under the Python 2.0 rules, the print statement inside inner() refers to the global variable x and will print 1 if f1() is called. Under the new rules, it refers to the f1()'s namespace, the nearest enclosing scope with a binding. The problem occurs only when a global variable and a local variable share the same name and a nested function uses that name to refer to the global variable. This is poor programming practice, because readers will easily confuse the two different variables. One example of this problem was found in the Python standard library during the implementation of nested scopes. To address this problem, which is unlikely to occur often, a static analysis tool that detects affected code will be written. - The detection problem is straightfoward. + The detection problem is straightforward. ? + - The other compatibility problem is casued by the use of 'import *' ? - + The other compatibility problem is caused by the use of 'import *' ? + and 'exec' in a function body, when that function contains a nested scope and the contained scope has free variables. For example: y = 1 def f(): exec "y = 'gotcha'" # or from module import * def g(): return y ... At compile-time, the compiler cannot tell whether an exec that - operators on the local namespace or an import * will introduce ? ^^ + operates on the local namespace or an import * will introduce ? ^ name bindings that shadow the global y. Thus, it is not possible to tell whether the reference to y in g() should refer to the global or to a local name in f(). In discussion of the python-list, people argued for both possible interpretations. On the one hand, some thought that the reference in g() should be bound to a local y if one exists. One problem with this interpretation is that it is impossible for a human reader of the code to determine the binding of y by local inspection. It seems likely to introduce subtle bugs. The other interpretation is to treat exec and import * as dynamic features that do not effect static scoping. Under this interpretation, the exec and import * would introduce local names, but those names would never be visible to nested scopes. In the specific example above, the code would behave exactly as it did in earlier versions of Python. - Since each interpretation is problemtatic and the exact meaning ? 
- + Since each interpretation is problematic and the exact meaning ambiguous, the compiler raises an exception. A brief review of three Python projects (the standard library, Zope, and a beta version of PyXPCOM) found four backwards compatibility issues in approximately 200,000 lines of code. There was one example of case #1 (subtle behavior change) and two examples of import * problems in the standard library. (The interpretation of the import * and exec restriction that was implemented in Python 2.1a2 was much more restrictive, based on language that in the reference manual that had never been enforced. These restrictions were relaxed following the release.) + Compatibility of C API + + The implementation causes several Python C API functions to + change, including PyCode_New(). As a result, C extensions may + need to be updated to work correctly with Python 2.1. + locals() / vars() These functions return a dictionary containing the current scope's local variables. Modifications to the dictionary do not affect the values of variables. Under the current rules, the use of locals() and globals() allows the program to gain access to all the namespaces in which names are resolved. An analogous function will not be provided for nested scopes. Under this proposal, it will not be possible to gain dictionary-style access to all visible scopes. + Warnings and Errors + + The compiler will issue warnings in Python 2.1 to help identify + programs that may not compile or run correctly under future + versions of Python. Under Python 2.2 or Python 2.1 if the + nested_scopes future statement is used, which are collectively + referred to as "future semantics" in this section, the compiler + will issue SyntaxErrors in some cases. + + The warnings typically apply when a function that contains a + nested function that has free variables. For example, if function + F contains a function G and G uses the builtin len(), then F is a + function that contains a nested function (G) with a free variable + (len). The label "free-in-nested" will be used to describe these + functions. + + import * used in function scope + + The language reference specifies that import * may only occur + in a module scope. (Sec. 6.11) The implementation of C + Python has supported import * at the function scope. + + If import * is used in the body of a free-in-nested function, + the compiler will issue a warning. Under future semantics, + the compiler will raise a SyntaxError. + + bare exec in function scope + + The exec statement allows two optional expressions following + the keyword "in" that specify the namespaces used for locals + and globals. An exec statement that omits both of these + namespaces is a bare exec. + + If a bare exec is used in the body of a free-in-nested + function, the compiler will issue a warning. Under future + semantics, the compiler will raise a SyntaxError. + + local shadows global + + If a free-in-nested function has a binding for a local + variable that (1) is used in a nested function and (2) is the + same as a global variable, the compiler will issue a warning. + Rebinding names in enclosing scopes There are technical issues that make it difficult to support rebinding of names in enclosing scopes, but the primary reason that it is not allowed in the current proposal is that Guido is opposed to it. 
It is difficult to support, because it would require a new mechanism that would allow the programmer to specify that an assignment in a block is supposed to rebind the name in an enclosing block; presumably a keyword or special syntax (x := 3) would make this possible. The proposed rules allow programmers to achieve the effect of rebinding, albeit awkwardly. The name that will be effectively rebound by enclosed functions is bound to a container object. In place of assignment, the program uses modification of the container to achieve the desired effect: def bank_account(initial_balance): balance = [initial_balance] def deposit(amount): balance[0] = balance[0] + amount return balance def withdraw(amount): balance[0] = balance[0] - amount return balance return deposit, withdraw Support for rebinding in nested scopes would make this code clearer. A class that defines deposit() and withdraw() methods and the balance as an instance variable would be clearer still. Since classes seem to achieve the same effect in a more straightforward manner, they are preferred. Implementation The implementation for C Python uses flat closures [1]. Each def or lambda statement that is executed will create a closure if the body of the function or any contained function has free variables. Using flat closures, the creation of closures is somewhat expensive but lookup is cheap. The implementation adds several new opcodes and two new kinds of names in code objects. A variable can be either a cell variable or a free variable for a particular code object. A cell variable is referenced by containing scopes; as a result, the function where it is defined must allocate separate storage for it on each - invocation. A free variable is reference via a function's closure. ? --------- + invocation. A free variable is referenced via a function's ? + + closure. + + The choice of free closures was made based on three factors. + First, nested functions are presumed to be used infrequently, + deeply nested (several levels of nesting) still less frequently. + Second, lookup of names in a nested scope should be fast. + Third, the use of nested scopes, particularly where a function + that access an enclosing scope is returned, should not prevent + unreferenced objects from being reclaimed by the garbage + collector. XXX Much more to say here References [1] Luca Cardelli. Compiling a functional language. In Proc. of the 1984 ACM Conference on Lisp and Functional Programming, pp. 208-217, Aug. 1984 http://citeseer.nj.nec.com/cardelli84compiling.html From tim.one@home.com Wed Feb 28 18:48:39 2001 From: tim.one@home.com (Tim Peters) Date: Wed, 28 Feb 2001 13:48:39 -0500 Subject: [Python-Dev] Case-sensitive import In-Reply-To: <20010228143037.8F32D371690@snelboot.oratrix.nl> Message-ID: 
                              
[Jack Jansen] > Why don't we handle this the same way as, say, PyOS_CheckStack()? > > I.e. if USE_CHECK_IMPORT_CASE is defined it is necessary to check > the case of the imported file (i.e. it's not defined on vanilla > unix, defined on most other platforms) and if it is defined we call > PyOS_CheckCase(filename, modulename). > All these routines can be in different files, for all I care, > similar to the dynload_*.c files. A. I want the code in the CVS tree. That some of your Mac code is not in the CVS tree creates problems for everyone (we can never guess whether we're breaking your code because we have no idea what your code is). B. PyOS_CheckCase() is not of general use. It's only of interest inside import.c, so it's better off living there as a static function. C. I very much enjoyed getting rid of the obfuscating #ifdef CHECK_IMPORT_CASE blocks in import.c! This code is hard enough to follow without distributing preprocessor tricks all over the place. Now they live only inside the body of case_ok(), where they're truly needed. That is, case_ok() is a perfectly sensible cross-platform abstraction, and *calling* code doesn't need to be bothered with how it's implemented -- or even whether it's needed -- on various platforms. On Linux, case_ok() reduces to the one-liner "return 1;", and I don't mind paying a function call in return for the increase in clarity inside find_module(). D. The schedule says we release the beta tomorrow <0.6 wink>. From Jason.Tishler@dothill.com Wed Feb 28 19:41:37 2001 From: Jason.Tishler@dothill.com (Jason Tishler) Date: Wed, 28 Feb 2001 14:41:37 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules _sre.c In-Reply-To: <048b01c0a1ac$f10cf920$e46940d5@hagrid>; from fredrik@pythonware.com on Wed, Feb 28, 2001 at 06:36:09PM +0100 References: 
                              
<048b01c0a1ac$f10cf920$e46940d5@hagrid>
Message-ID: <20010228144137.P449@dothill.com>

Fredrik,

On Wed, Feb 28, 2001 at 06:36:09PM +0100, Fredrik Lundh wrote:
> tim indirectly wrote:
>
> > *** _sre.c 2001/01/16 07:37:30 2.52
> > --- _sre.c 2001/02/28 16:44:18 2.53
> [snip]
>
> after this change, the separate makefile I use to build _sre
> on Windows no longer works (init_sre isn't exported).
>
> I don't really understand the code in config.h, but I've tried
> defining USE_DL_EXPORT (gives linking problems) and
> USE_DL_IMPORT (macro redefinition).

USE_DL_EXPORT is to be defined only when building the Win32 (and
Cygwin) DLL core not when building extensions.  When building Win32
Python, USE_DL_IMPORT is implicitly defined in PC/config.h when
USE_DL_EXPORT is not.  Explicitly defining USE_DL_IMPORT will cause
the macro redefinition warning indicated above -- but no other ill or
good effect.

Another way to solve your problem without using the "/export:init_sre"
link option is by patching PC/config.h with the attached.  When I was
converting Cygwin Python to use a DLL core instead of a static library
one, I wondered why the USE_DL_IMPORT case was missing the following:

    #define DL_EXPORT(RTYPE) __declspec(dllexport) RTYPE

Anyway, sorry that I caused you some heartache.

Jason

P.S. If this patch is to be seriously considered, then the analogous
change should be done for the other Win32 compilers (e.g. Borland).

--
Jason Tishler
Director, Software Engineering       Phone: +1 (732) 264-8770 x235
Dot Hill Systems Corp.               Fax:   +1 (732) 264-8798
82 Bethany Road, Suite 7             Email: Jason.Tishler@dothill.com
Hazlet, NJ 07730 USA                 WWW:   http://www.dothill.com

[Attachment: config.h.patch]

Index: config.h
===================================================================
RCS file: /cvsroot/python/python/dist/src/PC/config.h,v
retrieving revision 1.49
diff -u -r1.49 config.h
--- config.h	2001/02/28 08:15:16	1.49
+++ config.h	2001/02/28 19:16:52
@@ -118,6 +118,7 @@
 #endif
 #ifdef USE_DL_IMPORT
 #define DL_IMPORT(RTYPE) __declspec(dllimport) RTYPE
+#define DL_EXPORT(RTYPE) __declspec(dllexport) RTYPE
 #endif
 #ifdef USE_DL_EXPORT
 #define DL_IMPORT(RTYPE) __declspec(dllexport) RTYPE

From Jason.Tishler@dothill.com  Wed Feb 28 20:17:28 2001
From: Jason.Tishler@dothill.com (Jason Tishler)
Date: Wed, 28 Feb 2001 15:17:28 -0500
Subject: [Python-Dev] Re: Case-sensitive import
In-Reply-To:
                              
                              ; from tim.one@home.com on Wed, Feb 28, 2001 at 12:36:45PM -0500 References: <20010228120229.M449@dothill.com> 
                              
                              Message-ID: <20010228151728.Q449@dothill.com> Tim, On Wed, Feb 28, 2001 at 12:36:45PM -0500, Tim Peters wrote: > I checked that patch in already, about 15 minutes after you uploaded it. Is > this service, or what?! 
                              
                              Yes! Thanks again. > [Guido] > > That patch seems fine -- except that I'd like /F to have a quick look > > since it changes _sre.c. > > Too late and no need. What Jason did to _sre.c is *undo* some Cygwin > special-casing; Not really -- I was trying to get rid of WIN32 #ifdefs. My solution was to attempt to reuse the DL_EXPORT macro. Now I realize that I should have done the following instead: #if defined(WIN32) || defined(__CYGWIN__) __declspec(dllexport) #endif > /F will like that. Apparently not. > It's trivial anyway. I thought so too. > Jason, about this: > > However, using the next Cygwin gcc (i.e., 2.95.2-8 or later) will > require one to configure with: > > CC='gcc -mwin32' configure ... > > How can we make that info *useful* to people? I have posted to the Cygwin mailing list and C.L.P regarding my original 2.0 patches. I have also continue to post to Cygwin regarding 2.1a1 and 2.1a2. I intended to do likewise for 2.1b1, etc. > The target audience for the > Cygwin port probably doesn't search Python-Dev or the Python patches > database. Agreed -- the above was only offered to the curious Python-Dev person and not for archival purposes. > So it would be good if you thought about uploading an > informational patch to README and Misc/NEWS briefly telling Cygwin folks what > they need to know. If you do, I'll look for it and check it in. I will submit a patch to README to add a Cygwin section to "Platform specific notes". Unfortunately, I don't think that I can squeeze it in by 2.1b1. If not, then I will submit it for the next release (2.1b2 or 2.1 final). I also don't mind waiting for the Cygwin gcc stuff to settle down too. I know...excuses, excuses... Thanks, Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler@dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From tim.one@home.com Wed Feb 28 22:12:47 2001 From: tim.one@home.com (Tim Peters) Date: Wed, 28 Feb 2001 17:12:47 -0500 Subject: [Python-Dev] test_inspect.py still fails under -O In-Reply-To: 
                              
                              Message-ID: 
                              
> python -O ../lib/test/test_inspect.py
Traceback (most recent call last):
  File "../lib/test/test_inspect.py", line 172, in ?
    'trace() row 1')
  File "../lib/test/test_inspect.py", line 70, in test
    raise TestFailed, message % args
test_support.TestFailed: trace() row 1
>

git.tr[0][1:] is

    ('@test', 8, 'spam',
     ['def spam(a, b, c, d=3, (e, (f,))=(4, (5,)), *g, **h):\n'],
     0)

at this point.  The test expects it to be

    ('@test', 9, 'spam',
     [' eggs(b + d, c + f)\n'],
     0)

Test passes without -O.  This was on Windows.  Other platforms?

From tim.one@home.com  Wed Feb 28 22:21:02 2001
From: tim.one@home.com (Tim Peters)
Date: Wed, 28 Feb 2001 17:21:02 -0500
Subject: [Python-Dev] Re: Case-sensitive import
In-Reply-To: <20010228151728.Q449@dothill.com>
Message-ID:
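[The failure above is presumably the SET_LINENO story: under -O those
opcodes are omitted, so frame.f_lineno stays at the frame's starting
line (the 'def' line, 8) instead of tracking the current statement
(the call line, 9).  A quick way to look at the difference on a
2.1-era build; output naturally varies by version and options:

    import dis

    def spam():
        eggs = 1
        return eggs

    # Without -O the disassembly interleaves SET_LINENO opcodes that
    # keep frame.f_lineno current; under -O they disappear.
    dis.dis(spam)
]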
                              
                              [Jason Tishler] > ... > Not really -- I was trying to get rid of WIN32 #ifdefs. My solution was > to attempt to reuse the DL_EXPORT macro. Now I realize that I should > have done the following instead: > > #if defined(WIN32) || defined(__CYGWIN__) > __declspec(dllexport) > #endif Na, you did good! If /F wants to bark at someone, he should bark at me, because I reviewed the patch carefully before checking it in and it's the same thing I would have done. MarkH and I have long-delayed plans to change these macro schemes to make some sense, and the existing DL_EXPORT uses-- no matter how useless now --will be handy to look for when we change the appropriate ones to, e.g., DL_MODULE_ENTRY_POINT macros (that always expand to the correct platform-specific decl gimmicks). _sre.c was the oddball here. > ... > I will submit a patch to README to add a Cygwin section to "Platform > specific notes". Unfortunately, I don't think that I can squeeze it in > by 2.1b1. If not, then I will submit it for the next release (2.1b2 or 2.1 > final). I also don't mind waiting for the Cygwin gcc stuff to settle > down too. I know...excuses, excuses... That's fine. You know the Cygwin audience better than I do -- as I've proved beyond reasonable doubt several times 
                              
                              . And thank you for your Cygwin work -- someday I hope to use Cygwin for more than just running "patch" on this box 
                              
                              ... From martin@loewis.home.cs.tu-berlin.de Wed Feb 28 22:19:13 2001 From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis) Date: Wed, 28 Feb 2001 23:19:13 +0100 Subject: [Python-Dev] PEP 236: an alternative to the __future__ syntax Message-ID: <200102282219.f1SMJDg04557@mira.informatik.hu-berlin.de> PEP 236 states that the intention of the proposed feature is to allow modules "to request that the code in module M use the new syntax or semantics in the current release C". It achieves this by introducing a new statement, the future_statement. This looks like an import statement, but isn't. The PEP author admits that 'overloading "import" does suck'. I agree (not surprisingly, since Tim added this QA item after we discussed it in email). It also says "But if we introduce a new keyword, that in itself would break old code". Here I disagree, and I propose patch 404997 as an alternative (https://sourceforge.net/tracker/index.php?func=detail&aid=404997&group_id=5470&atid=305470) 
                              
In essence, with that patch, you would write

    directive nested_scopes

instead of

    from __future__ import nested_scopes

This looks as if it would add a new keyword directive, and thus break
code that uses "directive" as an identifier, but it doesn't.  In this
release, "directive" is only a keyword if it is the first keyword in a
file (i.e. potentially after a doc string, but not after any other
keyword).  So

    class directive:
        def __init__(self, directive):
            self.directive = directive

continues to work as it did in previous releases (it does not even
produce a warning, but could if desired).  Only when you do

    directive nested_scopes
    directive braces

    class directive:
        def __init__(self, directive):
            self.directive = directive

do you get a syntax error, since "directive" is then a keyword in that
module.

The directive statement has a similar syntax to the C #pragma
"statement", in that each directive has a name and an optional
argument.  The choice of the keyword "directive" is somewhat
arbitrary; it was deliberately not "pragma", since that implies
implementation-defined semantics (which directive does not have).

In terms of backwards compatibility, it behaves similarly to "from
__future__ import ...": older releases will give a SyntaxError for the
directive syntax (instead of the ImportError that a __future__ import
will give).  "Unknown" directives will also give a SyntaxError,
similar to the ImportError from the __future__ import.
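[For comparison, the spelling PEP 236 itself specifies is already
compilable; a minimal module opting in to nested scopes looks like
this, while the directive form in the comment is only the proposal
above, not accepted syntax:

    from __future__ import nested_scopes

    def make_adder(n):
        def add(x):
            return x + n    # 'n' is resolved in the enclosing scope
        return add

    # Under the proposal, the first line would instead read:
    #     directive nested_scopes
]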
                               Please let me know what you think. If you think this should be written down in a PEP, I'd request that the specification above is added into PEP 236. Regards, Martin From fredrik@effbot.org Wed Feb 28 22:42:56 2001 From: fredrik@effbot.org (Fredrik Lundh) Date: Wed, 28 Feb 2001 23:42:56 +0100 Subject: [Python-Dev] test_inspect.py still fails under -O References: 
                              
Message-ID: <06c501c0a1d7$cdd346f0$e46940d5@hagrid>

tim wrote:
> git.tr[0][1:] is
>
>     ('@test', 8, 'spam',
>      ['def spam(a, b, c, d=3, (e, (f,))=(4, (5,)), *g, **h):\n'],
>      0)
>
> at this point.  The test expects it to be
>
>     ('@test', 9, 'spam',
>      [' eggs(b + d, c + f)\n'],
>      0)
>
> Test passes without -O.

the code doesn't take LINENO optimization into account.  tentative
patch follows:

Index: Lib/inspect.py
===================================================================
RCS file: /cvsroot/python/python/dist/src/Lib/inspect.py,v
retrieving revision 1.2
diff -u -r1.2 inspect.py
--- Lib/inspect.py	2001/02/28 08:26:44	1.2
+++ Lib/inspect.py	2001/02/28 22:35:49
@@ -561,19 +561,19 @@
     filename = getsourcefile(frame)
     if context > 0:
-        start = frame.f_lineno - 1 - context/2
+        start = _lineno(frame) - 1 - context/2
         try:
             lines, lnum = findsource(frame)
             start = max(start, 1)
             start = min(start, len(lines) - context)
             lines = lines[start:start+context]
-            index = frame.f_lineno - 1 - start
+            index = _lineno(frame) - 1 - start
         except:
             lines = index = None
     else:
         lines = index = None
-    return (filename, frame.f_lineno, frame.f_code.co_name, lines, index)
+    return (filename, _lineno(frame), frame.f_code.co_name, lines, index)

 def getouterframes(frame, context=1):
     """Get a list of records for a frame and all higher (calling) frames.
@@ -614,3 +614,26 @@
 def trace(context=1):
     """Return a list of records for the stack below the current exception."""
     return getinnerframes(sys.exc_traceback, context)
+
+def _lineno(frame):
+    # Coded by Marc-Andre Lemburg from the example of PyCode_Addr2Line()
+    # in compile.c.
+    # Revised version by Jim Hugunin to work with JPython too.
+    # Adapted for inspect.py by Fredrik Lundh
+
+    lineno = frame.f_lineno
+
+    c = frame.f_code
+    if not hasattr(c, 'co_lnotab'):
+        return tb.tb_lineno
+
+    tab = c.co_lnotab
+    line = c.co_firstlineno
+    stopat = frame.f_lasti
+    addr = 0
+    for i in range(0, len(tab), 2):
+        addr = addr + ord(tab[i])
+        if addr > stopat:
+            break
+        line = line + ord(tab[i+1])
+    return line

Cheers /F

From tim.one@home.com  Wed Feb 28 22:42:16 2001
From: tim.one@home.com (Tim Peters)
Date: Wed, 28 Feb 2001 17:42:16 -0500
Subject: [Python-Dev] PEP 236: an alternative to the __future__ syntax
In-Reply-To: <200102282219.f1SMJDg04557@mira.informatik.hu-berlin.de>
Message-ID:
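[A small usage sketch of the records under discussion, for anyone
trying to reproduce the test_inspect failure.  The record layout is
taken from the inspect code quoted above: (frame, filename, lineno,
function name, source lines, index).

    import inspect

    def spam():
        return 1 / 0    # force an exception in a nested frame

    try:
        spam()
    except ZeroDivisionError:
        for record in inspect.trace():
            # with the patch, the lineno slot should be right even
            # when the interpreter runs with -O
            print record[1:4]    # (filename, lineno, function name)
]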
                              
                              [Martin v. Loewis] > ... > If you think this should be written down in a PEP, Yes. > I'd request that the specification above is added into PEP 236. No -- PEP 236 is not a general directive PEP, no matter how much that what you *want* is a general directive PEP. I'll add a Q/A pair to 236 about why it's not a general directive PEP, but that's it. PEP 236 stands on its own for what it's designed for; your PEP should stand on its own for what *it's* designed for (which isn't nested_scopes et alia, it's character encodings). (BTW, there is no patch attached to patch 404997 -- see other recent msgs about people having problems uploading files to SF; maybe you could just put a patch URL in a comment now?] From fredrik@effbot.org Wed Feb 28 22:49:57 2001 From: fredrik@effbot.org (Fredrik Lundh) Date: Wed, 28 Feb 2001 23:49:57 +0100 Subject: [Python-Dev] test_inspect.py still fails under -O References: 
                              
                              <06c501c0a1d7$cdd346f0$e46940d5@hagrid> Message-ID: <071401c0a1d8$c830e7b0$e46940d5@hagrid> I wrote: > + lineno = frame.f_lineno > + > + c = frame.f_code > + if not hasattr(c, 'co_lnotab'): > + return tb.tb_lineno that "return" statement should be: return lineno Cheers /F From guido@digicool.com Wed Feb 28 22:48:51 2001 From: guido@digicool.com (Guido van Rossum) Date: Wed, 28 Feb 2001 17:48:51 -0500 Subject: [Python-Dev] PEP 236: an alternative to the __future__ syntax In-Reply-To: Your message of "Wed, 28 Feb 2001 23:19:13 +0100." <200102282219.f1SMJDg04557@mira.informatik.hu-berlin.de> References: <200102282219.f1SMJDg04557@mira.informatik.hu-berlin.de> Message-ID: <200102282248.RAA31007@cj20424-a.reston1.va.home.com> Martin, this looks nice, but where's the patch? (Not in the patch mgr.) We're planning the b1 release for Friday -- in two days. We need some time for our code base to stabilize. There's one downside to the "directive" syntax: other tools that parse Python will have to be adapted. The __future__ hack doesn't need that. --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one@home.com Wed Feb 28 22:52:33 2001 From: tim.one@home.com (Tim Peters) Date: Wed, 28 Feb 2001 17:52:33 -0500 Subject: [Python-Dev] Very recent test_global failure Message-ID: 
                              
                              Windows. > python ../lib/test/regrtest.py test_global test_global 
                              
                              :2: SyntaxWarning: name 'a' is assigned to before global declaration 
                              
                              :2: SyntaxWarning: name 'b' is assigned to before global declaration The actual stdout doesn't match the expected stdout. This much did match (between asterisk lines): ********************************************************************** test_global ********************************************************************** Then ... We expected (repr): 'got SyntaxWarning as e' But instead we got: 'expected SyntaxWarning' test test_global failed -- Writing: 'expected SyntaxWarning', expected: 'got SyntaxWarning as e' 1 test failed: test_global > From jeremy@alum.mit.edu Wed Feb 28 22:40:05 2001 From: jeremy@alum.mit.edu (Jeremy Hylton) Date: Wed, 28 Feb 2001 17:40:05 -0500 (EST) Subject: [Python-Dev] Very recent test_global failure In-Reply-To: 
                              
                              References: 
                              
                              Message-ID: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net> Just fixed. Guido's new, handy-dandy warning helper for the compiler checks for a warning that has been turned into an error. If the warning becomes an error, the SyntaxWarning is replaced with a SyntaxError. The change broke this test, but was otherwise a good thing. It allows reasonable tracebacks to be produced. Jeremy From tim.one@home.com Wed Feb 28 23:01:34 2001 From: tim.one@home.com (Tim Peters) Date: Wed, 28 Feb 2001 18:01:34 -0500 Subject: [Python-Dev] Very recent test_global failure In-Reply-To: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: 
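[A rough sketch of the mechanism Jeremy describes, assuming the
2.1-era compiler behaviour; the warning text and category are as shown
in the test output above, and details may differ:

    import warnings
    warnings.filterwarnings("error", category=SyntaxWarning)

    try:
        compile("def f():\n    a = 1\n    global a\n", "<test>", "exec")
    except SyntaxError, err:
        # with the "error" filter in place, the compiler-issued
        # SyntaxWarning comes back as a real SyntaxError
        print "promoted to error:", err
]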
                              
                              > Just fixed. Not fixed; can no longer compile Python: compile.c C:\Code\python\dist\src\Python\compile.c(4184) : error C2065: 'DEF_BOUND' : undeclared identifier From jeremy@alum.mit.edu Wed Feb 28 22:48:15 2001 From: jeremy@alum.mit.edu (Jeremy Hylton) Date: Wed, 28 Feb 2001 17:48:15 -0500 (EST) Subject: [Python-Dev] Very recent test_global failure In-Reply-To: 
                              
                              References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net> 
                              
                              Message-ID: <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net> Oops. Missed a checkin to symtable.h. unix-users-prepare-to-recompile-everything-ly y'rs, Jeremy From ping@lfw.org Wed Feb 28 23:11:59 2001 From: ping@lfw.org (Ka-Ping Yee) Date: Wed, 28 Feb 2001 15:11:59 -0800 (PST) Subject: [Python-Dev] Re: A few small issues In-Reply-To: 
                              
                              Message-ID: 
                              
                              Hi again. On Tue, 27 Feb 2001, Ka-Ping Yee wrote: > > 1. The error message for UnboundLocalError isn't really accurate. [...] > UnboundLocalError: local name 'x' is not defined I'd like to check in this change today to make it into the beta. It's a tiny change, shouldn't break anything as i don't see how code would rely on the wording of the message, and makes the message more accurate. Lib/test/test_scope.py checks for the error but does not rely on its wording. If i don't see objections i'll do this tonight. I hope this is minor enough not to be a violation of etiquette. -- ?!ng From tim.one@home.com Wed Feb 28 23:13:04 2001 From: tim.one@home.com (Tim Peters) Date: Wed, 28 Feb 2001 18:13:04 -0500 Subject: [Python-Dev] Very recent test_global failure In-Reply-To: <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: 
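[For reference, the two situations the message wording has to cover;
the exact text differs between the current wording, Ping's proposal,
and later releases:

    def f():
        print x     # the common case: x is local (assigned below) but
        x = 1       # referenced first -> UnboundLocalError

    def g():
        x = 1
        del x
        return x    # the contrived case: x was assigned, then deleted,
                    # so "referenced before assignment" reads oddly
]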
                              
                              > Oops. Missed a checkin to symtable.h. > > unix-users-prepare-to-recompile-everything-ly y'rs, > Jeremy Got that patch, everything compiles now, but test_global still fails. Are we, perhaps, missing an update to test_global's expected-output file too? From tim.one@home.com Wed Feb 28 23:21:15 2001 From: tim.one@home.com (Tim Peters) Date: Wed, 28 Feb 2001 18:21:15 -0500 Subject: [Python-Dev] Re: A few small issues In-Reply-To: 
                              
                              Message-ID: 
                              
                              [Ka-Ping Yee] > On Tue, 27 Feb 2001, Ka-Ping Yee wrote: > > > > 1. The error message for UnboundLocalError isn't really accurate. > [...] > > UnboundLocalError: local name 'x' is not defined > > I'd like to check in this change today to make it into the beta. > It's a tiny change, shouldn't break anything as i don't see how > code would rely on the wording of the message, and makes the > message more accurate. Lib/test/test_scope.py checks for the > error but does not rely on its wording. > > If i don't see objections i'll do this tonight. I hope this is > minor enough not to be a violation of etiquette. Sorry, but I really didn't like this change. You had to contrive a test case using "del" for the old local variable 'x' referenced before assignment msg to appear inaccurate the way you read it. The old msg is much more on-target 99.999% of the time than just saying "not defined", in non-contrived test cases. Even in the "del" case, it's *still* the case that the vrbl was referenced before assignment (but after "del"). So -1, on the grounds that the new msg is worse (because less specific) almost all the time. From guido@digicool.com Wed Feb 28 23:25:30 2001 From: guido@digicool.com (Guido van Rossum) Date: Wed, 28 Feb 2001 18:25:30 -0500 Subject: [Python-Dev] Re: A few small issues In-Reply-To: Your message of "Wed, 28 Feb 2001 15:11:59 PST." 
                              
                              References: 
                              
                              Message-ID: <200102282325.SAA31347@cj20424-a.reston1.va.home.com> > On Tue, 27 Feb 2001, Ka-Ping Yee wrote: > > > > 1. The error message for UnboundLocalError isn't really accurate. > [...] > > UnboundLocalError: local name 'x' is not defined > > I'd like to check in this change today to make it into the beta. > It's a tiny change, shouldn't break anything as i don't see how > code would rely on the wording of the message, and makes the > message more accurate. Lib/test/test_scope.py checks for the > error but does not rely on its wording. > > If i don't see objections i'll do this tonight. I hope this is > minor enough not to be a violation of etiquette. +1, but first address the comments about test_inspect.py with -O. --Guido van Rossum (home page: http://www.python.org/~guido/) From nas@arctrix.com Wed Feb 28 23:30:23 2001 From: nas@arctrix.com (Neil Schemenauer) Date: Wed, 28 Feb 2001 15:30:23 -0800 Subject: [Python-Dev] Re: A few small issues In-Reply-To: 
                              
                              ; from tim.one@home.com on Wed, Feb 28, 2001 at 06:21:15PM -0500 References: 
                              
                              
                              Message-ID: <20010228153023.A5998@glacier.fnational.com> On Wed, Feb 28, 2001 at 06:21:15PM -0500, Tim Peters wrote: > So -1, on the grounds that the new msg is worse (because less specific) > almost all the time. I too vote -1 on the proposed new message (but not -1 on changing to current message). Neil From guido@digicool.com Wed Feb 28 23:37:01 2001 From: guido@digicool.com (Guido van Rossum) Date: Wed, 28 Feb 2001 18:37:01 -0500 Subject: [Python-Dev] Re: A few small issues In-Reply-To: Your message of "Wed, 28 Feb 2001 18:21:15 EST." 
                              
                              References: 
                              
                              Message-ID: <200102282337.SAA31934@cj20424-a.reston1.va.home.com> Based on Tim's comment I change my +1 into a -1. I had forgotten the context. --Guido van Rossum (home page: http://www.python.org/~guido/) From fred@digicool.com Wed Feb 28 22:35:46 2001 From: fred@digicool.com (Fred L. Drake, Jr.) Date: Wed, 28 Feb 2001 17:35:46 -0500 (EST) Subject: [Python-Dev] Re: puzzled about old checkin to pythonrun.c In-Reply-To: <15004.35944.494314.814348@w221.z064000254.bwi-md.dsl.cnc.net> References: <15004.35944.494314.814348@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <15005.32066.814181.946890@localhost.localdomain> Jeremy Hylton writes: > You made a change to the syntax error generation code last August. > I don't understand what the code is doing. It appears that the code > you added is redundant, but it's hard to tell for sure because > responsbility for generating well-formed SyntaxErrors is spread > across several files. This is probably the biggest reason it's taken so long to get things into the ballpark! > The code you added in pythonrun.c, line 1084, in err_input(), starts > with the test (v != NULL): I've ripped all that out. > Can you shed any light? Was this all the light you needed? Or was there something deeper that I'm missing? -Fred -- Fred L. Drake, Jr. 
                              
                              PythonLabs at Digital Creations From moshez at zadka.site.co.il Thu Feb 1 14:17:53 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Thu, 1 Feb 2001 15:17:53 +0200 (IST) Subject: [Python-Dev] Re: Sets: elt in dict, lst.include - really begs for a PEP In-Reply-To: 
                              
                              References: 
                              
                              Message-ID: <20010201131753.C8CB1A840@darjeeling.zadka.site.co.il> On Thu, 1 Feb 2001 03:31:33 -0800 (PST), Ka-Ping Yee 
                              
                              wrote: [about for (k, v) in dict.iteritems(): ] > I remember considering this solution when i was writing the PEP. > The problem with it is that it isn't backward-compatible. It won't > work on existing dictionary-like objects -- it just introduces > another method that we then have to go back and implement on everything, > which kind of defeats the point of the whole proposal. Well, in that case we have differing views on the point of the whole proposal. I won't argue -- I think all the opinions have been aired, and it should be pronounced upon. > The other problem with this is that it isn't feasible in practice > unless 'for' can magically detect when the thing is a sequence and > when it's an iterator. I don't see any obvious solution to this dict.iteritems() could return not an iterator, but a magical object whose iterator is the requested iterator. Ditto itervalues(), iterkeys() -- Moshe Zadka 
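[A minimal sketch of the "magical object" Moshe means, written against
the __iter__ protocol that this discussion eventually produced; the
class and method names are invented for illustration:

    class IterItems:
        # Not itself an iterator: each 'for' loop over it asks for a
        # fresh iterator via __iter__, so it behaves like a sequence.
        def __init__(self, d):
            self._d = d
        def __iter__(self):
            return iter(self._d.items())

    for k, v in IterItems({2: 3, 4: 5}):
        print k, v
]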
                              
                              This is a signature anti-virus. Please stop the spread of signature viruses! Fingerprint: 4BD1 7705 EEC0 260A 7F21 4817 C7FC A636 46D0 1BD6 From jeremy at alum.mit.edu Thu Feb 1 17:21:30 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Thu, 1 Feb 2001 11:21:30 -0500 (EST) Subject: [Python-Dev] any opinion on 'make quicktest'? Message-ID: <14969.36106.386207.593290@w221.z064000254.bwi-md.dsl.cnc.net> I run the regression test a lot. I have found that it is often useful to exclude some of the slowest tests for most of the test runs and then do a full test run before I commit changes. Would anyone be opposed to a quicktest target in the Makefile that supports this practice? There are a small number of tests that each take at least 10 seconds to complete. Jeremy Index: Makefile.pre.in =================================================================== RCS file: /cvsroot/python/python/dist/src/Makefile.pre.in,v retrieving revision 1.8 diff -c -r1.8 Makefile.pre.in *** Makefile.pre.in 2001/01/29 20:18:59 1.8 --- Makefile.pre.in 2001/02/01 16:19:37 *************** *** 472,477 **** --- 472,484 ---- -PYTHONPATH= $(TESTPYTHON) $(TESTPROG) $(TESTOPTS) PYTHONPATH= $(TESTPYTHON) $(TESTPROG) $(TESTOPTS) + QUICKTESTOPTS= $(TESTOPTS) -x test_thread test_signal test_strftime \ + test_unicodedata test_re test_sre test_select test_poll + quicktest: all platform + -rm -f $(srcdir)/Lib/test/*.py[co] + -PYTHONPATH= $(TESTPYTHON) $(TESTPROG) $(QUICKTESTOPTS) + PYTHONPATH= $(TESTPYTHON) $(TESTPROG) $(QUICKTESTOPTS) + # Install everything install: altinstall bininstall maninstall From greg at cosc.canterbury.ac.nz Thu Feb 1 00:21:04 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Thu, 01 Feb 2001 12:21:04 +1300 (NZDT) Subject: [Python-Dev] Re: Sets: elt in dict, lst.include - really begs for a PEP In-Reply-To: <14968.16962.830739.920771@anthem.wooz.org> Message-ID: <200101312321.MAA03263@s454.cosc.canterbury.ac.nz> barry at digicool.com (Barry A. Warsaw): > for key in dict.iterator(KEYS) > for value in dict.iterator(VALUES) > for key, value in dict.iterator(ITEMS) Yuck. I don't like any of this "for x in y.iterator_something()" stuff. The things you're after aren't "in" the iterator, they're "in" the dict. I don't want to know that there are iterators involved. We seem to be coming up with more and more convoluted ways to say things that should be very straightforward. Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg at cosc.canterbury.ac.nz +--------------------------------------+ From tim.one at home.com Thu Feb 1 00:25:54 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 31 Jan 2001 18:25:54 -0500 Subject: [Python-Dev] Making mutable objects readonly In-Reply-To: <200101301500.KAA25733@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              [Ping] > Is a frozen list hashable? [Guido] > Yes -- that's what started this thread (using dicts as dict keys, > actually). Except this doesn't actually work unless list.freeze() recursively ensures that all elements in the list are frozen too: >>> hash((1, 2)) 219750523 >>> hash((1, [2])) Traceback (most recent call last): File "
                              
                              ", line 1, in ? TypeError: unhashable type >>> That bothered me in Eric's original suggestion: unless x.freeze() does a traversal of all objects reachable from x, it doesn't actually make x safe against modification (except at the very topmost level). But doing such a traversal isn't what *everyone* would want either (as with "const" in C, I expect the primary benefit would be the chance to spend countless hours worming around it in both directions 
                              
                              ). [Skip] > If you want immutable dicts or lists in order to use them as > dictionary keys, just serialize them first: > > survey_says = {"spam": 14, "eggs": 42} > sl = marshal.dumps(survey_says) > dict[sl] = "spam" marshal.dumps(dict) isn't canonical, though. That is, it may well be that d1 == d2 but dumps(d1) != dumps(d2). Even materializing dict.values(), then sorting it, then marshaling *that* isn't enough; e.g., consider {1: 1} and {1: 1L}. The latter example applies to marshaling lists too. From greg at cosc.canterbury.ac.nz Thu Feb 1 00:34:50 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Thu, 01 Feb 2001 12:34:50 +1300 (NZDT) Subject: [Python-Dev] Making mutable objects readonly In-Reply-To: <14968.14631.419491.440774@beluga.mojam.com> Message-ID: <200101312334.MAA03267@s454.cosc.canterbury.ac.nz> Skip Montanaro 
                              
                              : > Can someone give me an example where this is actually useful and > can't be handled through some existing mechanism? I can envisage cases where you want to build a data structure incrementally, and then treat it as immutable so you can use it as a dict key, etc. There's currently no way to do that to a list without copying it. So, it could be handy to have a way of turning a list into a tuple in-place. It would have to be a one-way transformation, otherwise you could start using it as a dict key, make it mutable again, and cause havoc. Suggested implementation: When you allocate the space for the values of a list, leave enough room for the PyObject_HEAD of a tuple at the beginning. Then you can turn that memory block into a real tuple later, and flag the original list object as immutable so you can't change it later via that route. Hmmm, would waste a bit of space for each list object. Maybe this should be a special list-about-to-become-tuple type. (Tist? Luple?) Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg at cosc.canterbury.ac.nz +--------------------------------------+ From tim.one at home.com Thu Feb 1 00:36:48 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 31 Jan 2001 18:36:48 -0500 Subject: [Python-Dev] RE: [Patch #103203] PEP 205: weak references implementation In-Reply-To: 
                              
                              Message-ID: 
                              
                              > Patch #103203 has been updated. > > Project: python > Category: core (C code) > Status: Open > Submitted by: fdrake > Assigned to : tim_one > Summary: PEP 205: weak references implementation Fred, just noticed the new "assigned to". If you don't think it's a disaster(*), check it in! That will force more eyeballs on it quickly, and the quicker the better. I'm simply not going to do a decent review quickly on something this large starting cold. More urgently, I've been working long hours every day for several weeks, and need a break so I don't screw up last-second crises tomorrow. has-12-hours-of-taped-professional-wrestling-to-catch-up-on-ly y'rs - tim (*) otoh, if you do think it's a disaster, withdraw it for 2.1. From greg at cosc.canterbury.ac.nz Thu Feb 1 00:54:45 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Thu, 01 Feb 2001 12:54:45 +1300 (NZDT) Subject: [Python-Dev] Generator protocol? (Re: Sets: elt in dict, lst.include) In-Reply-To: <20010131063007.536ACA83E@darjeeling.zadka.site.co.il> Message-ID: <200101312354.MAA03272@s454.cosc.canterbury.ac.nz> Moshe Zadka 
                              
                              : > Tim's "try to use that to write something that > will return the nodes of a binary tree" still haunts me. Instead of an iterator protocol, how about a generator protocol? Now that we're getting nested scopes, it should be possible to arrange it so that for x in thing: ...stuff... gets compiled as something like def _body(x): ...stuff... thing.__generate__(_body) (Actually it would be more complicated than that - for backward compatibility you'd want a new bytecode that would look for a __generator__ attribute and emulate the old iteration protocol otherwise.) Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg at cosc.canterbury.ac.nz +--------------------------------------+ From greg at cosc.canterbury.ac.nz Thu Feb 1 00:57:39 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Thu, 01 Feb 2001 12:57:39 +1300 (NZDT) Subject: [Python-Dev] codecity.com In-Reply-To: <200101310521.AAA31653@cj20424-a.reston1.va.home.com> Message-ID: <200101312357.MAA03275@s454.cosc.canterbury.ac.nz> > Should I spread this word, or is this a joke? I'm not sure what answering trivia questions has to do with the stated intention of "teaching jr. programmers how to write code". Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg at cosc.canterbury.ac.nz +--------------------------------------+ From greg at cosc.canterbury.ac.nz Thu Feb 1 00:59:33 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Thu, 01 Feb 2001 12:59:33 +1300 (NZDT) Subject: [Python-Dev] Re: Sets: elt in dict, lst.include In-Reply-To: <200101310049.TAA30197@cj20424-a.reston1.va.home.com> Message-ID: <200101312359.MAA03278@s454.cosc.canterbury.ac.nz> Guido van Rossum 
                              
                              : > But it *is* true that coroutines are a very attractice piece of land > "just nextdoor". Unfortunately there's a big high fence in between topped with barbed wire and patrolled by vicious guard dogs. :-( Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg at cosc.canterbury.ac.nz +--------------------------------------+ From jeremy at alum.mit.edu Thu Feb 1 01:36:11 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 31 Jan 2001 19:36:11 -0500 (EST) Subject: [Python-Dev] rethinking import-related syntax errors In-Reply-To: <200101302042.PAA29301@cj20424-a.reston1.va.home.com> References: 
                              
                              <20010130075515.X962@xs4all.nl> <200101301506.KAA25763@cj20424-a.reston1.va.home.com> <20010130165204.I962@xs4all.nl> <200101302042.PAA29301@cj20424-a.reston1.va.home.com> Message-ID: <14968.44923.774323.757343@w221.z064000254.bwi-md.dsl.cnc.net> I'd like to summarize the thread prompted by the compiler changes that implemented long-stated restrictions in the ref manual and ask a related question about backwards compatibility. The two changes were: 1. If a name is declared global in a function scope, it is an error to import with that name as a target. Example: def foo(): global string import string # error 2. It is illegal to use 'from ... import *' in a function. Example: def foo(): from string import * I believe Guido's recommendation about these two rules are: 1. Allow it, even though it dodgy style. A two-stager would be clearer: def foo(): global string import string as string_mod string = string_mod 2. Keep the restriction, because it's really bad style. It can also cause subtle problems with nested scopes. Example: def f(): from string import * def g(): return strip .... It might be reasonable to expect that strip would refer to the binding introduced by "from string import *" but there is no reasonable way to support this. The other issue raised was the two extra arguments to new.code(). I'll move those to the end and make them optional. The related question is whether I should worry about backwards compatibility at the C level. PyFrame_New(), PyFunction_New(), and PyCode_New() all have different signatures. Should I do anything about this? Jeremy From pedroni at inf.ethz.ch Thu Feb 1 02:42:08 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Thu, 1 Feb 2001 02:42:08 +0100 Subject: [Python-Dev] weak refs and jython Message-ID: <004101c08bf0$3158f7e0$de5821c0@newmexico> [Maybe this a 2nd copy of the message, sorry] Hi. [Fred L. Drake, Jr.] > > Java weak refs cannot be resurrected. > > This is certainly annoying. > > How about this: the callback receives the weak reference object or > > proxy which it was registered on as a parameter. Since the reference > > has already been cleared, there's no way to get the object back, so we > > don't need to get it from Java either. > > Would that be workable? (I'm adjusting my patch now.) Yes, it is workable: clearly we can implement weak refs only under java2 but this is not (really) an issue. We can register the refs in a java reference queue, and poll it lazily or trough a low-priority thread in order to invoke the callbacks. -- Some remarks I have used java weak/soft refs to implement some of the internal tables of jython in order to avoid memory leaks, at least under java2. I imagine that the idea behind callbacks plus resurrection was to enable the construction of sofisticated caches. My intuition is that these features are not present under java because they will interfere too much with gc and have a performance penalty. On the other hand java offers reference queues and soft references, the latter cover the common case of caches that should be cleared when there is few memory left. (Never tried them seriously, so I don't know if the actual impl is fair, or will just wait too much starting to discard things => behavior like primitives gc). The main difference I see between callbacks and queues approach is that with queues is this left to the user when to do the actual cleanup of his tables/caches, and handling queues internally has a "low" overhead. 
With callbacks what happens depends really on the collection times/patterns and the overhead is related to call overhead and how much is non trivial, what the user put in the callbacks. Clearly general performance will not be easily predictable. (From a theoretical viewpoint one can simulate more or less queues with callbacks and the other way around). Resurrection makes few sense with queues, but I can easely see that lacking of both resurrection and soft refs limits what can be done with weak-like refs. Last thing: one of the things that is really missing in java refs features is that one cannot put conditions of the form as long A is not collected B should not be collected either. Clearly I'm referring to situation when one cannot modify the class of A in order to add a field, which is quite typical in java. This should not be a problem with python and its open/dynamic way-of-life. regards, Samuele Pedroni. > From ping at lfw.org Thu Feb 1 12:31:33 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Thu, 1 Feb 2001 03:31:33 -0800 (PST) Subject: [Python-Dev] Re: Sets: elt in dict, lst.include - really begs for a PEP In-Reply-To: <14968.16962.830739.920771@anthem.wooz.org> Message-ID: 
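[Since the Java comparison above is about the callback API being added
for 2.1, here is a minimal Python-side sketch of that API; names are
illustrative and CPython's immediate collection of unreferenced
objects is assumed:

    import weakref

    class Node:
        pass

    def gone(ref):
        # called after the referent has been cleared; ref() is already
        # None here, which is the "no resurrection" point made above
        print "referent collected"

    n = Node()
    r = weakref.ref(n, gone)
    print r() is n    # 1 while the object is alive
    del n             # last strong reference goes away; callback fires
    print r()         # None afterwards
]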
                              
                              Moshe Zadka wrote: > Basic response: I *love* the iter(), sq_iter and __iter__ > parts. I tremble at seeing the rest. Why not add a method to > dictionaries .iteritems() and do > > for (k, v) in dict.iteritems(): > pass > > (dict.iteritems() would return an an iterator to the items) Barry Warsaw wrote: > Moshe, I had exactly the same reaction and exactly the same idea. I'm > a strong -1 on introducing new syntax for this when new methods can > handle it in a much more readable way (IMO). I remember considering this solution when i was writing the PEP. The problem with it is that it isn't backward-compatible. It won't work on existing dictionary-like objects -- it just introduces another method that we then have to go back and implement on everything, which kind of defeats the point of the whole proposal. (One of the Big Ideas is to let the 'for' syntax mean "just do whatever you have to do to iterate" and we let it worry about the details.) The other problem with this is that it isn't feasible in practice unless 'for' can magically detect when the thing is a sequence and when it's an iterator. I don't see any obvious solution to this (aside from "instead of an iterator, implement a whole sequence-like object using the __getitem__ protocol" -- and then we'd be back to square one). I personally find this: for key:value in dict: much clearer than either of these: for (k, v) in dict.iteritems(): for key, value in dict.iterator(ITEMS): There's less to read and less punctuation in the first, and there's a natural parallel: seq = [1, 4, 7] for item in seq: ... dict = {2:3, 4:5} for key:value in dict: ... -- ?!ng Two links diverged in a Web, and i -- i took the one less travelled by. -- with apologies to Robert Frost From thomas at xs4all.net Thu Feb 1 08:55:01 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Thu, 1 Feb 2001 08:55:01 +0100 Subject: [Python-Dev] Re: rethinking import-related syntax errors In-Reply-To: <14968.44923.774323.757343@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Wed, Jan 31, 2001 at 07:36:11PM -0500 References: 
                              
                              <20010130075515.X962@xs4all.nl> <200101301506.KAA25763@cj20424-a.reston1.va.home.com> <20010130165204.I962@xs4all.nl> <200101302042.PAA29301@cj20424-a.reston1.va.home.com> <14968.44923.774323.757343@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010201085501.K922@xs4all.nl> On Wed, Jan 31, 2001 at 07:36:11PM -0500, Jeremy Hylton wrote: > I believe Guido's recommendation about these two rules are: > 1. Allow it, even though it dodgy style. A two-stager would be > clearer: > def foo(): > global string > import string as string_mod > string = string_mod I don't think it's dodgy style, and I don't think a two-stager would be clearer, since the docs always claim 'importing is just another assignment statement'. The whole 'import-as' was added to *avoid* these two-stagers! Furthermore, since 'global string;import string' worked correctly at least since Python 1.5 and probably much longer, I suspect it'll break some code and confuse some more programmers out there. To handle this 'portably' (between Python versions, because lets be honest: Python 2.0 is far from common right now, and I can't blame people for not upgrading with the licence issues and all), the programmer would have to do def assign_global_string(name): global string string = name def foo(): import string assign_global_string(name) or even def foo(): def assign_global_string(name): global string string = name import string assign_global_string(name) (Keeping in mind nested scopes, what would *you* expect the last one to do ?) I honestly think def foo(): global string import string is infinitely clearer. > 2. Keep the restriction, because it's really bad style. It can > also cause subtle problems with nested scopes. Example: > def f(): > from string import * > def g(): > return strip > .... > It might be reasonable to expect that strip would refer to the > binding introduced by "from string import *" but there is no > reasonable way to support this. I'm still not entirely comfortable with disallowing this (rewriting code that uses it would be a pain, especially large functions) but I have good hopes that this won't be necessary because nothing large uses this :) Still, it would be nice if the compiler would only barf if someone uses 'from ... import *' in a local scope *and* references unbound names in a nested scope. I can see how that would be a lot of trouble for a little bit of gain, though. > The related question is whether I should worry about backwards > compatibility at the C level. PyFrame_New(), PyFunction_New(), and > PyCode_New() all have different signatures. Should I do anything > about this? Well, it could be done, maybe renaming the functions and doing something like #ifdef OLD_CODE_CREATION #define PyFrame_New PyFrame_OldNew ... etc, to allow quick porting to Python 2.1. I have never seen C code create code/function/frame objects by itself, though, so I'm not sure if it's worth it. The Python bit is, since it's a lot less trouble to fix it and a lot more common to use the 'new' object. -- Thomas Wouters 
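[For what it's worth, a sketch of the spelling that stays legal under
the restriction being discussed: a plain module import instead of
'from ... import *' inside the function, so the nested function sees a
single, well-defined name.  Written against 2.1 with the future
statement enabled:

    from __future__ import nested_scopes

    def f():
        import string            # fine: binds exactly one local name
        def g():
            return string.strip  # 'string' is an ordinary free variable
        return g()

    print f()('  spam  ')
]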
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From fdrake at acm.org Thu Feb 1 18:08:49 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Thu, 1 Feb 2001 12:08:49 -0500 (EST) Subject: [Python-Dev] Re: 2.1a2 release issues; mail.python.org still down In-Reply-To: <3A798F14.D389A4A9@lemburg.com> References: <14969.31398.706875.540775@w221.z064000254.bwi-md.dsl.cnc.net> <3A798F14.D389A4A9@lemburg.com> Message-ID: <14969.38945.771075.55993@cj42289-a.reston1.va.home.com> [Pushing this to python-dev w/out M-A's permission, now that mail is starting to flow again.] M.-A. Lemburg writes: > Another issue: importing old extensions now causes a core dump > due to the new slots for weak refs beind written to. I think(!) this should only affect really modules from 1.5.? and earlier; type objects compiled after tp_xxx7/tp_xxx8 were added *should not* have a problem with this. You don't give enough information for me to be sure. Please let me know more if I'm wrong (possible!). The only way I can see that there would be a problem like this is if the type object contains a positive value for the tp_weaklistoffset field (formerly tp_xxx8). > Solution: in addition to printing a warning, the _PyModule_Init() > APIs should ignore all modules having an API level < 1010. For the specific problem you mention, we could add a type flag (Py_TPFLAGS_HAVE_WEAKREFS) that could be tested; it would be set in Py_TPFLAGS_DEFAULT. On the other hand, I'd be perfectly happy to "ignore" modules with the older C API version (especially if "ignore" lets me call Py_FatalError()!). The API version changed because of the changes to the function signatures of PyCode_New() and PyFrame_New(); these both require additional parameters in API version 1010. -Fred -- Fred L. Drake, Jr. 
                              
                              PythonLabs at Digital Creations From skip at mojam.com Thu Feb 1 18:33:32 2001 From: skip at mojam.com (Skip Montanaro) Date: Thu, 1 Feb 2001 11:33:32 -0600 (CST) Subject: [Python-Dev] Re: Sets: elt in dict, lst.include In-Reply-To: 
                              
                              References: <14968.37210.886842.820413@beluga.mojam.com> 
                              
                              Message-ID: <14969.40428.977831.274322@beluga.mojam.com> >> What would break if we decided to simply add __getitem__ (and other >> sequence methods) to list object's method table? Ping> That would work for lists, but not for any extension types that Ping> use the sq_* protocol to behave like sequences. Could extension writers add those methods to their modules? I know I'm really getting off-topic here, but the whole visible interface idea crops up from time-to-time. I guess I'm just nibbling around the edges a bit to try and understand the problem better. Skip From jeremy at alum.mit.edu Thu Feb 1 20:04:10 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Thu, 1 Feb 2001 14:04:10 -0500 (EST) Subject: [Python-Dev] insertdict slower? Message-ID: <14969.45866.143924.870843@w221.z064000254.bwi-md.dsl.cnc.net> I was curious about what the DictCreation microbenchmark in pybench was slower (about 15%) with 2.1 than with 2.0. I ran both with profiling enabled (-pg, no -O) and see that insertdict is a fair bit slower in 2.1. Anyone with dict implementation expertise want to hazard a guess about this? The profiler indicates the insertdict() is about 30% slower in 2.1, when the keys are all ints. int_hash() isn't any slower, but dict_ass_sub() is about 50% slower. Of course, this is a microbenchmark that focuses on one tiny corner of dictionary usage: creating dictionaries with integer keys. This may not be a very useful measure of dictionary performance. Jeremy Results for Python 2.0 Flat profile: Each sample counts as 0.01 seconds. % cumulative self self total time seconds seconds calls ms/call ms/call name 54.55 3.90 3.90 285 13.68 19.25 eval_code2 6.71 4.38 0.48 4500875 0.00 0.00 lookdict 5.17 4.75 0.37 3000299 0.00 0.00 dict_dealloc 5.03 5.11 0.36 4506429 0.00 0.00 PyDict_SetItem 3.78 5.38 0.27 4500170 0.00 0.00 PyObject_SetItem 2.94 5.59 0.21 1500670 0.00 0.00 dictresize 2.80 5.79 0.20 4513037 0.00 0.00 insertdict 2.52 5.97 0.18 3000333 0.00 0.00 PyDict_New 2.38 6.14 0.17 4510126 0.00 0.00 PyObject_Hash 2.38 6.31 0.17 4500459 0.00 0.00 int_hash 2.24 6.47 0.16 3006844 0.00 0.00 gc_list_append 2.10 6.62 0.15 4500115 0.00 0.00 dict_ass_sub 1.68 6.74 0.12 3006759 0.00 0.00 gc_list_remove 1.68 6.86 0.12 3001745 0.00 0.00 PyObject_Init 1.26 6.95 0.09 3005413 0.00 0.00 _PyGC_Insert Results for Python 2.1 Flat profile: Each sample counts as 0.01 seconds. % cumulative self self total time seconds seconds calls ms/call ms/call name 50.00 3.83 3.83 998 3.84 3.84 eval_code2 6.40 4.32 0.49 4520965 0.00 0.00 lookdict 4.70 4.68 0.36 4519083 0.00 0.00 PyDict_SetItem 4.70 5.04 0.36 3001756 0.00 0.00 dict_dealloc 4.18 5.36 0.32 4500441 0.00 0.00 PyObject_SetItem 3.39 5.62 0.26 4531084 0.00 0.00 insertdict 3.00 5.85 0.23 4500354 0.00 0.00 dict_ass_sub 2.48 6.04 0.19 4507608 0.00 0.00 int_hash 2.35 6.22 0.18 4576793 0.00 0.00 PyObject_Hash 2.22 6.39 0.17 3003590 0.00 0.00 PyObject_Init 2.22 6.56 0.17 3002045 0.00 0.00 PyDict_New 2.22 6.73 0.17 1502861 0.00 0.00 dictresize 1.96 6.88 0.15 3023157 0.00 0.00 gc_list_remove 1.70 7.01 0.13 3020996 0.00 0.00 _PyGC_Remove 1.57 7.13 0.12 3023452 0.00 0.00 gc_list_append From mal at lemburg.com Thu Feb 1 18:43:52 2001 From: mal at lemburg.com (M.-A. 
Lemburg) Date: Thu, 01 Feb 2001 18:43:52 +0100 Subject: [Python-Dev] Re: 2.1a2 release issues; mail.python.org still down References: <14969.31398.706875.540775@w221.z064000254.bwi-md.dsl.cnc.net> <3A798F14.D389A4A9@lemburg.com> <14969.38945.771075.55993@cj42289-a.reston1.va.home.com> Message-ID: <3A79A058.772239C2@lemburg.com> "Fred L. Drake, Jr." wrote: > > M.-A. Lemburg writes: > > Another issue: importing old extensions now causes a core dump > > due to the new slots for weak refs beind written to. > > I think(!) this should only affect really modules from 1.5.? and > earlier; type objects compiled after tp_xxx7/tp_xxx8 were added > *should not* have a problem with this. You don't give enough > information for me to be sure. Please let me know more if I'm wrong > (possible!). I've only tested these using my mx tools compiled against 1.5 -- really old, I know, but I still actively use that version. tp_xxx7/8 were added in Python 1.5.2, I think, so writing to them causes the core dump. > The only way I can see that there would be a problem like this is if > the type object contains a positive value for the tp_weaklistoffset > field (formerly tp_xxx8). > > > Solution: in addition to printing a warning, the _PyModule_Init() > > APIs should ignore all modules having an API level < 1010. > > For the specific problem you mention, we could add a type flag > (Py_TPFLAGS_HAVE_WEAKREFS) that could be tested; it would be set in > Py_TPFLAGS_DEFAULT. That would work, but is it really worth it ? The APIs have changed considerably, so the fact that I got away with a warning in Python2.0 doesn't really mean anything -- I do have a problem now, though, since maintaining versions for 1.5, 1.5.2, 2.0 and 2.1 will be a pain :-/ > On the other hand, I'd be perfectly happy to "ignore" modules with > the older C API version (especially if "ignore" lets me call > Py_FatalError()!). The API version changed because of the changes to > the function signatures of PyCode_New() and PyFrame_New(); these both > require additional parameters in API version 1010. Py_FatalError() is a bit too harsh, I guess. Wouldn't it suffice to raise an ImportError exception and have Py_InitModule() return NULL in case a module with an incompatible API version is encountered ? BTW, what happened to the same problem on Windows ? Do users still get a seg fault ? -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From fdrake at acm.org Thu Feb 1 18:48:48 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Thu, 1 Feb 2001 12:48:48 -0500 (EST) Subject: [Python-Dev] Re: 2.1a2 release issues; mail.python.org still down In-Reply-To: <3A79A058.772239C2@lemburg.com> References: <14969.31398.706875.540775@w221.z064000254.bwi-md.dsl.cnc.net> <3A798F14.D389A4A9@lemburg.com> <14969.38945.771075.55993@cj42289-a.reston1.va.home.com> <3A79A058.772239C2@lemburg.com> Message-ID: <14969.41344.176815.821673@cj42289-a.reston1.va.home.com> M.-A. Lemburg writes: > I've only tested these using my mx tools compiled against 1.5 -- > really old, I know, but I still actively use that version. tp_xxx7/8 > were added in Python 1.5.2, I think, so writing to them causes > the core dump. Yep. I said: > For the specific problem you mention, we could add a type flag > (Py_TPFLAGS_HAVE_WEAKREFS) that could be tested; it would be set in > Py_TPFLAGS_DEFAULT. 
M-A replied:
 > That would work, but is it really worth it ? The APIs have changed
 > considerably, so the fact that I got away with a warning in Python2.0

No, which is why I'm happy to tell you to recompile your extensions.

 > doesn't really mean anything -- I do have a problem now, though,
 > since maintaining versions for 1.5, 1.5.2, 2.0 and 2.1 will
 > be a pain :-/

Unless you're using PyCode_New() or PyFrame_New(), recompiling the
extension should be all you'll need -- unless you're pulling stunts
like ExtensionClass does (defining a type-like object using an old
definition of PyTypeObject).  If any of the functions you're calling
have changed signatures, you'll need to update them anyway.  The
weakref support will not cause you to change your code unless you want
to be able to refer to your extension types via weak refs.

 > Py_FatalError() is a bit too harsh, I guess. Wouldn't it
 > suffice to raise an ImportError exception and have Py_InitModule()
 > return NULL in case a module with an incompatible API version is
 > encountered ?

I suppose we could do that, but it'll take more than my agreement to
make that happen.  Guido seemed to think that few modules will be
calling PyCode_New() and PyFrame_New() directly (pyexpat being the
exception).

  -Fred

-- 
Fred L. Drake, Jr.
PythonLabs at Digital Creations


From esr at thyrsus.com Thu Feb 1 19:00:57 2001
From: esr at thyrsus.com (Eric S. Raymond)
Date: Thu, 1 Feb 2001 13:00:57 -0500
Subject: [Python-Dev] Re: Sets: elt in dict, lst.include - really begs for a PEP
In-Reply-To: <200101312321.MAA03263@s454.cosc.canterbury.ac.nz>; from greg@cosc.canterbury.ac.nz on Thu, Feb 01, 2001 at 12:21:04PM +1300
References: <14968.16962.830739.920771@anthem.wooz.org> <200101312321.MAA03263@s454.cosc.canterbury.ac.nz>
Message-ID: <20010201130057.A12500@thyrsus.com>

Greg Ewing
                              
:

> Yuck. I don't like any of this "for x in y.iterator_something()"
> stuff. The things you're after aren't "in" the iterator, they're
> "in" the dict. I don't want to know that there are iterators
> involved.

I must say I agree.  Having explicit iterators obfuscates what is
going on, rather than clarifying it -- the details of how we get the
next item should be hidden as far below the surface of the code as
possible, so programmers don't have to think about them.

The only cases I know of where an explicit iterator object is even
semi-justified are those where there is substantial control state to
be kept around between iterations and that state has to be visible to
the application code (not the case with dictionaries or any other
built-in type).  In the cases where that *is* true (interruptible tree
traversal being the paradigm example), we would be better served with
Icon-style generators or a continuations facility a la Stackless
Python.

I'm a hard -1 on explicit iterator objects for built-in types.  Let's
keep it simple, guys.
-- 
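For what it's worth, the plain "in" spelling the posters prefer needs no iterator machinery at the C level either: for dictionaries a containment test reduces to a lookup.  A sketch, purely for illustration and not the actual dictobject.c code of any release:

    /*
     * Sketch only: "key in somedict" as a plain lookup, with no iterator
     * object involved.  The function name is invented.
     */
    #include "Python.h"

    static int
    dict_has_key(PyObject *dict, PyObject *key)
    {
        /* PyDict_GetItem() returns a borrowed reference to the value, or
           NULL when the key is absent; it does not raise for a miss. */
        return PyDict_GetItem(dict, key) != NULL;
    }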
                              Eric S. Raymond The Constitution is not neutral. It was designed to take the government off the backs of the people. -- Justice William O. Douglas From mal at lemburg.com Thu Feb 1 19:05:22 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Thu, 01 Feb 2001 19:05:22 +0100 Subject: [Python-Dev] Benchmarking "fun" (was Re: Python 2.1 slower than 2.0) References: 
                              
                              
                              <3A78226B.2E177EFE@lemburg.com> <20010131220033.O962@xs4all.nl> Message-ID: <3A79A562.54682A39@lemburg.com> Thomas Wouters wrote: > > On Wed, Jan 31, 2001 at 03:34:19PM +0100, M.-A. Lemburg wrote: > > > I have made similar experience with -On with n>3 compared to -O2 > > using pgcc (gcc optimized for PC processors). BTW, the Linux > > kernel uses "-Wall -Wstrict-prototypes -O3 -fomit-frame-pointer" > > as CFLAGS -- perhaps Python should too on Linux ?! > > [...lots of useful tips about gcc compiler options...] Thanks for the useful details, Thomas. I guess on PC machines, -fomit-frame-pointer does have some use due to the restricted number of available registers. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mal at lemburg.com Thu Feb 1 19:15:24 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Thu, 01 Feb 2001 19:15:24 +0100 Subject: [Python-Dev] Re: 2.1a2 release issues; mail.python.org still down References: <14969.31398.706875.540775@w221.z064000254.bwi-md.dsl.cnc.net> <3A798F14.D389A4A9@lemburg.com> <14969.38945.771075.55993@cj42289-a.reston1.va.home.com> <3A79A058.772239C2@lemburg.com> <14969.41344.176815.821673@cj42289-a.reston1.va.home.com> Message-ID: <3A79A7BC.58997544@lemburg.com> "Fred L. Drake, Jr." wrote: > > M.-A. Lemburg writes: > > I've only tested these using my mx tools compiled against 1.5 -- > > really old, I know, but I still actively use that version. tp_xxx7/8 > > were added in Python 1.5.2, I think, so writing to them causes > > the core dump. > > Yep. > > I said: > > For the specific problem you mention, we could add a type flag > > (Py_TPFLAGS_HAVE_WEAKREFS) that could be tested; it would be set in > > Py_TPFLAGS_DEFAULT. > > M-A replied: > > That would work, but is it really worth it ? The APIs have changed > > considerably, so the fact that I got away with a warning in Python2.0 > > No, which is why I'm happy to tell you to recomple your extensions. > > > doesn't really mean anything -- I do have a problem now, though, > > since maintaining versions for 1.5, 1.5.2, 2.0 and 2.1 will > > be a pain :-/ > > Unless you're using PyCode_New() or PyFrame_New(), recompiling the > extension should be all you'll need -- unless you're pulling stunts > like ExtensionClass does (defining a type-like object using an old > definition of PyTypeObject). If any of the functions you're calling > have changed signatures, you'll need to update them anyway. The > weakref support will not cause you to change your code unless you want > to be able to refer to your extension types via weak refs. The problem is not recompiling the extensions, it's that I will have to keep compiled versions around for all versions I have installed on my machine. > > Py_FatalError() is a bit too harsh, I guess. Wouldn't it > > suffice to raise an ImportError exception and have Py_InitModule() > > return NULL in case a module with an incompatible API version is > > encountered ? > > I suppose we could do that, but it'll take more than my agreement to > make that happen. Guido seemed to think that few modules will be > calling PyCode_New() and PyFrame_New() directly (pyexpat being the > exception). 
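A minimal sketch of the alternative discussed in the quoted exchange above -- refusing the import with an ImportError rather than aborting -- might look like the following.  The function name and message are invented for illustration; this is not the real module-initialization code of any release:

    /*
     * Illustrative only: the shape of an ImportError-based API version
     * check instead of Py_FatalError().
     */
    #include "Python.h"

    static int
    check_module_api(const char *name, int module_api_version)
    {
        if (module_api_version != PYTHON_API_VERSION) {
            PyErr_Format(PyExc_ImportError,
                         "module %.200s was built with C API version %d, "
                         "but this interpreter uses version %d",
                         name, module_api_version, PYTHON_API_VERSION);
            return -1;  /* caller would then return NULL from Py_InitModule() */
        }
        return 0;
    }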
The warnings are at least as annoying as recompiling the extensions, even more since each and every imported extension will moan about the version difference ;-) -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From ping at lfw.org Thu Feb 1 19:21:12 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Thu, 1 Feb 2001 10:21:12 -0800 (PST) Subject: [Python-Dev] Re: Sets: elt in dict, lst.include In-Reply-To: <200101312359.MAA03278@s454.cosc.canterbury.ac.nz> Message-ID: 
                              
                              On Thu, 1 Feb 2001, Greg Ewing wrote: > Guido van Rossum 
                              
:
> 
> > But it *is* true that coroutines are a very attractive piece of land
> > "just nextdoor".
> 
> Unfortunately there's a big high fence in between topped with
> barbed wire and patrolled by vicious guard dogs. :-(

Perhaps you meant, lightly killed and topped with quintuple-smooth,
treble milk chocolate?  :)


-- ?!ng

"PS: tongue is firmly in cheek
 PPS: regrettably, that's my tongue in my cheek"
    -- M. H.


From sdm7g at virginia.edu Thu Feb 1 20:22:35 2001
From: sdm7g at virginia.edu (Steven D. Majewski)
Date: Thu, 1 Feb 2001 14:22:35 -0500 (EST)
Subject: [Python-Dev] Case sensitive import.
Message-ID:
                              
                              I see from one of the comments on my patch #103459 that there is a history to this issue (patch #103154) I had assumed that renaming modules and possibly breaking existing code was not an option, but this seems to have been considered in the discussion on that earlier patch. Is there any consensus on how to deal with this ? I would *really* like to get SOME fix -- either my patch, or a renaming of FCNTL, TERMIOS, SOCKET, into the next release. It's not clear to me whether the issues on other systems are the same. On mac-osx, the OS is BSD unix based and when using a unix file system, it's case sensitive. But the standard filesystem is Apple's HFS+, which is case preserving but case insensitive. ( That means that opening "abc" will succeed if there is a file named "abc", "ABC", "Abc" , "aBc" ... , but a directory listing will show "abc" ) I had guessed that the CHECK_IMPORT_CASE ifdefs and the corresponding configure switch were there for this sort of problem, and all I had to do was add a macosx implementation of check_case(), but returning false from check_case causes the search to fail -- it does not continue until it find a matching module. So it appears that I don't understand the issues on other platforms and what CHECK_IMPORT_CASE intends to fix. -- Steve Majewski From jeremy at alum.mit.edu Thu Feb 1 20:27:45 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Thu, 1 Feb 2001 14:27:45 -0500 (EST) Subject: [Python-Dev] python setup.py fails with illegal import (+ fix) In-Reply-To: <20010131200507.A106931E1AD@bireme.oratrix.nl> References: <20010131200507.A106931E1AD@bireme.oratrix.nl> Message-ID: <14969.47281.950974.882075@w221.z064000254.bwi-md.dsl.cnc.net> I checked in a different fix last night, which you have probably discovered now that python-dev is sending mail again. Jeremy From fdrake at acm.org Thu Feb 1 20:51:33 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Thu, 1 Feb 2001 14:51:33 -0500 (EST) Subject: [Python-Dev] any opinion on 'make quicktest'? In-Reply-To: <14969.36106.386207.593290@w221.z064000254.bwi-md.dsl.cnc.net> References: <14969.36106.386207.593290@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <14969.48709.111307.650978@cj42289-a.reston1.va.home.com> Jeremy Hylton writes: > I run the regression test a lot. I have found that it is often useful > to exclude some of the slowest tests for most of the test runs and I think this would be nice. > + QUICKTESTOPTS= $(TESTOPTS) -x test_thread test_signal test_strftime \ > + test_unicodedata test_re test_sre test_select test_poll > + quicktest: all platform > + -rm -f $(srcdir)/Lib/test/*.py[co] > + -PYTHONPATH= $(TESTPYTHON) $(TESTPROG) $(QUICKTESTOPTS) > + PYTHONPATH= $(TESTPYTHON) $(TESTPROG) $(QUICKTESTOPTS) In fact, for this, I'd only run the test once and would skip the "rm" command as well. I usually just run the regression test once (but with all modules, to avoid the extra typing). -Fred -- Fred L. Drake, Jr. 
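Returning to Steven Majewski's question, earlier in this batch of messages, about what check_case() is meant to fix: the intent is that a module file found on a case-insensitive but case-preserving filesystem should only be accepted when its on-disk spelling matches the name in the import statement exactly.  A rough sketch of that comparison, illustrative only and not the real code in Python/import.c:

    /*
     * Sketch of the case check under discussion: a match found on a
     * case-insensitive volume only counts when the stored name equals the
     * requested name byte for byte.  The function name is invented.
     */
    #include <string.h>

    static int
    case_matches(const char *name_on_disk, const char *requested_name)
    {
        /* strcmp() is case-sensitive, so a directory entry "String.py" is
           rejected when the program said "import string", even though the
           filesystem would happily open either spelling. */
        return strcmp(name_on_disk, requested_name) == 0;
    }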
                              
                              PythonLabs at Digital Creations From jeremy at alum.mit.edu Thu Feb 1 20:58:29 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Thu, 1 Feb 2001 14:58:29 -0500 (EST) Subject: [Python-Dev] any opinion on 'make quicktest'? In-Reply-To: <14969.48709.111307.650978@cj42289-a.reston1.va.home.com> References: <14969.36106.386207.593290@w221.z064000254.bwi-md.dsl.cnc.net> <14969.48709.111307.650978@cj42289-a.reston1.va.home.com> Message-ID: <14969.49125.52032.638762@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "FLD" == Fred L Drake, 
                              
                              writes: >> + QUICKTESTOPTS= $(TESTOPTS) -x test_thread test_signal >> test_strftime \ >> + test_unicodedata test_re test_sre test_select test_poll >> + quicktest: all platform >> + -rm -f $(srcdir)/Lib/test/*.py[co] >> + -PYTHONPATH= $(TESTPYTHON) $(TESTPROG) $(QUICKTESTOPTS) >> + PYTHONPATH= $(TESTPYTHON) $(TESTPROG) $(QUICKTESTOPTS) FLD> In fact, for this, I'd only run the test once and would skip the FLD> "rm" command as well. I usually just run the regression test FLD> once (but with all modules, to avoid the extra typing). Actually, I think the rm is important. I've spent most of the last month running make test to check the compiler. Jeremy From fdrake at acm.org Thu Feb 1 20:56:47 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Thu, 1 Feb 2001 14:56:47 -0500 (EST) Subject: [Python-Dev] any opinion on 'make quicktest'? In-Reply-To: <14969.49125.52032.638762@w221.z064000254.bwi-md.dsl.cnc.net> References: <14969.36106.386207.593290@w221.z064000254.bwi-md.dsl.cnc.net> <14969.48709.111307.650978@cj42289-a.reston1.va.home.com> <14969.49125.52032.638762@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <14969.49023.323038.923328@cj42289-a.reston1.va.home.com> Jeremy Hylton writes: > Actually, I think the rm is important. I've spent most of the last > month running make test to check the compiler. Yeah, but you're a special case. ;-) That's fine -- it's still much better than running the long version every time. -Fred -- Fred L. Drake, Jr. 
                              
                              PythonLabs at Digital Creations From barry at digicool.com Thu Feb 1 21:22:38 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Thu, 1 Feb 2001 15:22:38 -0500 Subject: [Python-Dev] any opinion on 'make quicktest'? References: <14969.36106.386207.593290@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <14969.50574.964108.822920@anthem.wooz.org> >>>>> "JH" == Jeremy Hylton 
                              
                              writes: JH> I run the regression test a lot. I have found that it is JH> often useful to exclude some of the slowest tests for most of JH> the test runs and then do a full test run before I commit JH> changes. Would anyone be opposed to a quicktest target in the JH> Makefile that supports this practice? There are a small JH> number of tests that each take at least 10 seconds to JH> complete. I'm strongly +1 on this, because I often run the test suite on an Insure'd executable. It takes a looonngg time for even the quick tests. -Barry From ping at lfw.org Thu Feb 1 17:58:43 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Thu, 1 Feb 2001 08:58:43 -0800 (PST) Subject: [Python-Dev] Re: Sets: elt in dict, lst.include In-Reply-To: <14968.37210.886842.820413@beluga.mojam.com> Message-ID: 
                              
                              On Wed, 31 Jan 2001, Skip Montanaro wrote: > What would break if we decided to simply add __getitem__ (and other sequence > methods) to list object's method table? Would they foul something up or > would simply sit around quietly waiting for hasattr to notice them? That would work for lists, but not for any extension types that use the sq_* protocol to behave like sequences. For now, anyway, we're stuck with the two separate protocols whether we like it or not. -- ?!ng Two links diverged in a Web, and i -- i took the one less travelled by. -- with apologies to Robert Frost From thomas at xs4all.net Thu Feb 1 23:30:48 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Thu, 1 Feb 2001 23:30:48 +0100 Subject: [Python-Dev] any opinion on 'make quicktest'? In-Reply-To: <14969.36106.386207.593290@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Thu, Feb 01, 2001 at 11:21:30AM -0500 References: <14969.36106.386207.593290@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010201233048.R962@xs4all.nl> On Thu, Feb 01, 2001 at 11:21:30AM -0500, Jeremy Hylton wrote: > I run the regression test a lot. I have found that it is often useful > to exclude some of the slowest tests for most of the test runs and > then do a full test run before I commit changes. Would anyone be > opposed to a quicktest target in the Makefile that supports this > practice? There are a small number of tests that each take at least > 10 seconds to complete. Definately +1 here. On BSDI 4.0, which I try to test regularly, test_signal hangs (because of threading bugs in BSDI, nothing Python can solve) and test_select/test_poll either crash right away, or hang as well (same as with test_signal, but could be specific to the box I'm running it on.) So I've been forced to do it by hand. I'm not sure why I didn't automate it yet, but make quicktest would be very welcome :) > + QUICKTESTOPTS= $(TESTOPTS) -x test_thread test_signal test_strftime \ > + test_unicodedata test_re test_sre test_select test_poll -- Thomas Wouters 
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From barry at digicool.com Thu Feb 1 23:35:25 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Thu, 1 Feb 2001 17:35:25 -0500 Subject: [Python-Dev] Benchmarking "fun" (was Re: Python 2.1 slower than 2.0) References: 
                              
                              <3A7890AB.69B893F9@lemburg.com> Message-ID: <14969.58541.406274.212776@anthem.wooz.org> >>>>> "M" == M 
                              
                              writes: M> Or do we have a 2.1 feature freeze already ? Strictly speaking, there is no feature freeze until the first beta is released. -Barry From jeremy at alum.mit.edu Thu Feb 1 23:39:25 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Thu, 1 Feb 2001 17:39:25 -0500 (EST) Subject: [Python-Dev] Benchmarking "fun" (was Re: Python 2.1 slower than 2.0) In-Reply-To: <3A7890AB.69B893F9@lemburg.com> References: 
                              
                              <3A7890AB.69B893F9@lemburg.com> Message-ID: <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "MAL" == M -A Lemburg 
                              
                              writes: MAL> Tim Peters wrote: >> >> [Michael Hudson] >> > ... Can anyone try this on Windows? Seeing as windows malloc >> > reputedly sucks, maybe the differences would be bigger. >> >> No time now (pymalloc is a non-starter for 2.1). Was tried in >> the past on Windows. Helped significantly. Unclear how much was >> simply due to exploiting the global interpreter lock, though. >> "Windows" is also a multiheaded beast (e.g., NT has very >> different memory performance characteristics than 95). MAL> We're still in alpha, no ? The last planned alpha is going to be released tonight or early tomorrow. I'm reluctant to add a large patch that I'm unfamiliar with in the last 24 hours before the release. MAL> Or do we have a 2.1 feature freeze already ? We aren't adding any major new features that haven't been PEPed. I'd like to see a PEP on this subject. Jeremy From greg at cosc.canterbury.ac.nz Thu Feb 1 23:45:02 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Fri, 02 Feb 2001 11:45:02 +1300 (NZDT) Subject: [Python-Dev] Type/class differences (Re: Sets: elt in dict, lst.include) In-Reply-To: 
                              
                              Message-ID: <200102012245.LAA03402@s454.cosc.canterbury.ac.nz> Tim Peters 
                              
                              : > The old type/class split: list is a type, and types spell their "method > tables" in ways that have little in common with how classes do it. Maybe as a first step towards type/class unification one day, we could add __xxx__ attributes to all the builtin types, and start to think of the method table as the definitive source of all methods, with the tp_xxx slots being a sort of cache for the most commonly used ones. Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg at cosc.canterbury.ac.nz +--------------------------------------+ From tim.one at home.com Fri Feb 2 07:44:58 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 2 Feb 2001 01:44:58 -0500 Subject: [Python-Dev] Showstopper in import? Message-ID: 
                              
                              Turns out IDLE no longer runs. Starting at line 88 of Tools/idle/EditorWindow.py we have this class defn: class EditorWindow: from Percolator import Percolator from ColorDelegator import ColorDelegator from UndoDelegator import UndoDelegator from IOBinding import IOBinding import Bindings from Tkinter import Toplevel from MultiStatusBar import MultiStatusBar about_title = about_title ... This leads to what looks like a bug (if we're to believe the error msg, which doesn't mean what it says): C:\Pyk>python tools/idle/idle.pyw Traceback (most recent call last): File "tools/idle/idle.pyw", line 2, in ? import idle File "C:\PYK\Tools\idle\idle.py", line 11, in ? import PyShell File "C:\PYK\Tools\idle\PyShell.py", line 15, in ? from EditorWindow import EditorWindow, fixwordbreaks File "C:\PYK\Tools\idle\EditorWindow.py", line 88, in ? class EditorWindow: File "C:\PYK\Tools\idle\EditorWindow.py", line 90, in EditorWindow from Percolator import Percolator SyntaxError: 'from ... import *' may only occur in a module scope Hit return to exit... C:\Pyk> Sorry for the delay in reporting this! I've had other problems with the Windows installer (all fixed now), and IDLE *normally* executes pythonw.exe on Windows, which tosses error msgs into a bit bucket. So all I knew was that IDLE "didn't come up", and took the high-probability guess that it was due to some other problem I was already tracking down. Lost that bet. From tim.one at home.com Fri Feb 2 07:47:59 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 2 Feb 2001 01:47:59 -0500 Subject: [Python-Dev] Quick Unix work needed Message-ID: 
                              
                              Trent Mick's C API testing framework has been checked in, along with everything needed to get it working on Windows: http://sourceforge.net/patch/?func=detailpatch&patch_id=101162& group_id=5470 It still needs someone to add it to the Unixish builds. You'll know that it worked if the new std test test_capi.py succeeds. From RoD at qnet20.com Thu Feb 1 23:23:59 2001 From: RoD at qnet20.com (Rod) Date: Thu, 1 Feb 2001 23:23:59 Subject: [Python-Dev] Diamond x Jungle Carpet Python Message-ID: <20010202072422.6B673F4DD@mail.python.org> I have several Diamond x Jungle Capret Pythons for SALE. Make me an offer.... Go to: www.qnet20.com From tim.one at home.com Fri Feb 2 08:34:07 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 2 Feb 2001 02:34:07 -0500 Subject: [Python-Dev] insertdict slower? Message-ID: 
                              
                              [Jeremy] > I was curious about what the DictCreation microbenchmark in > pybench was slower (about 15%) with 2.1 than with 2.0. I ran > both with profiling enabled (-pg, no -O) and see that insertdict > is a fair bit slower in 2.1. Anyone with dict implementation > expertise want to hazard a guess about this? You don't need to be an expert for this one: just look at the code! There's nothing to it, and not even a comment has changed in insertdict since 2.0. I don't believe the profile. There are plenty of other things to be suspicious about too (e.g., it showed 285 calls to eval_code2 in 2.0, but 998 in 2.1). So you're looking at a buggy profiler, a buggy profiling procedure, or a Cache Mystery (the catch-all excuse for anything that's incomprehensible without HW-level monitoring tools). WRT the latter, try inserting a renamed copy of insertdict before and after the existing one, and make them extern to discourage the compiler+linker from throwing them away. If the slowdown goes away, you're probably looking at an i-cache conflict accident. From tim.one at home.com Fri Feb 2 09:39:40 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 2 Feb 2001 03:39:40 -0500 Subject: [Python-Dev] Case sensitive import Message-ID: 
                              
                              [Steven D. Majewski] > ... > Is there any consensus on how to deal with this ? No, else it would have been done already. > ... > So it appears that I don't understand the issues on other > platforms and what CHECK_IMPORT_CASE intends to fix. It started on Windows. The belief was that people (not developers -- your personal testimony doesn't count, and neither does mine <0.3 wink>) on case-insensitive file systems don't pay much attention to the case of names they type. So the belief was (perhaps it even happened -- I wasn't paying attention at the time, since I was a Unix Dweeb then) people would carelessly write, e.g., import String and then pick up some accidental String.py module instead of the builtin "string" they intended. So Python started checking for case-match on Windows, and griping if the *first* module name Windows returns didn't match case exactly. OK, it's actually more complicated than that, because some network filesystems used on Windows actually changed all filenames to uppercase. So there's an exception made for that wart too. Anyway, looks like a blind guess to me whether this actually does anyone any good. For efficiency, it *does* stop at the first, so if the user typed import string *intending* to import String.py, they'd never hear about their mistake. So it doesn't really address the whole (putative) problem regardless. It only gripes if the first case-insensitive match on the path doesn't match exactly. However, *if* it makes sense on Windows, then it makes exactly as much sense on "the standard filesystem ... Apple's HFS+, which is case preserving but case insensitive" -- same deal as Windows. I see no reason to believe that non-developer users on Macs are going to be more case-savvy than on Windows (or is there a reason to believe that?). Another wart is that it's easy to create Python modules that import fine on Unix, but blow up if you try to run them on Windows (or HFS+). That sucks too, and isn't just theoretical (although in practice it's a lot less common than tracking down binary files opened in text mode!). The Cygwin people have a related problem: they *are* trying to emulate Unix, but doing so on a Windows box, so, umm, enjoy the best of all worlds. I'd rather see the same rule used everywhere (keep going until finding an exact match), and tough beans to the person who writes import String on Windows (or Mac) intending "string". Windows probably still needs a unique wart to deal with case-destroying network filesystems, though. It's still terrible style to *rely* on case-sensitivity in file names, and all such crap should be purged from the Python distribution regardless. guido-will-agree-with-exactly-one-of-these-claims
                              
                              -ly y'rs - tim From mal at lemburg.com Fri Feb 2 10:01:34 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 02 Feb 2001 10:01:34 +0100 Subject: [Python-Dev] Showstopper in import? References: 
                              
                              Message-ID: <3A7A776E.6ECC626E@lemburg.com> Tim Peters wrote: > > Turns out IDLE no longer runs. Starting at line 88 of > Tools/idle/EditorWindow.py we have this class defn: > > class EditorWindow: > > from Percolator import Percolator > from ColorDelegator import ColorDelegator > from UndoDelegator import UndoDelegator > from IOBinding import IOBinding > import Bindings > from Tkinter import Toplevel > from MultiStatusBar import MultiStatusBar > > about_title = about_title > ... > > This leads to what looks like a bug (if we're to believe the error msg, > which doesn't mean what it says): > > C:\Pyk>python tools/idle/idle.pyw > Traceback (most recent call last): > File "tools/idle/idle.pyw", line 2, in ? > import idle > File "C:\PYK\Tools\idle\idle.py", line 11, in ? > import PyShell > File "C:\PYK\Tools\idle\PyShell.py", line 15, in ? > from EditorWindow import EditorWindow, fixwordbreaks > File "C:\PYK\Tools\idle\EditorWindow.py", line 88, in ? > class EditorWindow: > File "C:\PYK\Tools\idle\EditorWindow.py", line 90, in EditorWindow > from Percolator import Percolator > SyntaxError: 'from ... import *' may only occur in a module scope > Hit return to exit... I have already reported this to Jeremy. There are other instances of 'from x import *' in function and class scope too, e.g. some test() functions in the standard dist do this. I am repeating myself here, but I think that this single change will cause so many people to find their scripts are failing that it is really not worth it. Better issue a warning than raise an exception here ! -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From jack at oratrix.nl Fri Feb 2 10:45:34 2001 From: jack at oratrix.nl (Jack Jansen) Date: Fri, 02 Feb 2001 10:45:34 +0100 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules _testmodule.c,NONE,1.1 In-Reply-To: Message by Tim Peters 
                              
                              , Thu, 01 Feb 2001 21:57:17 -0800 , 
                              
                              Message-ID: <20010202094535.7582E373C95@snelboot.oratrix.nl> Is "_test" a good choice of name for this module? It feels a bit too generic, isn't something like _test_api (or _test_python_c_api) better? -- Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++ Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++ www.oratrix.nl/~jack | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm From tim.one at home.com Fri Feb 2 10:50:36 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 2 Feb 2001 04:50:36 -0500 Subject: [Python-Dev] Showstopper in import? In-Reply-To: <3A7A776E.6ECC626E@lemburg.com> Message-ID: 
                              
                              [M.-A. Lemburg] > I have already reported this to Jeremy. There are other instances > of 'from x import *' in function and class scope too, e.g. > some test() functions in the standard dist do this. But there are no instances of "from x import *" in the case I reported, despite that the error msg (erroneously!) claimed there was. It's complaining about from Percolator import Percolator in a class definition. That smells like a bug, not a debatable design choice. > I am repeating myself here, but I think that this single change > will cause so many people to find their scripts are failing > that it is really not worth it. Provided the case above is fixed, IDLE will indeed fail to compile anyway, because Guido does from Tkinter import * inside several functions. But that's a different problem. > Better issue a warning than raise an exception here ! If Jeremy can't generate correct code, a warning is too weak. From mal at lemburg.com Fri Feb 2 11:00:28 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 02 Feb 2001 11:00:28 +0100 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) References: 
                              
                              <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <3A7A853C.B38C1DF5@lemburg.com> Jeremy Hylton wrote: > > >>>>> "MAL" == M -A Lemburg 
                              
writes:
> 
> MAL> Tim Peters wrote:
> >> 
> >> [Michael Hudson]
> >> > ... Can anyone try this on Windows? Seeing as windows malloc
> >> > reputedly sucks, maybe the differences would be bigger.
> >> 
> >> No time now (pymalloc is a non-starter for 2.1). Was tried in
> >> the past on Windows. Helped significantly. Unclear how much was
> >> simply due to exploiting the global interpreter lock, though.
> >> "Windows" is also a multiheaded beast (e.g., NT has very
> >> different memory performance characteristics than 95).
> 
> MAL> We're still in alpha, no ?
> 
> The last planned alpha is going to be released tonight or early
> tomorrow.  I'm reluctant to add a large patch that I'm unfamiliar with
> in the last 24 hours before the release.
> 
> MAL> Or do we have a 2.1 feature freeze already ?
> 
> We aren't adding any major new features that haven't been PEPed.  I'd
> like to see a PEP on this subject.

I don't see a PEP for your import patch either ;-)

Seriously, I am viewing the addition of pymalloc during the alpha
phase or even the betas as a test for the usability of such an
approach.  If it fails, fine, then we take it out again.  If nobody
notices, great, then leave it in.

There would be a need for a PEP if we need to discuss APIs,
interfaces, etc. but all this has already been done by Vladimir
a long time ago.  He put much effort into getting the Python
malloc macros to work in the intended way so that pymalloc only
has to exchange these macro definitions.

I don't understand why we cannot take the risk of trying this
out in an alpha version.  Besides, Vladimir's malloc patch
is opt-in: you have to compile Python using --with-pymalloc
to enable it, so it doesn't really harm anyone not knowing what
he/she is doing.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company:      http://www.egenix.com/
Consulting:   http://www.lemburg.com/
Python Pages: http://www.lemburg.com/python/


From tim.one at home.com Fri Feb 2 11:05:41 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 2 Feb 2001 05:05:41 -0500
Subject: [Python-Dev] RE: [Python-checkins] CVS: python/dist/src/Modules _testmodule.c,NONE,1.1
In-Reply-To: <20010202094535.7582E373C95@snelboot.oratrix.nl>
Message-ID:
                              
                              [Jack Jansen] > Is "_test" a good choice of name for this module? It feels a bit > too generic, isn't something like _test_api (or _test_python_c_api) > better? If someone feels strongly about that (I don't), feel free to change the name for 2.1b1. From mal at lemburg.com Fri Feb 2 11:08:16 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 02 Feb 2001 11:08:16 +0100 Subject: [Python-Dev] Showstopper in import? References: 
                              
                              Message-ID: <3A7A8710.D8A51718@lemburg.com> Tim Peters wrote: > > [M.-A. Lemburg] > > I have already reported this to Jeremy. There are other instances > > of 'from x import *' in function and class scope too, e.g. > > some test() functions in the standard dist do this. > > But there are no instances of "from x import *" in the case I reported, > despite that the error msg (erroneously!) claimed there was. It's > complaining about > > from Percolator import Percolator > > in a class definition. That smells like a bug, not a debatable design > choice. Percolator has "from x import *" code. This is what is causing the exception. I think it has already been fixed in CVS though, so should work again. > > I am repeating myself here, but I think that this single change > > will cause so many people to find their scripts are failing > > that it is really not worth it. > > Provided the case above is fixed, IDLE will indeed fail to compile anyway, > because Guido does > > from Tkinter import * > > inside several functions. But that's a different problem. How is it different ? Even though I agree that "from x import *" is bad style, it is quite common in testing code or code which imports a set of symbols from generated modules or modules containing only constants e.g. for protocols, error codes, etc. > > Better issue a warning than raise an exception here ! > > If Jeremy can't generate correct code, a warning is too weak. So this is the price we pay for having nested scopes... :-( -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From tim.one at home.com Fri Feb 2 11:35:16 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 2 Feb 2001 05:35:16 -0500 Subject: [Python-Dev] Showstopper in import? In-Reply-To: <3A7A8710.D8A51718@lemburg.com> Message-ID: 
                              
                              > Percolator has "from x import *" code. This is what is causing the > exception. Woo hoo! The traceback bamboozled me: it doesn't show any code from Percolator.py, just the import in EditorWindow.py. So I'll call *that* the bug <0.7 wink>. > I think it has already been fixed in CVS though, so should > work again. Doesn't work for me. If someone does patch Percolator.py, though, it will just blow up again at from IOBinding import IOBinding . Guido was apparently fond of this trick. > Even though I agree that "from x import *" > is bad style, it is quite common in testing code or code > which imports a set of symbols from generated modules or > modules containing only constants e.g. for protocols, error > codes, etc. I know I'm being brief, but please don't take that as disagreement. It's heading on 6 in the morning here and I've been plugging away at the release for a loooong time. I'm not in favor of banning "from x import *" if there's an alternative. But I don't grok the implementation issues in this area well enough right now to address it; I'm also hoping that Jeremy can, and much more quickly. >>> Better issue a warning than raise an exception here ! >> If Jeremy can't generate correct code, a warning is too weak. > So this is the price we pay for having nested scopes... :-( I don't know. It apparently is the state of the code at this instant. sleeping-on-it<0.1-wink>-ly y'rs - tim From mal at lemburg.com Fri Feb 2 12:38:07 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 02 Feb 2001 12:38:07 +0100 Subject: [Python-Dev] Showstopper in import? References: 
                              
                              Message-ID: <3A7A9C1F.7A8619AE@lemburg.com> Tim Peters wrote: > > > Percolator has "from x import *" code. This is what is causing the > > exception. > > Woo hoo! The traceback bamboozled me: it doesn't show any code from > Percolator.py, just the import in EditorWindow.py. So I'll call *that* the > bug <0.7 wink>. > > > I think it has already been fixed in CVS though, so should > > work again. > > Doesn't work for me. If someone does patch Percolator.py, though, it will > just blow up again at > > from IOBinding import IOBinding > > . Guido was apparently fond of this trick. For completeness, here are all instance I've found in the standard dist: ./Tools/pynche/pyColorChooser.py: -- from Tkinter import * ./Tools/idle/IOBinding.py: -- from Tkinter import * ./Tools/idle/Percolator.py: -- from Tkinter import * > > Even though I agree that "from x import *" > > is bad style, it is quite common in testing code or code > > which imports a set of symbols from generated modules or > > modules containing only constants e.g. for protocols, error > > codes, etc. > > I know I'm being brief, but please don't take that as disagreement. It's > heading on 6 in the morning here and I've been plugging away at the release > for a loooong time. I'm not in favor of banning "from x import *" if > there's an alternative. But I don't grok the implementation issues in this > area well enough right now to address it; I'm also hoping that Jeremy can, > and much more quickly. > > >>> Better issue a warning than raise an exception here ! > > >> If Jeremy can't generate correct code, a warning is too weak. > > > So this is the price we pay for having nested scopes... :-( > > I don't know. It apparently is the state of the code at this instant. Ok, Good Night then :-) -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From thomas at xs4all.net Fri Feb 2 13:06:54 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Fri, 2 Feb 2001 13:06:54 +0100 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) In-Reply-To: <3A7A853C.B38C1DF5@lemburg.com>; from mal@lemburg.com on Fri, Feb 02, 2001 at 11:00:28AM +0100 References: 
                              
                              <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> Message-ID: <20010202130654.T962@xs4all.nl> On Fri, Feb 02, 2001 at 11:00:28AM +0100, M.-A. Lemburg wrote: > There would be a need for a PEP if we need to discuss APIs, > interfaces, etc. but all this has already been done by Valdimir > a long time ago. He put much effort into getting the Python > malloc macros to work in the intended way so that pymalloc only > has exchange these macro definitions. > I don't understand why we cannot take the risk of trying this > out in an alpha version. Besides, Vladimir's malloc patch > is opt-in: you have to compile Python using --with-pymalloc > to enable it, so it doesn't really harm anyone not knowing what > he/she is doing. +1 on putting it in, in alpha2 or beta1, on an opt-in basis. +0 on putting it in *now* (alpha2, not beta1) and on by default. -- Thomas Wouters 
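The opt-in behaviour Thomas describes falls out of the fact that the allocator is chosen at compile time: building --with-pymalloc only changes what the core allocation macros expand to.  A much-simplified sketch of that indirection, with invented names (see Vladimir Marangozov's patch for the real macro set):

    /*
     * Simplified sketch of compile-time allocator selection.  The macro
     * and function names here are illustrative, not the exact ones used
     * by the pymalloc patch.
     */
    #include <stdlib.h>

    #ifdef WITH_PYMALLOC
    void *_PyMalloc_Alloc(size_t nbytes);        /* pymalloc pool allocator */
    #define CORE_OBJECT_MALLOC(n)   _PyMalloc_Alloc(n)
    #else
    #define CORE_OBJECT_MALLOC(n)   malloc(n)    /* plain libc malloc */
    #endif

Code built without the configure switch keeps going straight to the platform malloc, which is why the patch cannot affect anyone who does not ask for it.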
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From mal at lemburg.com Fri Feb 2 13:08:32 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 02 Feb 2001 13:08:32 +0100 Subject: [Python-Dev] Quick Unix work needed References: 
                              
                              Message-ID: <3A7AA340.B3AFF106@lemburg.com> Tim Peters wrote: > > Trent Mick's C API testing framework has been checked in, along with > everything needed to get it working on Windows: > > http://sourceforge.net/patch/?func=detailpatch&patch_id=101162& > group_id=5470 > > It still needs someone to add it to the Unixish builds. Done. > You'll know that it worked if the new std test test_capi.py succeeds. The test passes just fine... nothing much there which could fail ;-) -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mal at lemburg.com Fri Feb 2 13:14:33 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 02 Feb 2001 13:14:33 +0100 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) References: 
                              
                              <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> <20010202130654.T962@xs4all.nl> Message-ID: <3A7AA4A9.56F54EFF@lemburg.com> Thomas Wouters wrote: > > On Fri, Feb 02, 2001 at 11:00:28AM +0100, M.-A. Lemburg wrote: > > > There would be a need for a PEP if we need to discuss APIs, > > interfaces, etc. but all this has already been done by Valdimir > > a long time ago. He put much effort into getting the Python > > malloc macros to work in the intended way so that pymalloc only > > has exchange these macro definitions. > > > I don't understand why we cannot take the risk of trying this > > out in an alpha version. Besides, Vladimir's malloc patch > > is opt-in: you have to compile Python using --with-pymalloc > > to enable it, so it doesn't really harm anyone not knowing what > > he/she is doing. > > +1 on putting it in, in alpha2 or beta1, on an opt-in basis. +0 on putting > it in *now* (alpha2, not beta1) and on by default. Anyone else for adding it now on an opt-in basis ? BTW, here is the URL to the pymalloc page: http://starship.python.net/~vlad/pymalloc/ -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mwh21 at cam.ac.uk Fri Feb 2 13:24:32 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 02 Feb 2001 12:24:32 +0000 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) In-Reply-To: "M.-A. Lemburg"'s message of "Fri, 02 Feb 2001 13:14:33 +0100" References: 
                              
                              <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> <20010202130654.T962@xs4all.nl> <3A7AA4A9.56F54EFF@lemburg.com> Message-ID: 
                              
                              "M.-A. Lemburg" 
                              
                              writes: > Thomas Wouters wrote: > > > > On Fri, Feb 02, 2001 at 11:00:28AM +0100, M.-A. Lemburg wrote: > > > > > There would be a need for a PEP if we need to discuss APIs, > > > interfaces, etc. but all this has already been done by Valdimir > > > a long time ago. He put much effort into getting the Python > > > malloc macros to work in the intended way so that pymalloc only > > > has exchange these macro definitions. > > > > > I don't understand why we cannot take the risk of trying this > > > out in an alpha version. Besides, Vladimir's malloc patch > > > is opt-in: you have to compile Python using --with-pymalloc > > > to enable it, so it doesn't really harm anyone not knowing what > > > he/she is doing. > > > > +1 on putting it in, in alpha2 or beta1, on an opt-in basis. +0 on putting > > it in *now* (alpha2, not beta1) and on by default. > > Anyone else for adding it now on an opt-in basis ? Yes. I also want to try adding it in and then scrapping the free list management done by ints, frames, etc. and seeing if it this results in any significant slowdown. Don't have time for another mega-benchmark just now though. Cheers, M. -- 3. Syntactic sugar causes cancer of the semicolon. -- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html From fredrik at pythonware.com Fri Feb 2 13:22:13 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Fri, 2 Feb 2001 13:22:13 +0100 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) References: 
                              
                              <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> <20010202130654.T962@xs4all.nl> <3A7AA4A9.56F54EFF@lemburg.com> Message-ID: <020501c08d12$c63c6b30$0900a8c0@SPIFF> mal wrote: > Anyone else for adding it now on an opt-in basis ? +1 from here. Cheers /F From thomas at xs4all.net Fri Feb 2 13:29:53 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Fri, 2 Feb 2001 13:29:53 +0100 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) In-Reply-To: 
                              
                              ; from mwh21@cam.ac.uk on Fri, Feb 02, 2001 at 12:24:32PM +0000 References: 
                              
                              <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> <20010202130654.T962@xs4all.nl> <3A7AA4A9.56F54EFF@lemburg.com> 
                              
                              Message-ID: <20010202132953.I922@xs4all.nl> On Fri, Feb 02, 2001 at 12:24:32PM +0000, Michael Hudson wrote: > > Anyone else for adding [pyobjmalloc] now on an opt-in basis ? > Yes. I also want to try adding it in and then scrapping the free list > management done by ints, frames, etc. and seeing if it this results in > any significant slowdown. Don't have time for another mega-benchmark > just now though. We could (and probably should) delay that for 2.2 anyway. Make pymalloc default on, and do some standardized benchmarking on a number of different platforms, with and without the typespecific freelists. -- Thomas Wouters 
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From mwh21 at cam.ac.uk Fri Feb 2 13:39:08 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 02 Feb 2001 12:39:08 +0000 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) In-Reply-To: Thomas Wouters's message of "Fri, 2 Feb 2001 13:29:53 +0100" References: 
                              
                              <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> <20010202130654.T962@xs4all.nl> <3A7AA4A9.56F54EFF@lemburg.com> 
                              
                              <20010202132953.I922@xs4all.nl> Message-ID: 
                              
                              Thomas Wouters 
                              
                              writes: > On Fri, Feb 02, 2001 at 12:24:32PM +0000, Michael Hudson wrote: > > > > Anyone else for adding [pyobjmalloc] now on an opt-in basis ? > > > Yes. I also want to try adding it in and then scrapping the free list > > management done by ints, frames, etc. and seeing if it this results in > > any significant slowdown. Don't have time for another mega-benchmark > > just now though. > > We could (and probably should) delay that for 2.2 anyway. Uhh, yes. I meant to say that too. Must remember to finish my posts... > Make pymalloc default on, and do some standardized benchmarking on a > number of different platforms, with and without the typespecific > freelists. Yes. This will take time, but is worthwhile, IMHO. Cheers, M. -- C is not clean -- the language has _many_ gotchas and traps, and although its semantics are _simple_ in some sense, it is not any cleaner than the assembly-language design it is based on. -- Erik Naggum, comp.lang.lisp From moshez at zadka.site.co.il Fri Feb 2 13:55:55 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Fri, 2 Feb 2001 14:55:55 +0200 (IST) Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) In-Reply-To: <3A7AA4A9.56F54EFF@lemburg.com> References: <3A7AA4A9.56F54EFF@lemburg.com>, 
                              
                              <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> <20010202130654.T962@xs4all.nl> Message-ID: <20010202125555.C81EDA840@darjeeling.zadka.site.co.il> On Fri, 02 Feb 2001 13:14:33 +0100, "M.-A. Lemburg" 
                              
                              wrote: > Anyone else for adding it now on an opt-in basis ? Add it on opt-out basis, and if it causes trouble, revert to opt-in in beta/final. Alphas are supposed to be buggy <0.7 wink> -- Moshe Zadka 
                              
                              This is a signature anti-virus. Please stop the spread of signature viruses! Fingerprint: 4BD1 7705 EEC0 260A 7F21 4817 C7FC A636 46D0 1BD6 From mwh21 at cam.ac.uk Fri Feb 2 14:15:14 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 02 Feb 2001 13:15:14 +0000 Subject: [Python-Dev] Showstopper in import? In-Reply-To: "Tim Peters"'s message of "Fri, 2 Feb 2001 05:35:16 -0500" References: 
                              
                              Message-ID: 
                              
                              "Tim Peters" 
                              
writes:
> > Percolator has "from x import *" code. This is what is causing the
> > exception.
> 
> Woo hoo!  The traceback bamboozled me: it doesn't show any code from
> Percolator.py, just the import in EditorWindow.py.  So I'll call *that* the
> bug <0.7 wink>.
> 
> > I think it has already been fixed in CVS though, so should
> > work again.
> 
> Doesn't work for me.  If someone does patch Percolator.py, though, it will
> just blow up again at
> 
>     from IOBinding import IOBinding
> 
> .  Guido was apparently fond of this trick.

I apologise if I'm explaining things people already know here, but I
can explain the weirdo tracebacks.  Try this:

>>> def f():
...     from string import *
...     pass
... 
SyntaxError: 'from ... import *' may only occur in a module scope
>>> 

you see?  No traceback at all!  This is a general feature of
exceptions raised by the compiler (as opposed to the parser).

>>> 21323124912094230491
OverflowError: integer literal too large
>>> 

(also using some name other than "as" in an "import as" statement,
invalid unicode \N{names}, various arglist nogos (eg. "def f(a=1,e):"),
assigning to an expression, ... the list goes on & is getting longer).

So what's happening is module A imports module B, which fails to
compile due to a non-module level "import *", but doesn't set up a
traceback, so the traceback points at the import statement in module A.
And as the exception message mentions import statements, everyone gets
confused.

The fix?  Presumably rigging compile.c:com_error to set up tracebacks
properly?  It looks like it *tries* to, but I don't know this area of
the code well enough to understand why it doesn't work.  Anyone?

Cheers,
M.

-- 
  same software, different verbosity settings (this one goes to eleven)
                                      -- the effbot on the martellibot


From thomas at xs4all.net Fri Feb 2 14:31:44 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 2 Feb 2001 14:31:44 +0100
Subject: [Python-Dev] Showstopper in import?
In-Reply-To:
                              
                              ; from mwh21@cam.ac.uk on Fri, Feb 02, 2001 at 01:15:14PM +0000 References: 
                              
                              
                              Message-ID: <20010202143144.U962@xs4all.nl> On Fri, Feb 02, 2001 at 01:15:14PM +0000, Michael Hudson wrote: [ Compiler exceptions (as opposed to runtime exceptions) sucks ] > The fix? Presumably rigging compile.c:com_error to set up tracebacks > properly? It looks like it *tries* to, but I don't know this area of > the code well enough to understand why it doesn't work. Anyone? Have you seen this ? http://sourceforge.net/patch/?func=detailpatch&patch_id=101782&group_id=5470 -- Thomas Wouters 
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From mwh21 at cam.ac.uk Fri Feb 2 14:37:39 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 02 Feb 2001 13:37:39 +0000 Subject: [Python-Dev] Showstopper in import? In-Reply-To: Thomas Wouters's message of "Fri, 2 Feb 2001 14:31:44 +0100" References: 
                              
                              
                              <20010202143144.U962@xs4all.nl> Message-ID: 
                              
                              Thomas Wouters 
                              
                              writes: > On Fri, Feb 02, 2001 at 01:15:14PM +0000, Michael Hudson wrote: > > [ Compiler exceptions (as opposed to runtime exceptions) sucks ] > > > The fix? Presumably rigging compile.c:com_error to set up tracebacks > > properly? It looks like it *tries* to, but I don't know this area of > > the code well enough to understand why it doesn't work. Anyone? > > Have you seen this ? > > http://sourceforge.net/patch/?func=detailpatch&patch_id=101782&group_id=5470 No, and it doesn't patch cleanly right now and I haven't got the time to sort that out just yet, but if it works, it should go in! Cheers, M. -- To summarise the summary of the summary:- people are a problem. -- The Hitch-Hikers Guide to the Galaxy, Episode 12 From mal at lemburg.com Fri Feb 2 14:58:05 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 02 Feb 2001 14:58:05 +0100 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) References: <3A7AA4A9.56F54EFF@lemburg.com>, 
                              
                              <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> <20010202130654.T962@xs4all.nl> <20010202125555.C81EDA840@darjeeling.zadka.site.co.il> Message-ID: <3A7ABCED.8435D5B7@lemburg.com> Moshe Zadka wrote: > > On Fri, 02 Feb 2001 13:14:33 +0100, "M.-A. Lemburg" 
                              
                              wrote: > > > Anyone else for adding it now on an opt-in basis ? > > Add it on opt-out basis, and if it causes trouble, revert to opt-in > in beta/final. Alphas are supposed to be buggy <0.7 wink> Ok, that makes +5 on including it, no negative response so far. We'll only have to sort out whether to make it opt-in (the current state of the patch) or opt-out. The latter would certainly enable better testing of the code, but I understand that Jeremy doesn't want to destabilize the release just now. Perhaps we'll need a third alpha release ?! (the weak reference implementation and the other goodies need much more testing IMHO than just one alpha cycle) -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From barry at digicool.com Fri Feb 2 15:13:22 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 2 Feb 2001 09:13:22 -0500 Subject: [Python-Dev] Showstopper in import? References: <3A7A776E.6ECC626E@lemburg.com> 
                              
                              Message-ID: <14970.49282.501200.102133@anthem.wooz.org> >>>>> "TP" == Tim Peters 
                              
writes:

    TP> Provided the case above is fixed, IDLE will indeed fail to
    TP> compile anyway, because Guido does
    TP>     from Tkinter import *
    TP> inside several functions.  But that's a different problem.

That will probably be the most common breakage in existing code.  I've
already `fixed' the one such occurrence in Tools/pynche.

gotta-love-alphas-ly y'rs,
-Barry


From fredrik at pythonware.com Fri Feb 2 15:14:30 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 2 Feb 2001 15:14:30 +0100
Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0))
References: <3A7AA4A9.56F54EFF@lemburg.com>,
                              
                              <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> <20010202130654.T962@xs4all.nl> <20010202125555.C81EDA840@darjeeling.zadka.site.co.il> <3A7ABCED.8435D5B7@lemburg.com> Message-ID: <000701c08d22$763911f0$0900a8c0@SPIFF> mal wrote: > We'll only have to sort out whether to make it opt-in (the > current state of the patch) or opt-out. The latter would > certainly enable better testing of the code, but I understand > that Jeremy doesn't want to destabilize the release just now. > > Perhaps we'll need a third alpha release ?! (the weak reference > implementation and the other goodies need much more testing > IMHO than just one alpha cycle) +1 on opt-out and an extra alpha to hammer on weak refs, nested namespaces, and pymalloc. +0 on pymalloc opt-in and no third alpha -1 on function attri, oops, to late. Cheers /F From barry at digicool.com Fri Feb 2 15:19:36 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 2 Feb 2001 09:19:36 -0500 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) References: 
                              
                              <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> Message-ID: <14970.49656.634425.131854@anthem.wooz.org> >>>>> "M" == M 
                              
                              writes: M> I don't understand why we cannot take the risk of trying this M> out in an alpha version. Logistically, we probably need BDFL pronouncement on this and if we're to get alpha2 out today, that won't happen in time. If we don't get the alpha out today, we probably will not get the first beta out by IPC9, and I think that's important too. So I'd be +1 on adding it opt-in for beta1, which would make the code available to all, and allow us the full beta cycle and 2.2 development cycle to do the micro benchmarks and evaluation for opt-out (or simply always on) in 2.2. -Barry From mal at lemburg.com Fri Feb 2 15:57:18 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 02 Feb 2001 15:57:18 +0100 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) References: 
                              
                              <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> <14970.49656.634425.131854@anthem.wooz.org> Message-ID: <3A7ACACE.679D372@lemburg.com> "Barry A. Warsaw" wrote: > > >>>>> "M" == M 
                              
                              writes: > > M> I don't understand why we cannot take the risk of trying this > M> out in an alpha version. > > Logistically, we probably need BDFL pronouncement on this and if we're > to get alpha2 out today, that won't happen in time. If we don't get > the alpha out today, we probably will not get the first beta out by > IPC9, and I think that's important too. With the recent additions of rather important changes I see the need for a third alpha, so getting the beta out for IPC9 will probably not work anyway. Let's get the alpha 2 out today and then add pymalloc to alpha 3. > So I'd be +1 on adding it opt-in for beta1, which would make the code > available to all, and allow us the full beta cycle and 2.2 development > cycle to do the micro benchmarks and evaluation for opt-out (or simply > always on) in 2.2. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From vladimir.marangozov at optimay.com Fri Feb 2 16:10:05 2001 From: vladimir.marangozov at optimay.com (Vladimir Marangozov) Date: Fri, 2 Feb 2001 16:10:05 +0100 Subject: [Python-Dev] A word from the author (was "pymalloc", was "fun", was "2.1 slowe r than 2.0") Message-ID: <4C99842BC5F6D411A6A000805FBBB199051F5B@ge0057exch01.micro.lucent.com> Hi all, [MAL] > >>>>> "M" == M 
                              
                              writes: > > M> I don't understand why we cannot take the risk of trying this > M> out in an alpha version. Because the risk (long-term) is kind of unknown. obmalloc works fine, and it speeds things up, yes, in most setups or circumstances. It gains that speed from the Python core "memory pattern", which is by far the dominant, no matter what the app is. Tim's statement about my profiling is kind of a guess (Hi Tim!) [Barry] > > Logistically, we probably need BDFL pronouncement on this and if we're > to get alpha2 out today, that won't happen in time. If we don't get > the alpha out today, we probably will not get the first beta out by > IPC9, and I think that's important too. > > So I'd be +1 on adding it opt-in for beta1, which would make the code > available to all, and allow us the full beta cycle and 2.2 development > cycle to do the micro benchmarks and evaluation for opt-out (or simply > always on) in 2.2. I'd say, opt-in for 2.1. No risk, enables profiling. My main reservation is about thread safety from extensions (but this could be dealt with at a later stage) + a couple of other minor things I have no time to explain right now. But by that time (2.2), I do plan to show up on a more regular basis. Phew! You guys have done a lot for 3 months. I'll need another three to catch up 
                              
                              . Cheers, Vladimir From skip at mojam.com Fri Feb 2 16:34:04 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 2 Feb 2001 09:34:04 -0600 (CST) Subject: [Python-Dev] creating __all__ in extension modules Message-ID: <14970.54124.352613.111534@beluga.mojam.com> I'm diving into adding __all__ lists to extension modules. My assumption is that since it is a much more deliberate decision to add a symbol to an extension module's module dict, that any key in the module's dict that doesn't begin with an underscore is to be exported. (This in contrast to Python modules where lots of cruft creeps in.) If you think this assumption is incorrect and some other approach is needed, speak now. Thanks, Skip From fredrik at effbot.org Fri Feb 2 16:54:13 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Fri, 2 Feb 2001 16:54:13 +0100 Subject: [Python-Dev] creating __all__ in extension modules References: <14970.54124.352613.111534@beluga.mojam.com> Message-ID: <034f01c08d30$65e5cec0$e46940d5@hagrid> Skip Montanaro wrote: > I'm diving into adding __all__ lists to extension modules. My assumption is > that since it is a much more deliberate decision to add a symbol to an > extension module's module dict, that any key in the module's dict that > doesn't begin with an underscore is to be exported. what's the point? doesn't from-import already do exactly that on C extensions?  From jeremy at alum.mit.edu Fri Feb 2 16:51:26 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 2 Feb 2001 10:51:26 -0500 (EST) Subject: [Python-Dev] Showstopper in import? In-Reply-To: 
                              
                              References: <3A7A8710.D8A51718@lemburg.com> 
                              
Message-ID: <14970.55166.436171.625668@w221.z064000254.bwi-md.dsl.cnc.net>

  MAL> Better issue a warning than raise an exception here !

  TP> If Jeremy can't generate correct code, a warning is too weak.

  MAL> So this is the price we pay for having nested scopes... :-(

  TP> I don't know.  It apparently is the state of the code at this
  TP> instant.

The code is complaining about 'from ... import *' with nested scopes,
because of a potential ambiguity:

    def f():
        from string import *
        def g(s):
            return strip(s)

It is unclear whether this code intends to use a global named strip or
to the name strip defined in f() by 'from string import *'.

It is possible, I'm sure, to complain about only those cases where free
variables exist in a nested scope and 'from ... import *' is used.  I
don't know if I will be able to modify the compiler so it complains
about *only* these cases in time for 2.1a2.

Jeremy

From fdrake at acm.org Fri Feb 2 16:48:27 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Fri, 2 Feb 2001 10:48:27 -0500 (EST)
Subject: [Python-Dev] Doc tree frozen for 2.1a2
Message-ID: <14970.54987.29292.178440@cj42289-a.reston1.va.home.com>

  The Doc/ tree in the Python CVS is frozen until Python 2.1a2 has been
released.  No further changes should be made in that part of the tree.

  -Fred

--
Fred L. Drake, Jr.
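A hypothetical 2.x-style sketch, not from the thread, of two rewrites
that avoid the ambiguity Jeremy describes: name what you import instead
of using 'from string import *' inside the function (the function names
below are made up for illustration):

    import string

    def f_qualified():
        def g(s):
            return string.strip(s)    # explicit module reference, no ambiguity
        return g

    from string import strip          # or bind just the name you need

    def f_toplevel():
        def g(s):
            return strip(s)           # strip is an ordinary module global here
        return g

    print f_qualified()("  spam  ")   # -> spam
    print f_toplevel()("  spam  ")    # -> spam

Either form leaves the compiler with nothing to guess about, so it
behaves the same with or without nested scopes.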
                              
                              PythonLabs at Digital Creations From jeremy at alum.mit.edu Fri Feb 2 16:54:42 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 2 Feb 2001 10:54:42 -0500 (EST) Subject: [Python-Dev] insertdict slower? In-Reply-To: 
                              
                              References: 
                              
                              Message-ID: <14970.55362.332519.654243@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "TP" == Tim Peters 
                              
                              writes: TP> [Jeremy] >> I was curious about what the DictCreation microbenchmark in >> pybench was slower (about 15%) with 2.1 than with 2.0. I ran >> both with profiling enabled (-pg, no -O) and see that insertdict >> is a fair bit slower in 2.1. Anyone with dict implementation >> expertise want to hazard a guess about this? TP> You don't need to be an expert for this one: just look at the TP> code! There's nothing to it, and not even a comment has changed TP> in insertdict since 2.0. I don't believe the profile. [...] TP> So you're looking at a buggy profiler, a buggy profiling TP> procedure, or a Cache Mystery (the catch-all excuse for anything TP> that's incomprehensible without HW-level monitoring tools). TP> [...] I wanted to be sure that some other change to the dictionary code didn't have the unintended consequence of slowing down insertdict. There is a real and measurable slowdown in MAL's DictCreation microbenchmark, which needs to be explained somehow. insertdict sounds more plausible than many other explanations. But I don't have any more time to think about this before the release. Jeremy From mal at lemburg.com Fri Feb 2 17:40:00 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 02 Feb 2001 17:40:00 +0100 Subject: [Python-Dev] Showstopper in import? References: <3A7A8710.D8A51718@lemburg.com> 
                              
                              <14970.55166.436171.625668@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <3A7AE2DF.A2D17129@lemburg.com> Jeremy Hylton wrote: > > MAL> Better issue a warning than raise an exception here ! > > TP> If Jeremy can't generate correct code, a warning is too weak. > > MAL> So this is the price we pay for having nested scopes... :-( > > TP> I don't know. It apparently is the state of the code at this > TP> instant. > > The code is complaining about 'from ... import *' with nested scopes, > because of a potential ambiguity: > > def f(): > from string import * > def g(s): > return strip(s) > > It is unclear whether this code intends to use a global named strip or > to the name strip defined in f() by 'from string import *'. The right thing to do in this situation is for Python to walk up the nested scopes and look for the "strip" symbol. > It is possible, I'm sure, to complain about only those cases where > free variables exist in a nested scope and 'from ... import *' is > used. I don't know if I will be able to modify the compiler so it > complains about *only* these cases in time for 2.1a2. Since this is backward compatible, wouldn't it suffice to simply use LOAD_GLOBAL for all nested scopes below the first scope which uses from ... import * ? -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From jeremy at alum.mit.edu Fri Feb 2 18:07:55 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 2 Feb 2001 12:07:55 -0500 (EST) Subject: [Python-Dev] Case sensitive import. In-Reply-To: 
                              
                              References: 
                              
                              Message-ID: <14970.59755.154176.579551@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "SDM" == Steven D Majewski 
                              
                              writes: SDM> I see from one of the comments on my patch #103459 that there SDM> is a history to this issue (patch #103154) SDM> I had assumed that renaming modules and possibly breaking SDM> existing code was not an option, but this seems to have been SDM> considered in the discussion on that earlier patch. SDM> Is there any consensus on how to deal with this ? SDM> I would *really* like to get SOME fix -- either my patch, or a SDM> renaming of FCNTL, TERMIOS, SOCKET, into the next release. Our plan is to remove all of these modules and move the constants they define into the modules that provide the interface. Fred has already removed SOCKET, since all the constants are defined in socket. I don't think we'll get to the others in time for 2.1a2. SDM> It's not clear to me whether the issues on other systems are SDM> the same. On mac-osx, the OS is BSD unix based and when using SDM> a unix file system, it's case sensitive. But the standard SDM> filesystem is Apple's HFS+, which is case preserving but case SDM> insensitive. ( That means that opening "abc" will succeed if SDM> there is a file named "abc", "ABC", "Abc" , "aBc" ... , but a SDM> directory listing will show "abc" ) SDM> I had guessed that the CHECK_IMPORT_CASE ifdefs and the SDM> corresponding configure switch were there for this sort of SDM> problem, and all I had to do was add a macosx implementation of SDM> check_case(), but returning false from check_case causes the SDM> search to fail -- it does not continue until it find a matching SDM> module. Guido is strongly opposed to continuing after check_case returns false. His explanation is that imports ought to work whether all the there are multiple directories on sys.path or all the files are copied into a single directory. Obviously on file systems like HFS+, it would be impossible to have FCNTL.py and fcntl.py be in the same directory. Jeremy From skip at mojam.com Fri Feb 2 18:14:51 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 2 Feb 2001 11:14:51 -0600 (CST) Subject: [Python-Dev] Showstopper in import? In-Reply-To: <3A7A8710.D8A51718@lemburg.com> References: 
                              
                              <3A7A8710.D8A51718@lemburg.com> Message-ID: <14970.60171.311859.92551@beluga.mojam.com> MAL> Even though I agree that "from x import *" is bad style, it is MAL> quite common in testing code or code which imports a set of symbols MAL> from generated modules or modules containing only constants MAL> e.g. for protocols, error codes, etc. In fact, the entire exercise of making "from x import *" obey __all__ when it's present is to at least reduce the "badness" of this bad style. Skip From skip at mojam.com Fri Feb 2 18:16:40 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 2 Feb 2001 11:16:40 -0600 (CST) Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) In-Reply-To: <3A7AA4A9.56F54EFF@lemburg.com> References: 
                              
                              <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> <20010202130654.T962@xs4all.nl> <3A7AA4A9.56F54EFF@lemburg.com> Message-ID: <14970.60280.654349.189487@beluga.mojam.com> MAL> Anyone else for adding it now on an opt-in basis ? +1 from me. Skip From sdm7g at virginia.edu Fri Feb 2 18:18:40 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Fri, 2 Feb 2001 12:18:40 -0500 (EST) Subject: [Python-Dev] Case sensitive import In-Reply-To: 
                              
                              Message-ID: 
                              
On Fri, 2 Feb 2001, Tim Peters wrote:

> I'd rather see the same rule used everywhere (keep going until finding an
> exact match), and tough beans to the person who writes
>
>     import String
>
> on Windows (or Mac) intending "string".  Windows probably still needs a
> unique wart to deal with case-destroying network filesystems, though.

I agree, and that's what my patch does for macosx.darwin (or any unixy
system that happens to have a filesystem with similar semantics -- if
there is any such beast.)

If the issues for windows are different (and it sounds like they are)
then I wanted to make sure (collectively) you were aware that this patch
could be addressed independently, rather than waiting on a resolution of
those other problems.

> It's still terrible style to *rely* on case-sensitivity in file names, and
> all such crap should be purged from the Python distribution regardless.

I agree.  However, even if we purged all only-case-differing file names,
without a patch on macosx, you still can crash python with a miscase
typo, as it'll try to import the same module twice under a different
name:

    >>> import cStringIO
    >>> import cstringio
    dyld: python2.0 multiple definitions of symbol _initcStringIO
    /usr/local/lib/python2.0/lib-dynload/cStringIO.so definition of _initcStringIO
    /usr/local/lib/python2.0/lib-dynload/cstringio.so definition of _initcStringIO

while with the patch, I get:

    ImportError: No module named cstringio

---| Steven D. Majewski (804-982-0831)
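A rough sketch of the kind of check being discussed -- hypothetical
code, not the code in patch #103495: on a case-preserving but
case-insensitive filesystem, compare the requested filename against the
directory listing so that a miscased import fails instead of loading the
same extension twice (the name case_ok is assumed here):

    import os

    def case_ok(directory, filename):
        # open() happily matches "cstringio.so" to cStringIO.so on HFS+,
        # but the directory listing preserves the stored spelling, so an
        # exact comparison catches the miscased request.
        try:
            return filename in os.listdir(directory or os.curdir)
        except os.error:
            return 0

    # e.g. case_ok('/usr/local/lib/python2.0/lib-dynload', 'cstringio.so')
    # should return 0 even though open() on that path would succeed.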
                              
                              |--- ---| Department of Molecular Physiology and Biological Physics |--- ---| University of Virginia Health Sciences Center |--- ---| P.O. Box 10011 Charlottesville, VA 22906-0011 |--- "All operating systems want to be unix, All programming languages want to be lisp." From mal at lemburg.com Fri Feb 2 18:19:20 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 02 Feb 2001 18:19:20 +0100 Subject: [Python-Dev] insertdict slower? References: 
                              
                              <14970.55362.332519.654243@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <3A7AEC18.BEA891B@lemburg.com> Jeremy Hylton wrote: > > >>>>> "TP" == Tim Peters 
                              
                              writes: > > TP> [Jeremy] > >> I was curious about what the DictCreation microbenchmark in > >> pybench was slower (about 15%) with 2.1 than with 2.0. I ran > >> both with profiling enabled (-pg, no -O) and see that insertdict > >> is a fair bit slower in 2.1. Anyone with dict implementation > >> expertise want to hazard a guess about this? > > TP> You don't need to be an expert for this one: just look at the > TP> code! There's nothing to it, and not even a comment has changed > TP> in insertdict since 2.0. I don't believe the profile. > > [...] > > TP> So you're looking at a buggy profiler, a buggy profiling > TP> procedure, or a Cache Mystery (the catch-all excuse for anything > TP> that's incomprehensible without HW-level monitoring tools). > TP> [...] > > I wanted to be sure that some other change to the dictionary code > didn't have the unintended consequence of slowing down insertdict. > There is a real and measurable slowdown in MAL's DictCreation > microbenchmark, which needs to be explained somehow. insertdict > sounds more plausible than many other explanations. But I don't have > any more time to think about this before the release. The benchmark uses integers as keys, so Fred's string optimization isn't used. Instead, PyObject_RichCompareBool() gets triggered and this probably causes the slowdown. You should notice a similar slowdown for all non-string keys. Since dictionaries only check for equality, perhaps we should tweak the rich compare implementation to provide a highly optimized implementation for this single case ?! -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From barry at digicool.com Fri Feb 2 18:23:55 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 2 Feb 2001 12:23:55 -0500 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) References: 
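A rough, hypothetical micro-benchmark in the spirit of pybench's
DictCreation test -- useful for checking MAL's theory that the slowdown
tracks the key type rather than insertdict itself (the build() helper
and the key sets are illustrative, not pybench code):

    import time

    def build(keys, reps=10000):
        t0 = time.clock()
        for i in xrange(reps):
            d = {}
            for k in keys:
                d[k] = None
        return time.clock() - t0

    int_keys = range(20)
    str_keys = map(str, int_keys)
    print "int keys:", build(int_keys)
    print "str keys:", build(str_keys)

If the 2.0-to-2.1 gap shows up for the int keys but not the string keys,
that points at the comparison path rather than insertdict.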
                              
                              <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> <14970.49656.634425.131854@anthem.wooz.org> <3A7ACACE.679D372@lemburg.com> Message-ID: <14970.60715.484580.346561@anthem.wooz.org> >>>>> "M" == M 
                              
                              writes: M> With the recent additions of rather important changes I see the M> need for a third alpha, so getting the beta out for IPC9 will M> probably not work anyway. M> Let's get the alpha 2 out today and then add pymalloc to alpha M> 3. It might be fun 
                              
, then to have a bof or devday discussion about some of the new
features.

bringing-my-asbestos-longjohns-ly y'rs,
-Barry

From skip at mojam.com Fri Feb 2 18:24:30 2001
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 2 Feb 2001 11:24:30 -0600 (CST)
Subject: [Python-Dev] creating __all__ in extension modules
In-Reply-To: <034f01c08d30$65e5cec0$e46940d5@hagrid>
References: <14970.54124.352613.111534@beluga.mojam.com>
        <034f01c08d30$65e5cec0$e46940d5@hagrid>
Message-ID: <14970.60750.570192.452062@beluga.mojam.com>

    Fredrik> what's the point?  doesn't from-import already do exactly that
    Fredrik> on C extensions?

Consider os.  At one point it does "from posix import *".  Okay, which
symbols now in its local namespace came from posix and which from its
own devices?  It's a lot easier to do

    from posix import __all__ as _all
    __all__.extend(_all)
    del _all

than to muck about importing posix, looping over its dict, then
incorporating what it finds.  It also makes things a bit more
consistent for introspective tools.

Skip

From sdm7g at virginia.edu Fri Feb 2 18:46:23 2001
From: sdm7g at virginia.edu (Steven D. Majewski)
Date: Fri, 2 Feb 2001 12:46:23 -0500 (EST)
Subject: [Python-Dev] Case sensitive import.
In-Reply-To: <14970.59755.154176.579551@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID:
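A hypothetical sketch of the rule Skip proposes for the extension-module
side (not actual stdlib code): treat every key in an extension module's
dict that doesn't start with an underscore as public.

    import posix

    def public_names(module):
        # any key in an extension module's dict that doesn't begin with
        # an underscore is considered a deliberate, public export
        return [name for name in module.__dict__.keys() if name[:1] != '_']

    print public_names(posix)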
                              
                              On Fri, 2 Feb 2001, Jeremy Hylton wrote: > > Our plan is to remove all of these modules and move the constants they > define into the modules that provide the interface. Fred has already > removed SOCKET, since all the constants are defined in socket. I > don't think we'll get to the others in time for 2.1a2. > > Guido is strongly opposed to continuing after check_case returns > false. His explanation is that imports ought to work whether all the > there are multiple directories on sys.path or all the files are copied > into a single directory. Obviously on file systems like HFS+, it > would be impossible to have FCNTL.py and fcntl.py be in the same > directory. This is in my previous message to the list, but since there seems to be (from my end, anyway) a long delay in list propagation, I'll repeat to you, Jeremy: The other problem is that without a patch, you can crash python with a mis-cased typo, as it tries to import the same module under two names: >>> import cStringIO >>> import cstringio dyld: python2.0 multiple definitions of symbol _initcStringIO /usr/local/lib/python2.0/lib-dynload/cStringIO.so definition of _initcStringIO /usr/local/lib/python2.0/lib-dynload/cstringio.so definition of _initcStringIO [ crash and burn back to shell prompt... ] instead of (with patch): >>> import cstringio Traceback (most recent call last): File "
                              
                              ", line 1, in ? ImportError: No module named cstringio >>> A .py module doesn't crash like a .so module, but it still yields two (or more) different modules for each case spelling, which could be the source of some pretty hard to find bugs when MyModule.val != mymodule.val. ( Which is a more innocent mistake than the person who actually writes two different files for MyModule.py and mymodule.py ! ) ---| Steven D. Majewski (804-982-0831) 
                              
                              |--- ---| Department of Molecular Physiology and Biological Physics |--- ---| University of Virginia Health Sciences Center |--- ---| P.O. Box 10011 Charlottesville, VA 22906-0011 |--- "All operating systems want to be unix, All programming languages want to be lisp." From skip at mojam.com Fri Feb 2 18:54:24 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 2 Feb 2001 11:54:24 -0600 (CST) Subject: [Python-Dev] Diamond x Jungle Carpet Python In-Reply-To: <20010202072422.6B673F4DD@mail.python.org> References: <20010202072422.6B673F4DD@mail.python.org> Message-ID: <14970.62544.580964.817325@beluga.mojam.com> Rod> I have several Diamond x Jungle Capret Pythons for SALE. Rod> Make me an offer.... I don't know Rod. Are they case-sensitive? What's their performance on regular expressions? Do they pass the 2.1a1 regression test suite? Have you been able to train them to understand function attributes? (Though the picture does show a lovely snake, I do believe you hit the wrong mailing list with your offer. The only python's we deal with here are the electronic programming language variety...) :-) -- Skip Montanaro (skip at mojam.com) Support Mojam & Musi-Cal: http://www.musi-cal.com/sponsor.shtml (847)971-7098 From skip at mojam.com Fri Feb 2 18:50:33 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 2 Feb 2001 11:50:33 -0600 (CST) Subject: [Python-Dev] Case sensitive import In-Reply-To: 
                              
                              References: 
                              
                              Message-ID: <14970.62313.653086.107554@beluga.mojam.com> Tim> It's still terrible style to *rely* on case-sensitivity in file Tim> names, and all such crap should be purged from the Python Tim> distribution regardless. Then the Python directory or the python executable should be renamed. I sense some deja vu here... anyone-for-a.out?-ly y'rs, Skip From fdrake at acm.org Fri Feb 2 18:56:27 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Fri, 2 Feb 2001 12:56:27 -0500 (EST) Subject: [Python-Dev] Python 2.1 alpha 2 docs released Message-ID: <14970.62667.518807.370544@cj42289-a.reston1.va.home.com> The documentation for the Python 2.1 alpha 2 release is now available. View it online at: http://python.sourceforge.net/devel-docs/ (This version will be updated as the documentation evolves, so will be updated beyond what's in the downloadable packages.) Downloadable packages in many formats are also available at: ftp://ftp.python.org/pub/python/doc/2.1a2/ Please avoid printing this documentation -- it's for the alpha, and could waste entire forests! Thanks to everyone who has helped improve the documentation! As always, suggestions and bug reports are welcome. For more instructions on how to file bug reports and where to send suggestions for improvement, see: http://python.sourceforge.net/devel-docs/about.html -Fred -- Fred L. Drake, Jr. 
                              
                              PythonLabs at Digital Creations From barry at digicool.com Fri Feb 2 19:34:59 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 2 Feb 2001 13:34:59 -0500 Subject: [Python-Dev] Case sensitive import. References: <14970.59755.154176.579551@w221.z064000254.bwi-md.dsl.cnc.net> 
                              
                              Message-ID: <14970.64979.584372.4671@anthem.wooz.org> Steve, I'm tasked with look at your patch for 2.1a2, and I have some questions and issues (since I'm just spinning up on this). First, what is the relationship of patch #103495 with the Cygwin patch #103154? They look like they address similar issues. Would you say that yours subsumes 103154, or at least will solve some of the problems jlt63 talks about in his patch? The other problem is that I do not have a Cygwin system to test on, and my wife isn't (yet :) psyched for me to do much debugging on her Mac (which doesn't have MacOSX on it yet). The best I can do is make sure your patch applies cleanly and doesn't break the Linux build. Would that work for you for 2.1a2? -Barry From sdm7g at virginia.edu Fri Feb 2 19:46:32 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Fri, 2 Feb 2001 13:46:32 -0500 (EST) Subject: [Python-Dev] Case sensitive import In-Reply-To: <14970.62313.653086.107554@beluga.mojam.com> Message-ID: 
                              
                              On Fri, 2 Feb 2001, Skip Montanaro wrote: > Tim> It's still terrible style to *rely* on case-sensitivity in file > Tim> names, and all such crap should be purged from the Python > Tim> distribution regardless. > > Then the Python directory or the python executable should be renamed. I > sense some deja vu here... > > anyone-for-a.out?-ly y'rs, I was going to suggest renaming the Python/ directory to Core/, but what happens when it tries to dump core ? -- Steve From barry at digicool.com Fri Feb 2 19:50:45 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 2 Feb 2001 13:50:45 -0500 Subject: [Python-Dev] Case sensitive import References: <14970.62313.653086.107554@beluga.mojam.com> 
                              
                              Message-ID: <14971.389.284504.519600@anthem.wooz.org> >>>>> "SDM" == Steven D Majewski 
                              
                              writes: SDM> I was going to suggest renaming the Python/ directory to SDM> Core/, but what happens when it tries to dump core ? Interpreter/ ?? 8-dot-3-ly y'rs, -Barry From barry at digicool.com Fri Feb 2 19:53:48 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 2 Feb 2001 13:53:48 -0500 Subject: [Python-Dev] Case sensitive import. References: <14970.59755.154176.579551@w221.z064000254.bwi-md.dsl.cnc.net> 
                              
                              <14970.64979.584372.4671@anthem.wooz.org> Message-ID: <14971.572.369273.721571@anthem.wooz.org> >>>>> "BAW" == Barry A Warsaw 
                              
                              writes: BAW> I'm tasked with look at your patch for 2.1a2, and I have some BAW> questions and issues (since I'm just spinning up on this). Steve, your patch is slightly broken for Linux (RH 6.1), which doesn't have a d_namelen slot in the struct dirent. I wormed around that by testing on #ifdef _DIRENT_HAVE_D_NAMLEN which appears to be the Linuxy way of determining the existance of this slot. If it's missing, I just strlen(dp->d_name). I'm doing a "make test" now and will test import of getpass to make sure it doesn't break on Linux. If it looks good, I'll upload a new version of the patch (which also contains consistent C style fixes) to SF and commit the patch for 2.1a2. -Barry From barry at digicool.com Fri Feb 2 20:05:40 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 2 Feb 2001 14:05:40 -0500 Subject: [Python-Dev] Case sensitive import. References: <14970.59755.154176.579551@w221.z064000254.bwi-md.dsl.cnc.net> 
                              
                              <14970.64979.584372.4671@anthem.wooz.org> <14971.572.369273.721571@anthem.wooz.org> Message-ID: <14971.1284.474393.800832@anthem.wooz.org> Patch passes regr test and import getpass on Linux, so I'm prepared to commit it for 2.1a2. Y'all are going to have to stress test it on other platforms. -Barry From sdm7g at virginia.edu Fri Feb 2 21:23:29 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Fri, 2 Feb 2001 15:23:29 -0500 (EST) Subject: [Python-Dev] Case sensitive import. In-Reply-To: <14971.1284.474393.800832@anthem.wooz.org> Message-ID: 
                              
                              On Fri, 2 Feb 2001, Barry A. Warsaw wrote: > Patch passes regr test and import getpass on Linux, so I'm prepared to > commit it for 2.1a2. Y'all are going to have to stress test it on > other platforms. Revised patch builds on macosx. 'make test' finds the same 4 unrelated errors it always gets on macosx, so it's not any worse than before. It passes my own test cases for this problem. ---| Steven D. Majewski (804-982-0831) 
                              
                              |--- ---| Department of Molecular Physiology and Biological Physics |--- ---| University of Virginia Health Sciences Center |--- ---| P.O. Box 10011 Charlottesville, VA 22906-0011 |--- "All operating systems want to be unix, All programming languages want to be lisp." From barry at digicool.com Fri Feb 2 21:23:58 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 2 Feb 2001 15:23:58 -0500 Subject: [Python-Dev] Case sensitive import. References: <14971.1284.474393.800832@anthem.wooz.org> 
                              
                              Message-ID: <14971.5982.164358.917299@anthem.wooz.org> Great, thanks Steve. Jeremy, go for it. -Barry From nas at arctrix.com Fri Feb 2 22:37:06 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Fri, 2 Feb 2001 13:37:06 -0800 Subject: [Python-Dev] Case sensitive import In-Reply-To: <14971.389.284504.519600@anthem.wooz.org>; from barry@digicool.com on Fri, Feb 02, 2001 at 01:50:45PM -0500 References: <14970.62313.653086.107554@beluga.mojam.com> 
                              
                              <14971.389.284504.519600@anthem.wooz.org> Message-ID: <20010202133706.A29820@glacier.fnational.com> On Fri, Feb 02, 2001 at 01:50:45PM -0500, Barry A. Warsaw wrote: > > >>>>> "SDM" == Steven D Majewski 
                              
                              writes: > > SDM> I was going to suggest renaming the Python/ directory to > SDM> Core/, but what happens when it tries to dump core ? > > Interpreter/ ?? If we do bite the bullet and make this change I vote for PyCore. Neil From sdm7g at virginia.edu Fri Feb 2 23:40:10 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Fri, 2 Feb 2001 17:40:10 -0500 (EST) Subject: [Python-Dev] Case sensitive import. In-Reply-To: <14970.64979.584372.4671@anthem.wooz.org> Message-ID: 
                              
                              I don't have Cygwin either and what's more, I don't do much with MS-Windows, so I'm not familiar with some of the functions called in that patch. HFS+ filesystem on MacOSX is case preserving but case insensitive, which means that open("File") succeeds for any of: "file","File","FILE" ... The dirent functions verifies that there is in fact a "File" in that directory, and if not continues the search. There was some discussion about whether it should be #ifdef-ed diferently or more specifically. I don't know if any other system than macosx or Cygwin (if it works on that platform) require that test. (Although I'm glad you got it to compile on Linux, since the other likely case I can think of is LinuxPPC with a mac filesystem.) I guess if it compiles, then it doesn't hurt, except for the extra overhead. ( But, since it continues looking for a match, I couldn't use the CHECK_IMPORT_CASE switch. ) -- Steve On Fri, 2 Feb 2001, Barry A. Warsaw wrote: > First, what is the relationship of patch #103495 with the Cygwin patch > #103154? They look like they address similar issues. Would you say > that yours subsumes 103154, or at least will solve some of the > problems jlt63 talks about in his patch? > > The other problem is that I do not have a Cygwin system to test on, > and my wife isn't (yet :) psyched for me to do much debugging on her > Mac (which doesn't have MacOSX on it yet). The best I can do is make > sure your patch applies cleanly and doesn't break the Linux build. > Would that work for you for 2.1a2? From fredrik at effbot.org Fri Feb 2 21:49:47 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Fri, 2 Feb 2001 21:49:47 +0100 Subject: [Python-Dev] Diamond x Jungle Carpet Python References: <20010202072422.6B673F4DD@mail.python.org> <14970.62544.580964.817325@beluga.mojam.com> Message-ID: <00c401c08d5b$090ed040$e46940d5@hagrid> Skip wrote: > (Though the picture does show a lovely snake, I do believe you hit the wrong > mailing list with your offer. The only python's we deal with here are the > electronic programming language variety...) he's spammed every single python list, and many python "celebrities". I got a bunch this morning (I'm obviously using too many mail aliases), and have gotten several daily-URL contributions from people who thought it was cute when they saw the *first* copy... Cheers /F From skip at mojam.com Fri Feb 2 23:07:43 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 2 Feb 2001 16:07:43 -0600 (CST) Subject: [Python-Dev] Waaay off topic, but I felt I had to share this... Message-ID: <14971.12207.566272.185258@beluga.mojam.com> Most of you know I have my feelers out looking for work. I've registered with a number of online job sites like Monster.com and Hotjobs.com. These sites allow you to set up "agents" that scan their database for new job postings that match your search criteria. Today I got an interesting "match" from Hotjobs.com's agent: ***Your Chicago Software agent yielded 1 jobs: 1. Vice President - Internet Technology Playboy Enterprises Inc. http://www.hotjobs.com/cgi-bin/job-show-mysql?J__PINDEX=J612497NR I wonder if they know something they're not telling me? Could it be that the chrome on my dome *is* actually a sign of virility? The job responsibilities sound interesting for someone about half my age: Research, design and direct the implementation of state-of-the-art applications and database technologies to support Playboy.com's products and services. I wonder how committed they are to research? 
Alas, they aren't looking for Python skills, so I'm not going to apply. Maybe I should hook them up with the guy selling snakes... Skip From skip at mojam.com Fri Feb 2 22:24:50 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 2 Feb 2001 15:24:50 -0600 (CST) Subject: [Python-Dev] Case sensitive import In-Reply-To: 
                              
                              References: <14970.62313.653086.107554@beluga.mojam.com> 
                              
                              Message-ID: <14971.9634.992818.225516@beluga.mojam.com> Steve> I was going to suggest renaming the Python/ directory to Core/, Steve> but what happens when it tries to dump core ? PyCore? There was a thread on this recently, and Guido nixed the idea of renaming anything, but I can't remember what his rationale was. Something about breaking build instructions somewhere? Skip From jeremy at alum.mit.edu Sat Feb 3 00:39:51 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 2 Feb 2001 18:39:51 -0500 (EST) Subject: [Python-Dev] Python 2.1 alpha 2 released Message-ID: <14971.17735.263154.15769@w221.z064000254.bwi-md.dsl.cnc.net> While Guido was working the press circuit at the LinuxWorld Expo in New York City, the Python developers, including the many volunteers and the folks from PythonLabs, were busy finishing the second alpha release of Python 2.1. The release is currently available from SourceForge and will also be available from python.org later today. You can find the source release at: http://sourceforge.net/project/showfiles.php?group_id=5470 The Windows installer will be ready shortly. Fred Drake announced the documentation release earlier today. You can browse the new docs online at http://python.sourceforge.net/devel-docs/ or download them from ftp://ftp.python.org/pub/python/doc/2.1a2/ Please give it a good try! The only way Python 2.1 can become a rock-solid product is if people test the alpha releases. If you are using Python for demanding applications or on extreme platforms, we are particularly interested in hearing your feedback. Are you embedding Python or using threads? Please test your application using Python 2.1a2! Please submit all bug reports through SourceForge: http://sourceforge.net/bugs/?group_id=5470 Here's the NEWS file: What's New in Python 2.1 alpha 2? ================================= Core language, builtins, and interpreter - Scopes nest. If a name is used in a function or class, but is not local, the definition in the nearest enclosing function scope will be used. One consequence of this change is that lambda statements could reference variables in the namespaces where the lambda is defined. In some unusual cases, this change will break code. In all previous version of Python, names were resolved in exactly three namespaces -- the local namespace, the global namespace, and the builtin namespace. According to this old definition, if a function A is defined within a function B, the names bound in B are not visible in A. The new rules make names bound in B visible in A, unless A contains a name binding that hides the binding in B. Section 4.1 of the reference manual describes the new scoping rules in detail. The test script in Lib/test/test_scope.py demonstrates some of the effects of the change. The new rules will cause existing code to break if it defines nested functions where an outer function has local variables with the same name as globals or builtins used by the inner function. Example: def munge(str): def helper(x): return str(x) if type(str) != type(''): str = helper(str) return str.strip() Under the old rules, the name str in helper() is bound to the builtin function str(). Under the new rules, it will be bound to the argument named str and an error will occur when helper() is called. - The compiler will report a SyntaxError if "from ... import *" occurs in a function or class scope. The language reference has documented that this case is illegal, but the compiler never checked for it. 
The recent introduction of nested scope makes the meaning of this form of name binding ambiguous. In a future release, the compiler may allow this form when there is no possibility of ambiguity. - repr(string) is easier to read, now using hex escapes instead of octal, and using \t, \n and \r instead of \011, \012 and \015 (respectively): >>> "\texample \r\n" + chr(0) + chr(255) '\texample \r\n\x00\xff' # in 2.1 '\011example \015\012\000\377' # in 2.0 - Functions are now compared and hashed by identity, not by value, since the func_code attribute is writable. - Weak references (PEP 205) have been added. This involves a few changes in the core, an extension module (_weakref), and a Python module (weakref). The weakref module is the public interface. It includes support for "explicit" weak references, proxy objects, and mappings with weakly held values. - A 'continue' statement can now appear in a try block within the body of a loop. It is still not possible to use continue in a finally clause. Standard library - mailbox.py now has a new class, PortableUnixMailbox which is identical to UnixMailbox but uses a more portable scheme for determining From_ separators. Also, the constructors for all the classes in this module have a new optional `factory' argument, which is a callable used when new message classes must be instantiated by the next() method. - random.py is now self-contained, and offers all the functionality of the now-deprecated whrandom.py. See the docs for details. random.py also supports new functions getstate() and setstate(), for saving and restoring the internal state of the generator; and jumpahead(n), for quickly forcing the internal state to be the same as if n calls to random() had been made. The latter is particularly useful for multi- threaded programs, creating one instance of the random.Random() class for each thread, then using .jumpahead() to force each instance to use a non-overlapping segment of the full period. - random.py's seed() function is new. For bit-for-bit compatibility with prior releases, use the whseed function instead. The new seed function addresses two problems: (1) The old function couldn't produce more than about 2**24 distinct internal states; the new one about 2**45 (the best that can be done in the Wichmann-Hill generator). (2) The old function sometimes produced identical internal states when passed distinct integers, and there was no simple way to predict when that would happen; the new one guarantees to produce distinct internal states for all arguments in [0, 27814431486576L). - The socket module now supports raw packets on Linux. The socket family is AF_PACKET. - test_capi.py is a start at running tests of the Python C API. The tests are implemented by the new Modules/_testmodule.c. - A new extension module, _symtable, provides provisional access to the internal symbol table used by the Python compiler. A higher-level interface will be added on top of _symtable in a future release. Windows changes - Build procedure: the zlib project is built in a different way that ensures the zlib header files used can no longer get out of synch with the zlib binary used. See PCbuild\readme.txt for details. Your old zlib-related directories can be deleted; you'll need to download fresh source for zlib and unpack it into a new directory. - Build: New subproject _test for the benefit of test_capi.py (see above). - Build: subproject ucnhash is gone, since the code was folded into the unicodedata subproject. What's New in Python 2.1 alpha 1? 
================================= Core language, builtins, and interpreter - There is a new Unicode companion to the PyObject_Str() API called PyObject_Unicode(). It behaves in the same way as the former, but assures that the returned value is an Unicode object (applying the usual coercion if necessary). - The comparison operators support "rich comparison overloading" (PEP 207). C extension types can provide a rich comparison function in the new tp_richcompare slot in the type object. The cmp() function and the C function PyObject_Compare() first try the new rich comparison operators before trying the old 3-way comparison. There is also a new C API PyObject_RichCompare() (which also falls back on the old 3-way comparison, but does not constrain the outcome of the rich comparison to a Boolean result). The rich comparison function takes two objects (at least one of which is guaranteed to have the type that provided the function) and an integer indicating the opcode, which can be Py_LT, Py_LE, Py_EQ, Py_NE, Py_GT, Py_GE (for <, <=, ==, !=, >, >=), and returns a Python object, which may be NotImplemented (in which case the tp_compare slot function is used as a fallback, if defined). Classes can overload individual comparison operators by defining one or more of the methods__lt__, __le__, __eq__, __ne__, __gt__, __ge__. There are no explicit "reflected argument" versions of these; instead, __lt__ and __gt__ are each other's reflection, likewise for__le__ and __ge__; __eq__ and __ne__ are their own reflection (similar at the C level). No other implications are made; in particular, Python does not assume that == is the Boolean inverse of !=, or that < is the Boolean inverse of >=. This makes it possible to define types with partial orderings. Classes or types that want to implement (in)equality tests but not the ordering operators (i.e. unordered types) should implement == and !=, and raise an error for the ordering operators. It is possible to define types whose rich comparison results are not Boolean; e.g. a matrix type might want to return a matrix of bits for A < B, giving elementwise comparisons. Such types should ensure that any interpretation of their value in a Boolean context raises an exception, e.g. by defining __nonzero__ (or the tp_nonzero slot at the C level) to always raise an exception. - Complex numbers use rich comparisons to define == and != but raise an exception for <, <=, > and >=. Unfortunately, this also means that cmp() of two complex numbers raises an exception when the two numbers differ. Since it is not mathematically meaningful to compare complex numbers except for equality, I hope that this doesn't break too much code. - Functions and methods now support getting and setting arbitrarily named attributes (PEP 232). Functions have a new __dict__ (a.k.a. func_dict) which hold the function attributes. Methods get and set attributes on their underlying im_func. It is a TypeError to set an attribute on a bound method. - The xrange() object implementation has been improved so that xrange(sys.maxint) can be used on 64-bit platforms. There's still a limitation that in this case len(xrange(sys.maxint)) can't be calculated, but the common idiom "for i in xrange(sys.maxint)" will work fine as long as the index i doesn't actually reach 2**31. (Python uses regular ints for sequence and string indices; fixing that is much more work.) 
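A small, hypothetical illustration of the protocol described in the
rich-comparison entry above: an "unordered" class that implements == and
!= but refuses the ordering operators (the Account class is made up for
illustration):

    class Account:
        def __init__(self, number):
            self.number = number
        def __eq__(self, other):
            return isinstance(other, Account) and self.number == other.number
        def __ne__(self, other):
            return not self.__eq__(other)
        def __lt__(self, other):
            raise TypeError("Accounts are not ordered")
        __le__ = __gt__ = __ge__ = __lt__

    a, b = Account(42), Account(42)
    print a == b        # 1
    print a != b        # 0
    # a < b now raises TypeError instead of falling back to an arbitrary ordering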
- Two changes to from...import: 1) "from M import X" now works even if M is not a real module; it's basically a getattr() operation with AttributeError exceptions changed into ImportError. 2) "from M import *" now looks for M.__all__ to decide which names to import; if M.__all__ doesn't exist, it uses M.__dict__.keys() but filters out names starting with '_' as before. Whether or not __all__ exists, there's no restriction on the type of M. - File objects have a new method, xreadlines(). This is the fastest way to iterate over all lines in a file: for line in file.xreadlines(): ...do something to line... See the xreadlines module (mentioned below) for how to do this for other file-like objects. - Even if you don't use file.xreadlines(), you may expect a speedup on line-by-line input. The file.readline() method has been optimized quite a bit in platform-specific ways: on systems (like Linux) that support flockfile(), getc_unlocked(), and funlockfile(), those are used by default. On systems (like Windows) without getc_unlocked(), a complicated (but still thread-safe) method using fgets() is used by default. You can force use of the fgets() method by #define'ing USE_FGETS_IN_GETLINE at build time (it may be faster than getc_unlocked()). You can force fgets() not to be used by #define'ing DONT_USE_FGETS_IN_GETLINE (this is the first thing to try if std test test_bufio.py fails -- and let us know if it does!). - In addition, the fileinput module, while still slower than the other methods on most platforms, has been sped up too, by using file.readlines(sizehint). - Support for run-time warnings has been added, including a new command line option (-W) to specify the disposition of warnings. See the description of the warnings module below. - Extensive changes have been made to the coercion code. This mostly affects extension modules (which can now implement mixed-type numerical operators without having to use coercion), but occasionally, in boundary cases the coercion semantics have changed subtly. Since this was a terrible gray area of the language, this is considered an improvement. Also note that __rcmp__ is no longer supported -- instead of calling __rcmp__, __cmp__ is called with reflected arguments. - In connection with the coercion changes, a new built-in singleton object, NotImplemented is defined. This can be returned for operations that wish to indicate they are not implemented for a particular combination of arguments. From C, this is Py_NotImplemented. - The interpreter accepts now bytecode files on the command line even if they do not have a .pyc or .pyo extension. On Linux, after executing echo ':pyc:M::\x87\xc6\x0d\x0a::/usr/local/bin/python:' > /proc/sys/fs/binfmt_misc/register any byte code file can be used as an executable (i.e. as an argument to execve(2)). - %[xXo] formats of negative Python longs now produce a sign character. In 1.6 and earlier, they never produced a sign, and raised an error if the value of the long was too large to fit in a Python int. In 2.0, they produced a sign if and only if too large to fit in an int. This was inconsistent across platforms (because the size of an int varies across platforms), and inconsistent with hex() and oct(). Example: >>> "%x" % -0x42L '-42' # in 2.1 'ffffffbe' # in 2.0 and before, on 32-bit machines >>> hex(-0x42L) '-0x42L' # in all versions of Python The behavior of %d formats for negative Python longs remains the same as in 2.0 (although in 1.6 and before, they raised an error if the long didn't fit in a Python int). 
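A minimal sketch of the warnings machinery described in the entry above
(the old_spam()/spam() functions are hypothetical, not stdlib code):

    import warnings

    def spam(n):
        return "spam " * n

    def old_spam(n):
        warnings.warn("old_spam() is deprecated; use spam() instead",
                      DeprecationWarning)
        return spam(n)

    print old_spam(2)
    # python -Wi silences the warning; python -We turns it into an error.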
%u formats don't make sense for Python longs, but are allowed and treated the same as %d in 2.1. In 2.0, a negative long formatted via %u produced a sign if and only if too large to fit in an int. In 1.6 and earlier, a negative long formatted via %u raised an error if it was too big to fit in an int. - Dictionary objects have an odd new method, popitem(). This removes an arbitrary item from the dictionary and returns it (in the form of a (key, value) pair). This can be useful for algorithms that use a dictionary as a bag of "to do" items and repeatedly need to pick one item. Such algorithms normally end up running in quadratic time; using popitem() they can usually be made to run in linear time. Standard library - In the time module, the time argument to the functions strftime, localtime, gmtime, asctime and ctime is now optional, defaulting to the current time (in the local timezone). - The ftplib module now defaults to passive mode, which is deemed a more useful default given that clients are often inside firewalls these days. Note that this could break if ftplib is used to connect to a *server* that is inside a firewall, from outside; this is expected to be a very rare situation. To fix that, you can call ftp.set_pasv(0). - The module site now treats .pth files not only for path configuration, but also supports extensions to the initialization code: Lines starting with import are executed. - There's a new module, warnings, which implements a mechanism for issuing and filtering warnings. There are some new built-in exceptions that serve as warning categories, and a new command line option, -W, to control warnings (e.g. -Wi ignores all warnings, -We turns warnings into errors). warnings.warn(message[, category]) issues a warning message; this can also be called from C as PyErr_Warn(category, message). - A new module xreadlines was added. This exports a single factory function, xreadlines(). The intention is that this code is the absolutely fastest way to iterate over all lines in an open file(-like) object: import xreadlines for line in xreadlines.xreadlines(file): ...do something to line... This is equivalent to the previous the speed record holder using file.readlines(sizehint). Note that if file is a real file object (as opposed to a file-like object), this is equivalent: for line in file.xreadlines(): ...do something to line... - The bisect module has new functions bisect_left, insort_left, bisect_right and insort_right. The old names bisect and insort are now aliases for bisect_right and insort_right. XXX_right and XXX_left methods differ in what happens when the new element compares equal to one or more elements already in the list: the XXX_left methods insert to the left, the XXX_right methods to the right. Code that doesn't care where equal elements end up should continue to use the old, short names ("bisect" and "insort"). - The new curses.panel module wraps the panel library that forms part of SYSV curses and ncurses. Contributed by Thomas Gellekum. - The SocketServer module now sets the allow_reuse_address flag by default in the TCPServer class. - A new function, sys._getframe(), returns the stack frame pointer of the caller. This is intended only as a building block for higher-level mechanisms such as string interpolation. Build issues - For Unix (and Unix-compatible) builds, configuration and building of extension modules is now greatly automated. 
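A tiny, hypothetical sketch of the "bag of to-do items" pattern the
popitem() entry above has in mind:

    todo = {'parse': 1, 'compile': 1, 'link': 1}
    while todo:
        task, dummy = todo.popitem()   # removes and returns an arbitrary (key, value) pair
        print "processing", task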
Rather than having to edit the Modules/Setup file to indicate which modules should be built and where their include files and libraries are, a distutils-based setup.py script now takes care of building most extension modules. All extension modules built this way are built as shared libraries. Only a few modules that must be linked statically are still listed in the Setup file; you won't need to edit their configuration. - Python should now build out of the box on Cygwin. If it doesn't, mail to Jason Tishler (jlt63 at users.sourceforge.net). - Python now always uses its own (renamed) implementation of getopt() -- there's too much variation among C library getopt() implementations. - C++ compilers are better supported; the CXX macro is always set to a C++ compiler if one is found. Windows changes - select module: By default under Windows, a select() call can specify no more than 64 sockets. Python now boosts this Microsoft default to 512. If you need even more than that, see the MS docs (you'll need to #define FD_SETSIZE and recompile Python from source). - Support for Windows 3.1, DOS and OS/2 is gone. The Lib/dos-8x3 subdirectory is no more! -- Jeremy Hylton 
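The sys._getframe() entry above mentions string interpolation as the
intended building block; a toy, hypothetical sketch of that idea
(interp() and greet() are made up for illustration):

    import sys

    def interp(template):
        # look names up in the caller's local namespace
        caller = sys._getframe(1)
        return template % caller.f_locals

    def greet(name):
        return interp("Hello, %(name)s!")

    print greet("world")    # -> Hello, world!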
                               From skip at mojam.com Sat Feb 3 02:10:11 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 2 Feb 2001 19:10:11 -0600 (CST) Subject: [Python-Dev] linuxaudiodev crashes Message-ID: <14971.23155.335303.830239@beluga.mojam.com> I've been getting this for awhile on my laptop (Mandrake 7.1): test test_linuxaudiodev crashed -- linuxaudiodev.error: (11, 'Resource temporarily unavailable') RealPlayer works fine so I suspect the infrastructure is there and functioning. Anyone else seeing this? Skip From dkwolfe at pacbell.net Sat Feb 3 02:08:43 2001 From: dkwolfe at pacbell.net (Dan Wolfe) Date: Fri, 02 Feb 2001 17:08:43 -0800 Subject: [Python-Dev] Case sensitive import In-Reply-To: 
                              
                              Message-ID: <0G8500859PMIQL@mta5.snfc21.pbi.net> It's been suggested (eg pyCore).... and shot down.... uhh, IIRC, due to "millions and millions of Python developers" (thanks Tim! 
                              
                              ) who don't want to change their directory structures and the fact that nobody wanted to lose the CVS log files/do the clean up... Alas, we gonna go around and around until we either decide to bite the bullet and "just do it" or allow a multitude of hacks to be put in place to work around the issue... it-may-be-painful-once-but-it's-a-lot-less-painful-than-a-thousand- times'ly yours, - Dan On Friday, February 2, 2001, at 10:46 AM, Steven D. Majewski wrote: > On Fri, 2 Feb 2001, Skip Montanaro wrote: > >> Tim> It's still terrible style to *rely* on case-sensitivity in >> file >> Tim> names, and all such crap should be purged from the Python >> Tim> distribution regardless. >> >> Then the Python directory or the python executable should be >> renamed. I >> sense some deja vu here... >> >> anyone-for-a.out?-ly y'rs, > > > I was going to suggest renaming the Python/ directory to Core/, > but what happens when it tries to dump core ? > > -- Steve From skip at mojam.com Sat Feb 3 03:09:45 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 2 Feb 2001 20:09:45 -0600 (CST) Subject: [Python-Dev] Setup.local is getting zapped Message-ID: <14971.26729.54529.333522@beluga.mojam.com> Modules/Setup.local is getting zapped by some aspect of the build process. Not sure by what step, but mine had lines I added to it a few days ago, and nothing now. It should be treated as Modules/Setup used to be: initialize it if it's absent, don't touch it if it's there. The distclean target looks like the culprit: distclean: clobber -rm -f Makefile Makefile.pre buildno config.status config.log \ config.cache config.h setup.cfg Modules/config.c \ Modules/Setup Modules/Setup.local Modules/Setup.config I've been using it a lot lately to build from scratch, what with the new Makefile and setup.py. Since Setup.local is ostensibly something a user would edit manually and would never have useful content in it as distributed, I don't think even distclean should zap it. Skip From guido at digicool.com Sat Feb 3 03:21:11 2001 From: guido at digicool.com (Guido van Rossum) Date: Fri, 02 Feb 2001 21:21:11 -0500 Subject: [Python-Dev] 2.1a2 released Message-ID: <200102030221.VAA09351@cj20424-a.reston1.va.home.com> I noticed that the source tarball and Windows installer were in place on SF and ftp.python.org, so I've updated the webpages python.org and python.org/2.1/. Seems email is wedged again so I don't know when people will get this email and if there was something to wait for -- I presume not. I'll mail an official announcement out tomorrow. Going to bed now...! --Guido van Rossum (home page: http://www.python.org/~guido/) From esr at thyrsus.com Sat Feb 3 03:25:28 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Fri, 2 Feb 2001 21:25:28 -0500 Subject: [Python-Dev] Re: Sets: elt in dict, lst.include In-Reply-To: <20010130092454.D18319@glacier.fnational.com>; from nas@arctrix.com on Tue, Jan 30, 2001 at 09:24:54AM -0800 References: <200101300206.VAA21925@cj20424-a.reston1.va.home.com> 
                              
                              <20010130092454.D18319@glacier.fnational.com> Message-ID: <20010202212528.D27105@thyrsus.com> Neil Schemenauer 
                              
:
> [Tim Peters on adding yet more syntactic sugar]
> > Available time is finite, and this isn't at the top of the list
> > of things I'd like to see (resuming the discussion of
> > generators + coroutines + iteration protocol comes to mind
> > first).
>
> What's the chances of getting generators into 2.2?  The
> implementation should not be hard.  Didn't Steven Majewski have
> something years ago?  Why do we always get sidetracked on trying
> to figure out how to do coroutines and continuations?
>
> Generators would add real power to the language and are simple
> enough that most users could benefit from them.  Also, it should be
> possible to design an interface that does not preclude the
> addition of coroutines or continuations later.

I agree.  I think this is a very important growth direction for the
language.
--
                              Eric S. Raymond The whole aim of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, all of them imaginary. -- H.L. Mencken From tim.one at home.com Sat Feb 3 04:38:42 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 2 Feb 2001 22:38:42 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: 
                              
                              Message-ID: 
                              
                              [Steven D. Majewski] > HFS+ filesystem on MacOSX is case preserving but case insensitive, Same as Windows. > which means that open("File") succeeds for any of: > "file","File","FILE" ... Ditto. > The dirent functions verifies that there is in fact a "File" in > that directory, and if not continues the search. Which is what Jeremy said Guido is "strongly opposed to": His explanation is that imports ought to work whether all the there are multiple directories on sys.path or all the files are copied into a single directory. Obviously on file systems like HFS+, it would be impossible to have FCNTL.py and fcntl.py be in the same directory. In effect, you wrote your own check_case under a different name that-- unlike all other versions of check_case --ignores case mismatches. As I said before, I don't care for the current rules (and find_module has become such an #ifdef'ed minefield I'm not sure it's possible to tell what it does *anywhere* anymore), but there's no difference here between Windows filesystems and HFS+, so for the sake of basic sanity they must work the same way. So a retroactive -1 on this last-second patch -- and a waaaaay retroactive -1 on Python's behavior on Windows too. From Jason.Tishler at dothill.com Sat Feb 3 04:14:58 2001 From: Jason.Tishler at dothill.com (Jason Tishler) Date: Fri, 2 Feb 2001 22:14:58 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: <14971.1284.474393.800832@anthem.wooz.org>; from barry@digicool.com on Fri, Feb 02, 2001 at 02:05:40PM -0500 References: <14970.59755.154176.579551@w221.z064000254.bwi-md.dsl.cnc.net> 
                              
                              <14970.64979.584372.4671@anthem.wooz.org> <14971.572.369273.721571@anthem.wooz.org> <14971.1284.474393.800832@anthem.wooz.org> Message-ID: <20010202221458.M1800@dothill.com> On Fri, Feb 02, 2001 at 02:05:40PM -0500, Barry A. Warsaw wrote: > Patch passes regr test and import getpass on Linux, so I'm prepared to > commit it for 2.1a2. Y'all are going to have to stress test it on > other platforms. [Sorry for chiming in late, but my family just had its own beta release... :,)] I will test this on Cygwin ASAP and report back to the list. I really appreciate the inclusion of this patch in 2.1a2. Thanks, Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler at dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From tim.one at home.com Sat Feb 3 06:11:11 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 00:11:11 -0500 Subject: [Python-Dev] Re: Sets: elt in dict, lst.include In-Reply-To: <3A788E96.AB823FAE@lemburg.com> Message-ID: 
                              
                              [MAL] > ... > Since iterators can define the order in which a data structure is > traversed, this would also do away with the second (supposed) > problem. Except we don't need iterators to do that. If anyone thought it was important, they could change the existing PyDict_Next to force an ordering, and then everything building on that would inherit it. So while I'm in favor of better iteration schemes, I'm not in favor of overselling them (on grounds that aren't unique to them). >> Sorry, but immutability has nothing to do with thread safety ... > Who said that an exception is raised ? I did 
                              
                              . > The method I posted on the mutability thread allows querying > the current state just like you would query the availability > of a resource. This? .mutable([flag]) -> integer If called without argument, returns 1/0 depending on whether the object is mutable or not. When called with a flag argument, sets the mutable state of the object to the value indicated by flag and returns the previous flag state. If I do: if object.mutable(): object.mutate() in a threaded world, the certain (but erratic) outcome is that sometimes it blows up: there's no guarantee that another thread doesn't sneak in and *change* the mutability between the time object.mutable() returns 1 and object.mutate() acts on a bad assumption. Same thing for: if resources.num_printers_available() > 0: action_that_blows_up_if_no_printers_are_available in a threaded world. It's possible to build a thread-safe resource acquisition protocol in either case, but that's really got nothing to do with immutability or iterators (marking a thing immutable doesn't do any good there unless you *also* build a protocol on top of it for communicating state changes, blocking until one occurs, notifications with optional timeouts, etc -- just doing object.mutable(1) is a threaded disaster in the absence of a higher-level protocol guaranteeing that this won't go changing the mutability state in the middle of some other thread's belief that it's got the thing frozen; likewise for object.mutable(0) not stepping on some other thread's belief that it's got permission to mutate). .mutable(flag) is *fine* for what it does, it's simply got nothing to do with threads. Thread safety could *build* on it via coordinated use of a threading.Sempahore (or moral equivalent), though. From tim.one at home.com Sat Feb 3 06:42:06 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 00:42:06 -0500 Subject: [Python-Dev] Re: Sets: elt in dict, lst.include In-Reply-To: <14968.37210.886842.820413@beluga.mojam.com> Message-ID: 
                              
                              [Skip Montanaro] > The problem that rolls around in the back of my mind from time-to-time > is that since Python doesn't currently support interfaces, checking > for specific methods seems to be the only reasonable way to determine > if a object does what you want or not. Except that-- alas! --"what I want" is almost always for it to respond to some specific methods. For example, I don't believe I've *ever* written a class that responds to all the "number" methods (in particular, I'm almost certain not to bother implementing a notion of "shift"). It's also rare for me to define a class that implements all the "sequence" or "mapping" methods. So if we had a Interface.Sequence, all my code would still check for individual sequence operations anyway. Take it to the extreme, and each method becomes an Interface unto itself, which then get grouped into collections in different ways by different people, and in the end I *still* check for specific methods rather than fight with umpteen competing hierarchies. The most interesting "interfaces" to me are things like EuclideanDomain: a set of guarantees about how methods *interact*, and almost nothing to do with which methods a thing supports. A simpler example is TotalOrdering: there is no method unique to total orderings, instead it's a guarantee about how cmp *behaves*. If you want know whether an object x supports slicing, *trying* x[:0] is as direct as it gets. You just hope that x isn't an instance of class Human: def __getslice__(self, lo, hi): """Return a list of activities planned for human self. lo and hi bound the timespan of activities to be returned, in seconds from the epoch. If lo is less than the birthdate of self, treat lo as if it were self's birthdate. If hi is greater than the expected lifetime of self, treat hi as if it were the expected lifetime of self, but also send an execution order to ensure that self does not live beyond that time (this may seem drastic, but the alternative was complaints from customers who exceeded their expected lifetimes, and then demanded to know why "the stupid software" cut off their calendars "early" -- hey, we'll implement infinite memory when humans are immortal). """ don't-think-it-hasn't-happened
                              
                              -ly y'rs - tim From tim.one at home.com Sat Feb 3 07:46:08 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 01:46:08 -0500 Subject: [Python-Dev] Case sensitive import In-Reply-To: <0G8500859PMIQL@mta5.snfc21.pbi.net> Message-ID: 
                              
                              [Dan Wolfe] > It's been suggested (eg pyCore).... and shot down.... uhh, IIRC, due > to "millions and millions of Python developers" (thanks Tim! 
                              
                              ) > who don't want to change their directory structures and the fact that > nobody wanted to lose the CVS log files/do the clean up... Don't thank me, thank Bill Gates for creating a wonderful operating system where I get to ignore *all* the 57-varieties-of-Unix build headaches. That's the only reason I'm so cheerful about supporting unbounded damage to everyone else. But, it's a good reason 
                              
                              . BTW, I didn't grok the CVS argument. You don't change the name of the directory, you change the name of the executable. And the obvious name is obvious to me: python.exe 
                              
                              . no-need-to-rewrite-history-ly y'rs - tim From tim.one at home.com Sat Feb 3 07:53:53 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 01:53:53 -0500 Subject: [Python-Dev] Generalized "from M. import X" was RE: Python 2.1 alpha 2 released) In-Reply-To: 
                              
                              Message-ID: 
                              
                              I'm trying to *use* each new feature at least once. It looks like a multiday project 
                              
                              . I remember reading the discussion about this one: [from (old!) NEWS] > ... > - Two changes to from...import: > > 1) "from M import X" now works even if M is not a real module; it's > basically a getattr() operation with AttributeError exceptions > changed into ImportError. but in practice it turns out I have no idea what it means. For example, >>> alist = [] >>> hasattr(alist, "sort") 1 >>> from alist import sort Traceback (most recent call last): File "
                              
                              ", line 1, in ? ImportError: No module named alist >>> No, I don't want to *do* that, but the description above makes me wonder what I'm missing. Or, something I *might* want to do (maybe, on my worst day, and on any other day I'd agree I should be shot for even considering it): class Random: def random(self): pass def seed(self): pass def betavariate(self): pass # etc etc _inst = Random() from _inst import random, seed, betavariate # etc, etc Again complains that there's no module named "_inst". So if M does not in fact need to be a real module, what *does* it need to be? Ah: sticking in sys.modules["alist"] = alist first does the (disgusting) trick. So, next gripe: I don't see anything about this in the 2.1a2 docs, although the Lang Ref's section on "the import statement" has always been vague enough to allow it. The missing piece: when the Lang Ref says something is "implementation and platform specific", where does one go to find out what the deal is for your particular implementation and platform? guess-not-to-NEWS
                              
                              -ly y'rs - tim From moshez at zadka.site.co.il Sat Feb 3 08:12:44 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Sat, 3 Feb 2001 09:12:44 +0200 (IST) Subject: [Python-Dev] Generalized "from M. import X" was RE: Python 2.1 alpha 2 released) In-Reply-To: 
                              
                              References: 
                              
                              Message-ID: <20010203071244.A1598A841@darjeeling.zadka.site.co.il> On Sat, 3 Feb 2001 01:53:53 -0500, "Tim Peters" 
                              
                              wrote: > >>> alist = [] > >>> hasattr(alist, "sort") > 1 > >>> from alist import sort > Traceback (most recent call last): > File "
                              
                              ", line 1, in ? > ImportError: No module named alist > >>> Tim, don't you remember to c.l.py discussions? >>> class Foo: ... pass ... >>> foo=Foo() >>> foo.foo = 'foo' >>> import sys >>> sys.modules['foo'] = foo >>> import foo >>> print foo.foo foo >>> from foo import foo >>> print foo foo >>> -- Moshe Zadka 
                              
                              This is a signature anti-virus. Please stop the spread of signature viruses! Fingerprint: 4BD1 7705 EEC0 260A 7F21 4817 C7FC A636 46D0 1BD6 From tim.one at home.com Sat Feb 3 08:42:05 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 02:42:05 -0500 Subject: [Python-Dev] Generalized "from M. import X" was RE: Python 2.1 alpha 2 released) In-Reply-To: <20010203071244.A1598A841@darjeeling.zadka.site.co.il> Message-ID: 
                              
                              [Moshe Zadka] > Tim, don't you remember to c.l.py discussions? Unclear whether I don't remember or haven't read them yet: I've got a bit over 800 unread msgs piled up from the last week! About 500 of them showed up since I awoke on Friday. The combo of python.org mail screwups and my ISP's mail screwups is making email life hell lately. and-misery-loves-company
                              
                              -ly y'rs - tim From tim.one at home.com Sat Feb 3 09:17:20 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 03:17:20 -0500 Subject: [Python-Dev] Perverse nesting bug Message-ID: 
                              
SF bug reporting is still impossible.  Little program:

    def f():
        print "outer f.a is", f.a
        def f():
            print "inner f.a is", f.a
        f.a = 666
        f()

    f.a = 42
    f()

I'm not sure what I expected it to do, but most likely an
UnboundLocalError (the local f hasn't been bound to yet at the time
"print outer" executes).  In reality it prints:

    outer f.a is

and then blows up with a null-pointer dereference, here:

    case LOAD_DEREF:
        x = freevars[oparg];
        w = PyCell_Get(x);
        Py_INCREF(w);    /***** THIS IS THE GUY *****/
        PUSH(w);
        break;

Simpler program with same symptom:

    def f():
        print "outer f.a is", f.a
        def f():
            print "inner f.a is", f.a

    f()

I *do* get an UnboundLocalError if the body of the inner "f" is
changed to "pass":

    def f():   # this one works fine!  i.e., UnboundLocalError
        print "outer f.a is", f.a
        def f():
            pass

    f()

You'll also be happy to know that this one prints 666 twice (as it
should):

    def f():
        def f():
            print "inner f.a is", f.a
        f.a = 666
        f()
        print "outer f.a is", f.a

    f.a = 42
    f()

python-gets-simpler-each-release
                              
                              -ly y'rs - tim From tim.one at home.com Sat Feb 3 09:48:01 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 03:48:01 -0500 Subject: [Python-Dev] Waaay off topic, but I felt I had to share this... In-Reply-To: <14971.12207.566272.185258@beluga.mojam.com> Message-ID: 
                              
                              [Skip Montanaro, whose ship has finally come in!] > ... > Today I got an interesting "match" from Hotjobs.com's agent: > > ***Your Chicago Software agent yielded 1 jobs: > > 1. Vice President - Internet Technology > Playboy Enterprises Inc. > http://www.hotjobs.com/cgi-bin/job-show-mysql?J__PINDEX=J612497NR > ... > I wonder how committed they are to research? Go for it! All communication technologies are driven by the need for delivering porn (you surely don't think Ford Motor Company funded streaming media research <0.7 link>). This inspired me to look at http://www.playboy.com/. A very fancy, media-rich website, that appears to have been coded by hand in Notepad by monkeys -- but monkeys with an inate sense of Pythonic indentation: // this is browser detect thingy browser=0 if(document.images) { browser=1; } // this is status message function function stat(words) { if(browser==1) { top.window.status=words; } } It's possible that they're not beyond hope, although they seem to think that horizontal space is precious while vertical abundant. > Alas, they aren't looking for Python skills, ... Only because they haven't met you! Guido would surely love to see "Python Powered" on a soft-core porn portal 
                              
                              . send-python-dev-the-cyber-club-password-after-you-start-ly y'rs - tim From mwh21 at cam.ac.uk Sat Feb 3 10:51:16 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 03 Feb 2001 09:51:16 +0000 Subject: [Python-Dev] Setup.local is getting zapped In-Reply-To: Skip Montanaro's message of "Fri, 2 Feb 2001 20:09:45 -0600 (CST)" References: <14971.26729.54529.333522@beluga.mojam.com> Message-ID: 
                              
                              Skip Montanaro 
                              
                              writes: > Modules/Setup.local is getting zapped by some aspect of the build process. > Not sure by what step, but mine had lines I added to it a few days ago, and > nothing now. It should be treated as Modules/Setup used to be: initialize > it if it's absent, don't touch it if it's there. > > The distclean target looks like the culprit: > > distclean: clobber > -rm -f Makefile Makefile.pre buildno config.status config.log \ > config.cache config.h setup.cfg Modules/config.c \ > Modules/Setup Modules/Setup.local Modules/Setup.config > > I've been using it a lot lately to build from scratch, what with the new > Makefile and setup.py. Since Setup.local is ostensibly something a user > would edit manually and would never have useful content in it as > distributed, I don't think even distclean should zap it. Eh? Surely "make distclean" is what you invoke before you tar up the src directory of a release, and so certainly should remove Setup.local. To do builds from scratch easily do things like: $ cd python/dist/src $ mkdir build $ cd build $ ../configure && make and then blow away the ./build directory as needed. This still tends to leave .pycs in Lib if you run make test, so I tend to use lndir to acheive a similar effect. Cheers, M. PS: Good sigmonster. -- 6. Symmetry is a complexity-reducing concept (co-routines include subroutines); seek it everywhere. -- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html From tim.one at home.com Sat Feb 3 11:44:35 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 05:44:35 -0500 Subject: [Python-Dev] insertdict slower? In-Reply-To: <14970.55362.332519.654243@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: 
                              
                              [Jeremy Hylton] > I wanted to be sure that some other change to the dictionary code > didn't have the unintended consequence of slowing down insertdict. Have you looked at insertdict? Again, nothing has changed in it since 2.0, and it's a simple little function anyway. Here it is in its entirety: static void insertdict(register dictobject *mp, PyObject *key, long hash, PyObject *value) { PyObject *old_value; register dictentry *ep; ep = (mp->ma_lookup)(mp, key, hash); if (ep->me_value != NULL) { old_value = ep->me_value; ep->me_value = value; Py_DECREF(old_value); /* which **CAN** re-enter */ Py_DECREF(key); } else { if (ep->me_key == NULL) mp->ma_fill++; else Py_DECREF(ep->me_key); ep->me_key = key; ep->me_hash = hash; ep->me_value = value; mp->ma_used++; } } There's not even a loop. Unless Py_DECREF got a lot slower, there's nothing at all time-consuming in inserdict proper. > There is a real and measurable slowdown in MAL's DictCreation > microbenchmark, which needs to be explained somehow. insertdict > sounds more plausible than many other explanations. Given the code above, and that it hasn't changed at all, do you still think it's plausible? At this point I can only repeat my last msg: perhaps your profiler is mistakenly charging the time for this line: ep = (mp->ma_lookup)(mp, key, hash); to insertdict; perhaps the profiler is plain buggy; perhaps you didn't measure what you think you did; perhaps it's an accidental I-cache conflict; all *I* can be sure of is that it's not due to any change in insertdict 
                              
                              . try-the-icache-trick-you-may-get-lucky-ly y'rs - tim From mal at lemburg.com Sat Feb 3 12:03:46 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Sat, 03 Feb 2001 12:03:46 +0100 Subject: [Python-Dev] insertdict slower? References: 
                              
                              Message-ID: <3A7BE592.872AE4C1@lemburg.com> Tim Peters wrote: > > [Jeremy Hylton] > > I wanted to be sure that some other change to the dictionary code > > didn't have the unintended consequence of slowing down insertdict. > > Have you looked at insertdict? Again, nothing has changed in it since 2.0, > and it's a simple little function anyway. Here it is in its entirety: > > static void > insertdict(register dictobject *mp, PyObject *key, long hash, PyObject > *value) > { > PyObject *old_value; > register dictentry *ep; > ep = (mp->ma_lookup)(mp, key, hash); > if (ep->me_value != NULL) { > old_value = ep->me_value; > ep->me_value = value; > Py_DECREF(old_value); /* which **CAN** re-enter */ > Py_DECREF(key); > } > else { > if (ep->me_key == NULL) > mp->ma_fill++; > else > Py_DECREF(ep->me_key); > ep->me_key = key; > ep->me_hash = hash; > ep->me_value = value; > mp->ma_used++; > } > } > > There's not even a loop. Unless Py_DECREF got a lot slower, there's nothing > at all time-consuming in inserdict proper. > > > There is a real and measurable slowdown in MAL's DictCreation > > microbenchmark, which needs to be explained somehow. insertdict > > sounds more plausible than many other explanations. > > Given the code above, and that it hasn't changed at all, do you still think > it's plausible? At this point I can only repeat my last msg: perhaps your > profiler is mistakenly charging the time for this line: > > ep = (mp->ma_lookup)(mp, key, hash); > > to insertdict; perhaps the profiler is plain buggy; perhaps you didn't > measure what you think you did; perhaps it's an accidental I-cache conflict; > all *I* can be sure of is that it's not due to any change in insertdict > 
                              
                              . It doesn't have anything to do with icache conflicts or other esoteric magic at dye-level. The reason for the slowdown is that the benchmark uses integers as keys and these have to go through the whole rich compare machinery to find their way into the dictionary. Please see my other post on the subject -- I think we need an optimized API especially for checking for equality. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mal at lemburg.com Sat Feb 3 12:13:43 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Sat, 03 Feb 2001 12:13:43 +0100 Subject: [Python-Dev] Re: Sets: elt in dict, lst.include References: 
                              
                              Message-ID: <3A7BE7E7.5AA90731@lemburg.com> Tim Peters wrote: > > [MAL] > > ... > > Since iterators can define the order in which a data structure is > > traversed, this would also do away with the second (supposed) > > problem. > > Except we don't need iterators to do that. If anyone thought it was > important, they could change the existing PyDict_Next to force an ordering, > and then everything building on that would inherit it. So while I'm in > favor of better iteration schemes, I'm not in favor of overselling them (on > grounds that aren't unique to them). I'm just trying to sell iterators to bare us the pain of overloading the for-loop syntax just to get faster iteration over dictionaries. The idea is simple: put all the lookup, order and item building code into the iterator, have many of them, one for each flavour of values, keys, items and honeyloops, and then optimize the for-loop/iterator interaction to get the best performance out of them. There's really not much use in adding *one* special case to for-loops when there are a gazillion different needs to iterate over data structures, files, socket, ports, coffee cups, etc. > >> Sorry, but immutability has nothing to do with thread safety ... > > > Who said that an exception is raised ? > > I did 
                              
                              . > > > The method I posted on the mutability thread allows querying > > the current state just like you would query the availability > > of a resource. > > This? > > .mutable([flag]) -> integer > > If called without argument, returns 1/0 depending on > whether the object is mutable or not. When called with a > flag argument, sets the mutable state of the object to > the value indicated by flag and returns the previous flag > state. > > If I do: > > if object.mutable(): > object.mutate() > > in a threaded world, the certain (but erratic) outcome is that sometimes it > blows up: there's no guarantee that another thread doesn't sneak in and > *change* the mutability between the time object.mutable() returns 1 and > object.mutate() acts on a bad assumption. I know. That's why you would do this: lock = [] # we use the mutable state as lock indicator; initial state is mutable # try to acquire lock: while 1: prevstate = lock.mutable(0) if prevstate == 0: # was already locked continue elif prevstate == 1: # we acquired the lock break # release lock lock.mutable(1) > Same thing for: > > if resources.num_printers_available() > 0: > action_that_blows_up_if_no_printers_are_available > > in a threaded world. It's possible to build a thread-safe resource > acquisition protocol in either case, but that's really got nothing to do > with immutability or iterators (marking a thing immutable doesn't do any > good there unless you *also* build a protocol on top of it for communicating > state changes, blocking until one occurs, notifications with optional > timeouts, etc -- just doing object.mutable(1) is a threaded disaster in the > absence of a higher-level protocol guaranteeing that this won't go changing > the mutability state in the middle of some other thread's belief that it's > got the thing frozen; likewise for object.mutable(0) not stepping on some > other thread's belief that it's got permission to mutate). > > .mutable(flag) is *fine* for what it does, it's simply got nothing to do > with threads. Thread safety could *build* on it via coordinated use of a > threading.Sempahore (or moral equivalent), though. Ok... :) -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From tim.one at home.com Sat Feb 3 12:57:02 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 06:57:02 -0500 Subject: [Python-Dev] insertdict slower? In-Reply-To: <3A7BE592.872AE4C1@lemburg.com> Message-ID: 
                              
                              [MAL] > It doesn't have anything to do with icache conflicts or > other esoteric magic at dye-level. The reason for the slowdown > is that the benchmark uses integers as keys and these have to > go through the whole rich compare machinery to find their way into > the dictionary. But insertdict doesn't do any compares at all (besides C pointer comparison to NULL). There's more than one mystery here. The one I was addressing is why the profiler said *insertdict* got slower. Jeremy's profile did not give any reason to suspect that lookdict got slower (which is where the compares are); to the contrary, it said lookdict got 4.5% *faster* in 2.1. > Please see my other post on the subject -- I think we need > an optimized API especially for checking for equality. Quite possibly, but if Jeremy remains keen to help with investigating timing puzzles, he needs to figure out why his profiling approach is pointing him at the wrong functions. That has long-term value far beyond patching the regression du jour. it's-not-either/or-it's-both-ly y'rs -tim From mal at lemburg.com Sat Feb 3 13:23:54 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Sat, 03 Feb 2001 13:23:54 +0100 Subject: [Python-Dev] insertdict slower? References: 
                              
                              Message-ID: <3A7BF85A.FDCC7854@lemburg.com> Tim Peters wrote: > > [MAL] > > It doesn't have anything to do with icache conflicts or > > other esoteric magic at dye-level. The reason for the slowdown > > is that the benchmark uses integers as keys and these have to > > go through the whole rich compare machinery to find their way into > > the dictionary. > > But insertdict doesn't do any compares at all (besides C pointer comparison > to NULL). There's more than one mystery here. The one I was addressing is > why the profiler said *insertdict* got slower. Jeremy's profile did not > give any reason to suspect that lookdict got slower (which is where the > compares are); to the contrary, it said lookdict got 4.5% *faster* in 2.1. > > > Please see my other post on the subject -- I think we need > > an optimized API especially for checking for equality. > > Quite possibly, but if Jeremy remains keen to help with investigating timing > puzzles, he needs to figure out why his profiling approach is pointing him > at the wrong functions. That has long-term value far beyond patching the > regression du jour. > > it's-not-either/or-it's-both-ly y'rs -tim Ok, let's agree on "it's both" :) I was referring to the slowdown which shows up in the DictCreation benchmark. It uses bundles of these operations to check for dictionary creation speed: d1 = {} d2 = {} d3 = {} d4 = {} d5 = {} d1 = {1:2,3:4,5:6} d2 = {2:3,4:5,6:7} d3 = {3:4,5:6,7:8} d4 = {4:5,6:7,8:9} d5 = {6:7,8:9,10:11} Note that the number of inserted items is 3; the minimum size of the allocated table is 4. Apart from the initial allocation of the dictionary table, no further resizes are done. One of the micro-optimizations which I used in my patched Python version deals with these rather common situations: small dictionaries are inlined (up to a certain size) in the object itself rather than stored in a separatly malloced table. I found that a limit of 8 slots gives you the best ratio between performance boost and memory overhead. This is another area where Valdimir's pymalloc could help, since it favours small memory chunks. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From tim.one at home.com Sat Feb 3 14:15:17 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 08:15:17 -0500 Subject: [Python-Dev] insertdict slower? In-Reply-To: 
                              
                              Message-ID: 
                              
                              [Tim] > ... to the contrary, it said lookdict got 4.5% *faster* in 2.1. Ack, I was reading the wrong column. It actually said that lookdict went from 0.48 to 0.49 seconds, while insertdict went from 0.20 to 0.26. http://mail.python.org/pipermail/python-dev/2001-February/012428.html Whatever, the profile isn't pointing at things that make sense, and is pointing at things that don't. Then again, why anyone would believe any output from a computer program is beyond me 
                              
                              . needs-sleep-ly y'rs - tim PS: Sorry to say it, but rich comparisons have nothing to do with this either! Run your dict creation test under a debugger and watch it -- the rich compares never get called. The basic reason is that hash(i) == i for all Python ints i (except for -1, but you're not using that). So the hash codes in your dict creation test are never equal. But there's never a reason to call a "real compare" unless you hit a case where the hash codes *are* equal. The latter never happens, so neither does the former. The insert either finds an empty slot at once (& so returns immediately), or collides. But in the latter case, as soon as it sees that ep->me_hash != hash, it just moves on the next slot in the probe sequence; and so until it does find an empty slot. From mal at lemburg.com Sat Feb 3 14:47:20 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Sat, 03 Feb 2001 14:47:20 +0100 Subject: [Python-Dev] insertdict slower? References: 
                              
                              Message-ID: <3A7C0BE8.A0109F5D@lemburg.com> Tim Peters wrote: > > [Tim] > > ... to the contrary, it said lookdict got 4.5% *faster* in 2.1. > > Ack, I was reading the wrong column. It actually said that lookdict went > from 0.48 to 0.49 seconds, while insertdict went from 0.20 to 0.26. > > http://mail.python.org/pipermail/python-dev/2001-February/012428.html > > Whatever, the profile isn't pointing at things that make sense, and is > pointing at things that don't. > > Then again, why anyone would believe any output from a computer program is > beyond me 
                              
                              . Looks like Jeremy's machine has a problem or this is the result of different compiler optimizations. On my machine using the same compiler and optimization settings I get the following figure for DictCreation (2.1a1 vs. 2.0): DictCreation: 1869.35 ms 12.46 us +8.77% That's below noise level (+/-10%). > needs-sleep-ly y'rs - tim > > PS: Sorry to say it, but rich comparisons have nothing to do with this > either! Run your dict creation test under a debugger and watch it -- the > rich compares never get called. The basic reason is that hash(i) == i for > all Python ints i (except for -1, but you're not using that). So the hash > codes in your dict creation test are never equal. But there's never a > reason to call a "real compare" unless you hit a case where the hash codes > *are* equal. The latter never happens, so neither does the former. The > insert either finds an empty slot at once (& so returns immediately), or > collides. But in the latter case, as soon as it sees that ep->me_hash != > hash, it just moves on the next slot in the probe sequence; and so until it > does find an empty slot. Hmm, seemed like a natural explanation from looking at the code. So now we have two different explanations for a non-existing problem ;-) I'll rerun the benchmark for 2.1a2 and let you know of the findings. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From skip at mojam.com Sat Feb 3 16:04:08 2001 From: skip at mojam.com (Skip Montanaro) Date: Sat, 3 Feb 2001 09:04:08 -0600 (CST) Subject: [Python-Dev] Setup.local is getting zapped In-Reply-To: 
                              
                              References: <14971.26729.54529.333522@beluga.mojam.com> 
                              
                              Message-ID: <14972.7656.829356.566021@beluga.mojam.com> Michael> Eh? Surely "make distclean" is what you invoke before you tar Michael> up the src directory of a release, and so certainly should Michael> remove Setup.local. Yeah, I realize that now. I should probably have been executing make clobber. Michael> This still tends to leave .pycs in Lib if you run make test, so Michael> I tend to use lndir to acheive a similar effect. Make distclean doesn't remove the pyc's or Emacs backup files. Those omissions seem to be a bug. Makefile-meister Neal? Skip From barry at digicool.com Sat Feb 3 16:50:33 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Sat, 3 Feb 2001 10:50:33 -0500 Subject: [Python-Dev] Case sensitive import References: <0G8500859PMIQL@mta5.snfc21.pbi.net> 
                              
Message-ID: <14972.10441.479316.919937@anthem.wooz.org>

>>>>> "TP" == Tim Peters
                              
writes:

    TP> Don't thank me, thank Bill Gates for creating a wonderful
    TP> operating system where I get to ignore *all* the
    TP> 57-varieties-of-Unix build headaches.

And thank goodness for Un*x, where I get to ignore all the 69
different varieties of The One True Operating System -- all Windows
OSes are created equal, right? :)

    TP> BTW, I didn't grok the CVS argument.  You don't change the
    TP> name of the directory, you change the name of the executable.
    TP> And the obvious name is obvious to me: python.exe
                              
                              . Even a Un*x dweeb like myself would agree, if you have to change one of them, change the executable. I see no reason why on Un*x the build procedure couldn't drop a symlink from python.exe to python for backwards compatibility and convenience. -Barry From barry at digicool.com Sat Feb 3 16:55:38 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Sat, 3 Feb 2001 10:55:38 -0500 Subject: [Python-Dev] Case sensitive import. References: 
                              
                              
                              Message-ID: <14972.10746.34425.26722@anthem.wooz.org> >>>>> "TP" == Tim Peters 
                              
                              writes: TP> So a retroactive -1 on this last-second patch -- and a waaaaay TP> retroactive -1 on Python's behavior on Windows too. So, let's tease out what the Right solution would be, and then see how close or if we can get there for 2.1. I've no clue what behavior Mac and Windows users would /like/ to see -- what would be most natural for them? OTOH, I like the Un*x behavior and I think I'd want to see platforms like Cygwin and MacOSX-on-non-HFS+ get as close to that as possible. Is it better to have uniform behavior across all platforms (modulo places like some Windows network fs's where that may not be possible)? Should Python's import semantics be identical across all platforms? OTOH, this is where the rubber meets the road so to speak, so some incompatibilities may be impossible to avoid. And what about Jython? -Barry From Jason.Tishler at dothill.com Sat Feb 3 17:02:58 2001 From: Jason.Tishler at dothill.com (Jason Tishler) Date: Sat, 3 Feb 2001 11:02:58 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: <14971.1284.474393.800832@anthem.wooz.org>; from barry@digicool.com on Fri, Feb 02, 2001 at 02:05:40PM -0500 References: <14970.59755.154176.579551@w221.z064000254.bwi-md.dsl.cnc.net> 
                              
                              <14970.64979.584372.4671@anthem.wooz.org> <14971.572.369273.721571@anthem.wooz.org> <14971.1284.474393.800832@anthem.wooz.org> Message-ID: <20010203110258.N1800@dothill.com> Barry, On Fri, Feb 02, 2001 at 02:05:40PM -0500, Barry A. Warsaw wrote: > Patch passes regr test and import getpass on Linux, so I'm prepared to > commit it for 2.1a2. Y'all are going to have to stress test it on > other platforms. This patch works properly under Cygwin too. The regression tests yield the same results as before and "import getpass" now behaves the same as on UNIX. Thanks, Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler at dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From fredrik at effbot.org Sat Feb 3 17:07:24 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Sat, 3 Feb 2001 17:07:24 +0100 Subject: [Python-Dev] Case sensitive import References: <0G8500859PMIQL@mta5.snfc21.pbi.net>
                              
                              <14972.10441.479316.919937@anthem.wooz.org> Message-ID: <001201c08dfb$668f9f10$e46940d5@hagrid> barry wrote: > Even a Un*x dweeb like myself would agree, if you have to change one > of them, change the executable. I see no reason why on Un*x the build > procedure couldn't drop a symlink from python.exe to python for > backwards compatibility and convenience. real Unix users will probably not care, but what do you think the Linux kiddies will think about Python when they find evil-empire- style executables in the build directory? Cheers /F From nas at arctrix.com Sat Feb 3 18:21:24 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Sat, 3 Feb 2001 09:21:24 -0800 Subject: [Python-Dev] Setup.local is getting zapped In-Reply-To: <14972.7656.829356.566021@beluga.mojam.com>; from skip@mojam.com on Sat, Feb 03, 2001 at 09:04:08AM -0600 References: <14971.26729.54529.333522@beluga.mojam.com> 
                              
                              <14972.7656.829356.566021@beluga.mojam.com> Message-ID: <20010203092124.A30977@glacier.fnational.com> On Sat, Feb 03, 2001 at 09:04:08AM -0600, Skip Montanaro wrote: > Make distclean doesn't remove the pyc's or Emacs backup files. Those > omissions seem to be a bug. Makefile-meister Neal? Yup, its a bug. Here is the story now: clean all object files and compilied .py files clobber everything clean does plus executables, libraries, and tag files distclean: everything clobber does plus makefiles, generated .c files, configure files, Setup files, and lots of other crud that make did not actually generate (core, *~, *.orig, etc). I'm not sure this matches what people expect these targets to do. I expect that "make clean" will be less used now that the makefile usually does the right thing. I removed Makefile.in, Demo/Makefile, Grammar/Makefile.in, Include/Makefile, Lib/Makefile, Misc/Makefile, Modules/Makefile.pre.in, Objects/Makefile.in, Parser/Makefile.in, and Python/Makefile.in as they are no longer used. Neil From tim.one at home.com Sat Feb 3 21:15:31 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 15:15:31 -0500 Subject: [Python-Dev] Case sensitive import In-Reply-To: <14972.10441.479316.919937@anthem.wooz.org> Message-ID: 
                              
                              [Barry A. Warsaw] > And thank goodness for Un*x, where I get to ignore all the 69 > different varieties of The One True Operating System -- all Windows > OSes are created equal, right? :) Yes, and every one of them perfect, albeit each in its own unique way 
                              
                              . I wouldn't wish it on anyone, but, in the end, even you would have rather done the Win64 port from scratch than try to close the HPUX bugs still open. Heh heh. > Even a Un*x dweeb like myself would agree, if you have to change one > of them, change the executable. I see no reason why on Un*x the build > procedure couldn't drop a symlink from python.exe to python for > backwards compatibility and convenience. Of course I wasn't serious about that. To judge from a decade of Unix-newbie postings to c.l.py, we should rename the executable there to phyton. That's what they think the language is named anyway. always-eager-to-aid-my-unixoid-brethren-ly y'rs - tim From bckfnn at worldonline.dk Sat Feb 3 21:15:38 2001 From: bckfnn at worldonline.dk (Finn Bock) Date: Sat, 03 Feb 2001 20:15:38 GMT Subject: [Python-Dev] Case sensitive import. In-Reply-To: <14972.10746.34425.26722@anthem.wooz.org> References: 
                              
                              
                              <14972.10746.34425.26722@anthem.wooz.org> Message-ID: <3a7c66be.37678038@smtp.worldonline.dk> [Barry] >So, let's tease out what the Right solution would be, and then see how >close or if we can get there for 2.1. I've no clue what behavior Mac >and Windows users would /like/ to see -- what would be most natural >for them? OTOH, I like the Un*x behavior and I think I'd want to see >platforms like Cygwin and MacOSX-on-non-HFS+ get as close to that as >possible. > >Is it better to have uniform behavior across all platforms (modulo >places like some Windows network fs's where that may not be possible)? >Should Python's import semantics be identical across all platforms? >OTOH, this is where the rubber meets the road so to speak, so some >incompatibilities may be impossible to avoid. > >And what about Jython? Jython only does a File().exists() (which is similar to a stat()). So on WinNT, jython is behaving wrongly: Jython 2.0 on java1.3.0 (JIT: null) Type "copyright", "credits" or "license" for more information. >>> import stringio >>> stringio.__file__ 'I:\\java\\Jython.CVS\\Lib\\stringio.py' >>> Yet I can't remember any bug reports where this have caused problems. regards, finn From hughett at mercur.uphs.upenn.edu Sat Feb 3 21:40:22 2001 From: hughett at mercur.uphs.upenn.edu (Paul Hughett) Date: Sat, 3 Feb 2001 15:40:22 -0500 Subject: [Python-Dev] Setup.local is getting zapped In-Reply-To: <20010203092124.A30977@glacier.fnational.com> (message from Neil Schemenauer on Sat, 3 Feb 2001 09:21:24 -0800) References: <14971.26729.54529.333522@beluga.mojam.com> 
                              
                              <14972.7656.829356.566021@beluga.mojam.com> <20010203092124.A30977@glacier.fnational.com> Message-ID: <200102032040.PAA04977@mercur.uphs.upenn.edu> Neil Schemenauer says: > Here is the story now: > clean > all object files and compilied .py files > clobber > everything clean does plus executables, libraries, and > tag files > distclean: > everything clobber does plus makefiles, generated .c > files, configure files, Setup files, and lots of other > crud that make did not actually generate (core, *~, > *.orig, etc). I usually use two or three targets, as follows: clean Delete all the objects, executables, libraries, tag files, etc that are normally generated by make all. Don't touch the Makefile, etc. that are generated by ./configure. This is more or less Neil's clean and clobber taken together; I've never had much need to delete object files but not executables. distclean Delete all the files that didn't come with the distribution tarball; that is, all the files that make clean removes, plus the Makefile, config.cache, etc. However, try not to clobber random files and notes made by the user and not closely related to the package. realclean Delete all the files that could be regenerated from other files, even if they're normally included in the distribution tarball; e.g configure, the PDF file containing the installation instructions, etc. This target is unnecessary in many packages. I'm not going to try to argue that this is the only Right Way(tm), but it has worked well for me, and gives a reasonably clear criterion for deciding which file should get deleted at each level. Paul Hughett From fredrik at pythonware.com Sat Feb 3 21:45:55 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Sat, 3 Feb 2001 21:45:55 +0100 Subject: [Python-Dev] Case sensitive import. References: 
                              
                              
                              <14972.10746.34425.26722@anthem.wooz.org> <3a7c66be.37678038@smtp.worldonline.dk> Message-ID: <00ba01c08e22$4f48b090$e46940d5@hagrid> finn wrote: > Jython only does a File().exists() (which is similar to a stat()). So on > WinNT, jython is behaving wrongly: > > Jython 2.0 on java1.3.0 (JIT: null) > Type "copyright", "credits" or "license" for more information. > >>> import stringio > >>> stringio.__file__ > 'I:\\java\\Jython.CVS\\Lib\\stringio.py' > >>> > > Yet I can't remember any bug reports where this have caused problems. maybe that because it's easier for a Jython programmer to test his new library under CPython before releasing it to the world, than it is for a CPython programmer on Windows to test his library on a Unix box... yes-i've-been-bitten-by-this--ack-in-the-old-days-ly yrs /F From fredrik at effbot.org Sat Feb 3 21:55:05 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Sat, 3 Feb 2001 21:55:05 +0100 Subject: [Python-Dev] Setup.local is getting zapped References: <14971.26729.54529.333522@beluga.mojam.com> 
                              
                              <14972.7656.829356.566021@beluga.mojam.com> <20010203092124.A30977@glacier.fnational.com> <200102032040.PAA04977@mercur.uphs.upenn.edu> Message-ID: <00c401c08e23$96b44510$e46940d5@hagrid> > Neil wrote: > Here is the story now: why not just keep the old behaviour? > clean > all object files and compilied .py files was: remove all junk, such as core files, emacs backup files, patch remains, pyc/pyo files, etc. > clobber > everything clean does plus executables, libraries, and > tag files was: clean plus executables, libraries, object files, and config stuff. use before reconfiguring/rebuilding. > > distclean: > > everything clobber does plus makefiles, generated .c > > files, configure files, Setup files, and lots of other > > crud that make did not actually generate (core, *~, > > *.orig, etc). was: clobber plus everything that shouldn't be in a distribution archive. use before tarring/zipping things up for distribution. from your description, the main difference seems to be that you've moved the "crud" part from "clean" to "distclean"... Cheers /F From tim.one at home.com Sat Feb 3 22:08:08 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 16:08:08 -0500 Subject: [Python-Dev] insertdict slower? In-Reply-To: <3A7C0BE8.A0109F5D@lemburg.com> Message-ID: 
                              
                              [MAL] > Looks like Jeremy's machine has a problem or this is the result > of different compiler optimizations. Are you using an AMD chip? They have different cache behavior than the Pentium I expect Jeremy is using. Different flavors of Pentium also have different cache behavior. If the slowdown his box reports in insertdict is real (which I don't know), cache effects are the most likely cause (given that the code has not changed at all). > On my machine using the same compiler and optimization settings > I get the following figure for DictCreation (2.1a1 vs. 2.0): > > DictCreation: 1869.35 ms 12.46 us +8.77% > > That's below noise level (+/-10%). Jeremy saw "about 15%". So maybe that's just *loud* noise 
                              
                              . noise-should-be-measured-in-decibels-ly y'rs - tim From tim.one at home.com Sat Feb 3 22:08:18 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 16:08:18 -0500 Subject: [Python-Dev] Re: Sets: elt in dict, lst.include In-Reply-To: <3A7BE7E7.5AA90731@lemburg.com> Message-ID: 
                              
                              [MAL] > I'm just trying to sell iterators to bare us the pain of overloading > the for-loop syntax just to get faster iteration over dictionaries. > > The idea is simple: put all the lookup, order and item building > code into the iterator, have many of them, one for each flavour > of values, keys, items and honeyloops, and then optimize the > for-loop/iterator interaction to get the best performance out > of them. > > There's really not much use in adding *one* special case to > for-loops when there are a gazillion different needs to iterate > over data structures, files, socket, ports, coffee cups, etc. They're simply distinct issues to me. Whether people want special syntax for iterating over dicts is (to me) independent of how the iteration protocol works. Dislike of the former should probably be stabbed into Ping's heart 
                              
                              . > I know. That's why you would do this: > > lock = [] > # we use the mutable state as lock indicator; initial state is mutable > > # try to acquire lock: > while 1: > prevstate = lock.mutable(0) > if prevstate == 0: > # was already locked > continue > elif prevstate == 1: > # we acquired the lock > break > > # release lock > lock.mutable(1) OK, in the lingo of the field, you're using .mutable(0) as a test-and-clear operation, and building a spin lock on top of it in "the usual" way. In that case the acquire code can be much simpler: while not lock.mutable(0): pass Same thing. I agree then that has basic lock semantics (relying indirectly on the global interpreter lock to make .mutable() calls atomic). But Python-level spin locks are thoroughly impractical: a waiting thread T will use 100% of its timeslice at 100% CPU utilization waiting for the lock, with no chance of succeeding (the global interpreter lock blocks all other threads while T is spinning, so no other thread *can* release the lock for the duration -- the spinning is futile). The performance characteristics would be horrid. So while "a lock", it's not a *useful* lock for threading. You got something against Python's locks 
                              
                              ? every-proposal-gets-hijacked-to-some-other-end-ly y'rs - tim From guido at digicool.com Sat Feb 3 22:10:56 2001 From: guido at digicool.com (Guido van Rossum) Date: Sat, 03 Feb 2001 16:10:56 -0500 Subject: [Python-Dev] Setup.local is getting zapped In-Reply-To: Your message of "Sat, 03 Feb 2001 21:55:05 +0100." <00c401c08e23$96b44510$e46940d5@hagrid> References: <14971.26729.54529.333522@beluga.mojam.com> 
                              
                              <14972.7656.829356.566021@beluga.mojam.com> <20010203092124.A30977@glacier.fnational.com> <200102032040.PAA04977@mercur.uphs.upenn.edu> <00c401c08e23$96b44510$e46940d5@hagrid> Message-ID: <200102032110.QAA13074@cj20424-a.reston1.va.home.com> > > Neil wrote: > > > Here is the story now: Effbot wrote: > why not just keep the old behaviour? Agreed. Unless there's a GNU guideline somewhere. > > clean > > all object files and compilied .py files > > was: remove all junk, such as core files, emacs backup files, > patch remains, pyc/pyo files, etc. This also always removed the .o files. > > clobber > > everything clean does plus executables, libraries, and > > tag files > > was: clean plus executables, libraries, object files, and config > stuff. use before reconfiguring/rebuilding. > > > > distclean: > > > everything clobber does plus makefiles, generated .c > > > files, configure files, Setup files, and lots of other > > > crud that make did not actually generate (core, *~, > > > *.orig, etc). > > was: clobber plus everything that shouldn't be in a distribution > archive. use before tarring/zipping things up for distribution. > > from your description, the main difference seems to be that you've > moved the "crud" part from "clean" to "distclean"... --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Sat Feb 3 23:24:51 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 17:24:51 -0500 Subject: [Python-Dev] creating __all__ in extension modules In-Reply-To: <14970.60750.570192.452062@beluga.mojam.com> Message-ID: 
                              
                              > Fredrik> what's the point? doesn't from-import already do > Fredrik> exactly that on C extensions? [Skip Montanaro] > Consider os. At one point it does "from posix import *". Okay, which > symbols now in its local namespace came from posix and which from its > own devices? It's a lot easier to do > > from posix import __all__ as _all > __all__.extend(_all) > del _all > > than to muck about importing posix, looping over its dict, then > incorporating what it finds. > > It also makes things a bit more consistent for introspective tools. I'm afraid I find it hard to believe people will *keep* C-module __all__ lists in synch with the code as the years go by. If we're going to do this, how about adding code to Py_InitModule4 that sucks the non-underscore names out of its PyMethodDef argument and automagically builds an __all__ attr? Then nothing ever needs to be fiddled by hand for C modules. but-unsure-i-like-__all__-at-all-ly y'rs - tim From fdrake at acm.org Sat Feb 3 23:22:00 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Sat, 3 Feb 2001 17:22:00 -0500 (EST) Subject: [Python-Dev] creating __all__ in extension modules In-Reply-To: 
                              
                              References: <14970.60750.570192.452062@beluga.mojam.com> 
                              
                              Message-ID: <14972.33928.540016.339352@cj42289-a.reston1.va.home.com> Tim Peters writes: > I'm afraid I find it hard to believe people will *keep* C-module __all__ > lists in synch with the code as the years go by. If we're going to do this, > how about adding code to Py_InitModule4 that sucks the non-underscore names > out of its PyMethodDef argument and automagically builds an __all__ attr? > Then nothing ever needs to be fiddled by hand for C modules. I don't think adding __all__ to C modules makes sense. If you want the equivalent for a module that doesn't have an __all__, you can compute it easily enough. Adding it when it isn't useful is a maintenance problem that can be avoided easily enough. -Fred -- Fred L. Drake, Jr. 
                              
                              PythonLabs at Digital Creations From skip at mojam.com Sun Feb 4 00:01:01 2001 From: skip at mojam.com (Skip Montanaro) Date: Sat, 3 Feb 2001 17:01:01 -0600 (CST) Subject: [Python-Dev] creating __all__ in extension modules In-Reply-To: 
                              
                              References: <14970.60750.570192.452062@beluga.mojam.com> 
                              
                              Message-ID: <14972.36269.845348.280744@beluga.mojam.com> Tim> I'm afraid I find it hard to believe people will *keep* C-module Tim> __all__ lists in synch with the code as the years go by. If we're Tim> going to do this, how about adding code to Py_InitModule4 that Tim> sucks the non-underscore names out of its PyMethodDef argument and Tim> automagically builds an __all__ attr? Then nothing ever needs to Tim> be fiddled by hand for C modules. The way it works now is that the module author inserts a call to _PyModuleCreateAllList at or near the end of the module's init func /* initialize module's __all__ list */ _PyModule_CreateAllList(d); that initializes and populates __all__ based on the keys in the module's dict. Unlike having to manually maintain __all__, I think this solution is fairly un-onerous (one-time change). Again, my assumption is that all non-underscore prefixed symbols in a module's dict will be exported. If this isn't true, the author can simply delete any elements from __all__ after the call to _PyModule_CreateAllList. This functionality can't be subsumed by Py_InitModule4 because the author is allowed to insert values into the module dict after that call (see posixmodule.c for a significant example of this). _PyModule_CreateAllList is currently defined in modsupport.c (not checked in yet) as /* helper function to create __all__ from an extension module's dict */ int _PyModule_CreateAllList(PyObject *d) { PyObject *v, *k, *s; unsigned int i; int res; v = PyList_New(0); if (v == NULL) return -1; res = 0; if (!PyDict_SetItemString(d, "__all__", v)) { k = PyDict_Keys(d); if (k == NULL) res = -1; else { for (i = 0; res == 0 && i < PyObject_Length(k); i++) { s = PySequence_GetItem(k, i); if (s == NULL) res = -1; else { if (PyString_AsString(s)[0] != '_') if (PyList_Append(v, s)) res = -1; Py_DECREF(s); } } } } Py_DECREF(v); return res; } I don't know (nor much care - you guys decide) if it's named with or without a leading underscore. I view it as a more-or-less internal function, but one that many C extension modules will call (guess that might make it not internal). I haven't written a doc blurb for the API manual yet. Skip From skip at mojam.com Sun Feb 4 00:03:20 2001 From: skip at mojam.com (Skip Montanaro) Date: Sat, 3 Feb 2001 17:03:20 -0600 (CST) Subject: [Python-Dev] creating __all__ in extension modules In-Reply-To: <14972.33928.540016.339352@cj42289-a.reston1.va.home.com> References: <14970.60750.570192.452062@beluga.mojam.com> 
                              
                              <14972.33928.540016.339352@cj42289-a.reston1.va.home.com> Message-ID: <14972.36408.800070.656541@beluga.mojam.com> Fred> I don't think adding __all__ to C modules makes sense. If you Fred> want the equivalent for a module that doesn't have an __all__, you Fred> can compute it easily enough. Adding it when it isn't useful is a Fred> maintenance problem that can be avoided easily enough. I thought I answered this question already when Fredrik asked it. In os.py, to build its __all__ list based upon the myriad different sets of symbols it might have after it's fancy footwork importing from various os-dependent modules, I think it's easiest to rely on those modules telling os what it should export. Skip From barry at digicool.com Sun Feb 4 00:43:37 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Sat, 3 Feb 2001 18:43:37 -0500 Subject: [Python-Dev] Waaay off topic, but I felt I had to share this... References: <14971.12207.566272.185258@beluga.mojam.com> 
                              
                              Message-ID: <14972.38825.231522.939983@anthem.wooz.org> >>>>> "TP" == Tim Peters 
                              
                              writes: TP> This inspired me to look at http://www.playboy.com/. A very TP> fancy, media-rich website, that appears to have been coded by TP> hand in Notepad by monkeys -- but monkeys with an inate sense TP> of Pythonic indentation: There goes Tim, browsing the Playboy site just for the JavaScript. Honest. -Barry From thomas at xs4all.net Sun Feb 4 01:42:09 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Sun, 4 Feb 2001 01:42:09 +0100 Subject: [Python-Dev] creating __all__ in extension modules In-Reply-To: <14972.36269.845348.280744@beluga.mojam.com>; from skip@mojam.com on Sat, Feb 03, 2001 at 05:01:01PM -0600 References: <14970.60750.570192.452062@beluga.mojam.com> 
                              
                              <14972.36269.845348.280744@beluga.mojam.com> Message-ID: <20010204014209.Y962@xs4all.nl> On Sat, Feb 03, 2001 at 05:01:01PM -0600, Skip Montanaro wrote: > Tim> I'm afraid I find it hard to believe people will *keep* C-module > Tim> __all__ lists in synch with the code as the years go by. If we're > Tim> going to do this, how about adding code to Py_InitModule4 that > Tim> sucks the non-underscore names out of its PyMethodDef argument and > Tim> automagically builds an __all__ attr? Then nothing ever needs to > Tim> be fiddled by hand for C modules. > The way it works now is that the module author inserts a call to > _PyModuleCreateAllList at or near the end of the module's init func > /* initialize module's __all__ list */ > _PyModule_CreateAllList(d); Regardless of the use of this __all__ for C modules, this function has the wrong name. If it's intended a real part of the API (and it should be, if you want modules to use it) it shouldn't have a leading underscore. As for the debate on the usefulness, I don't care much either way -- I don't write or maintain that many C modules (exactly 0, in fact :-) and though I see the logic in placing the responsibility with the C module writers, I also know I greatly prefer writing and maintaining Python modules than C modules. Placing the responsibility in the (Python) module doing the 'from .. import *' sounds like a good enough idea to me. I'm also not sure what other examples of its use are out there, other than os.py. -- Thomas Wouters 
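To make the "put the responsibility in the importing Python module" option concrete, here is a minimal sketch of what the consumer side could look like. It is illustrative only: posix stands in for whichever platform C module the wrapper happens to pull in, and the names chosen are not from any checked-in code.

    # Hypothetical consumer-side approach: the wrapping .py module, not the
    # C extension, decides what it re-exports.
    from posix import *          # pull in the platform module's names
    import posix as _platform

    # Export every public name the C module defines, without asking the
    # C module to maintain an __all__ of its own.
    __all__ = [_n for _n in dir(_platform) if not _n.startswith('_')]
    del _platform

The trade-off is the one under discussion: the C module stays untouched, but every wrapping module has to repeat this little dance.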
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From thomas at xs4all.net Sun Feb 4 01:44:09 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Sun, 4 Feb 2001 01:44:09 +0100 Subject: [Python-Dev] Waaay off topic, but I felt I had to share this... In-Reply-To: <14972.38825.231522.939983@anthem.wooz.org>; from barry@digicool.com on Sat, Feb 03, 2001 at 06:43:37PM -0500 References: <14971.12207.566272.185258@beluga.mojam.com> 
                              
                              <14972.38825.231522.939983@anthem.wooz.org> Message-ID: <20010204014409.Z962@xs4all.nl> On Sat, Feb 03, 2001 at 06:43:37PM -0500, Barry A. Warsaw wrote: > >>>>> "TP" == Tim Peters 
                              
                              writes: > TP> This inspired me to look at http://www.playboy.com/. A very > TP> fancy, media-rich website, that appears to have been coded by > TP> hand in Notepad by monkeys -- but monkeys with an inate sense > TP> of Pythonic indentation: > There goes Tim, browsing the Playboy site just for the JavaScript. Honest. Well, the sickest part is how I read Skip's post, and thought "Oh god, Tim is going to reply to this, I'm sure of it". And I was right :) Lets-see-if-he-gets-the-hidden-meaning-of-*this*-post-ly y'rs, -- Thomas Wouters 
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From thomas at xs4all.net Sun Feb 4 03:01:13 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Sun, 4 Feb 2001 03:01:13 +0100 Subject: [Python-Dev] Nested scopes. Message-ID: <20010204030113.A962@xs4all.nl> So I've been reading python-list and pondering the nested scope issue. I even read the PEP (traded Sleep(tm) for it :). And I'm thinking we can fix the entire nested-scopes-in-combination-with-local-namespace-modifying-stmts issue by doing a last-ditch effort when the codeblock creates a nested scope _and_ uses 'from-import *' or 'exec'. Looking at the noise on python-list I think we should really try to do that. Making 'from foo import *' and 'exec' work in the absense of nested scopes might not be enough, given that a simple 'lambda: 0' statement would suffice to break code again. Here's what I think could work: In absense of 'exec' or 'import*' in a local namespace, compile it as currently. In absense of a nested scope, compile it as 2.0 did, allowing exec and import*. In case both exist, resolve all names local to the nested function as local names, but generate LOAD_PLEASE (or whatever) opcodes that do a top-down search of all parent scopes at runtime. I'm sure it would mean a lot of confusion if people mix 'from foo import *' and a nested scope that intends to use a global, but ends up using a name imported from foo, but I'm also sure it will create a lot less confusion than just breaking a lot of code, for no apparent reason (because that is and will be how people see it.) I also realize implementing the LOAD_PLEASE opcode isn't that straightforward. It requires a pointer from each nested scope to its parent scope (I'm not sure if those exist yet) and it also requires a way to search a function-local namespace (but that should be possible, since that is what LOAD_NAME does.) It would be terribly inefficient (relatively speaking), but so is the use of from-import* in 2.0, so I don't really consider that an issue. The only thing I'm really not sure of is why this hasn't already been done; is there a strong fundamental argument against this aproach other than the (very valid) issue of 'too many features, too little time' ? I still have to grok the nested-scope changes to the compiler and interpreter, so I might have overlooked something. And finally, if this change is going to happen it has to happen before Python 2.1, preferably before 2.1b1. If we ship 2.1-final with the current restrictions, or even the toned-down restrictions of "no import*/exec near nested scopes", it will probably not matter for 2.2, one way or another. Willing-to-write-it-if-given-an-extra-alpha-to-do-it-ly y'rs, -- Thomas Wouters 
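For readers trying to picture the problem case, here is a small sketch of the kind of function the proposal is aimed at (illustrative only; the string module is just a convenient 2.x-era example whose star-import injects names like 'letters'):

    def outer():
        from string import *       # which names this adds is unknowable
                                    # to the compiler
        def inner():
            # Should 'letters' resolve to outer()'s namespace, to the
            # module globals, or be rejected outright?  This is the case
            # static analysis cannot settle.
            return letters
        return inner()

Under the pre-nested-scopes rules, inner() simply looks 'letters' up as a global; under nested scopes the compiler has to either reject the construct or fall back to a run-time search of the enclosing scopes, which is exactly what the LOAD_PLEASE idea above would do.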
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From tim.one at home.com Sun Feb 4 04:33:48 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 22:33:48 -0500 Subject: [Python-Dev] Waaay off topic, but I felt I had to share this... In-Reply-To: <20010204014409.Z962@xs4all.nl> Message-ID: 
                              
                              [Barry A. Warsaw] > There goes Tim, browsing the Playboy site just for the > JavaScript. Honest. Well, it's not like they had many floating-point numbers to ogle! I like 'em best when the high-order mantissa bits are all perky and regular, standing straight up, then go monster insane in the low-order bits, so you can't guess *what* bit might come next! Man, that's hot. Top it off witn an exponent field with lots of ones, and you don't even need any oil. Can't say I've got a preference for sign bits, though -- zero and one can both be saucy treats. Zero is more of a tease, so I guess it depends on the mood. But they didn't have anything like that, just boring old "money doubles", like 29.95. What's up with that? I mean the low-order bits are all like 0x33. Do I have to do *all* the work, while it just *sits* there nagging "3, 3, 3, 3, ..., crank me out forever, big poppa pump, but that's all you're ever gonna get!"? So I settled for the JavaStrip. a-real-man-takes-what-he-can-get-ly y'rs - tim From ping at lfw.org Sun Feb 4 05:30:11 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Sat, 3 Feb 2001 20:30:11 -0800 (PST) Subject: [Python-Dev] Re: Sets: elt in dict, lst.include In-Reply-To: 
                              
                              Message-ID: 
                              
                              On Sat, 3 Feb 2001, Tim Peters wrote: > They're simply distinct issues to me. Whether people want special syntax > for iterating over dicts is (to me) independent of how the iteration > protocol works. Dislike of the former should probably be stabbed into > Ping's heart 
                              
                              . Ow! Hey. :) We have shorthand like x[k] for spelling x.__getitem__[k]; why not shorthand like 'for k:v in x: ...' for spelling 'iter = x.__iteritems__(); while 1: k, v = iter() ...'? Hmm. What is the issue really with? - the key:value syntax suggested by Guido (i like it quite a lot) - the existence of special __iter*__ methods (seems natural to me; this is how we customize many operators on instances already) - the fact that 'for k:v' checks __iteritems__, __iter__, items, and __getitem__ (it *has* to check all of these things if it's going to play nice with existing mappings and sequences) - or something else? I'm not actually that clear on what the general feeling is about this PEP. Moshe seems to be happy with the first part but not the rest; Tim, do you have a similar position? Eric and Greg both disagreed with Moshe's counter-proposal; does that mean you like the original, or that you would rather do something different altogether? Moshe Zadka wrote: > dict.iteritems() could return not an iterator, but a magical object > whose iterator is the requested iterator. Ditto itervalues(), iterkeys() Seems like too much work to me. I'd rather just have the object produce a straight iterator. (By 'iterator' i mean an ordinary callable object, nothing too magical.) If there are unusual cases where you want to iterate over an object in several different ways i suppose they can create pseudo-sequences in the manner you described, but i think we want to make the most common case (iterating over the object itself) very easy. That is, just implement __iter__ and have it produce a callable. Marc A. Lemburg wrote: > The idea is simple: put all the lookup, order and item building > code into the iterator, have many of them, one for each flavour > of values, keys, items and honeyloops, and then optimize the > for-loop/iterator interaction to get the best performance out > of them. > > There's really not much use in adding *one* special case to > for-loops when there are a gazillion different needs to iterate > over data structures, files, socket, ports, coffee cups, etc. I couldn't tell which way you were trying to argue here. Are you in favour of the general flavour of PEP 234 or did you have in mind something different? Your first paragraph above seems to describe what 234 does. -- ?!ng "There's no point in being grown up if you can't be childish sometimes." -- Dr. Who From esr at thyrsus.com Sun Feb 4 05:46:50 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Sat, 3 Feb 2001 23:46:50 -0500 Subject: [Python-Dev] Re: Sets: elt in dict, lst.include In-Reply-To: 
                              
                              ; from ping@lfw.org on Sat, Feb 03, 2001 at 08:30:11PM -0800 References: 
                              
                              
                              Message-ID: <20010203234650.A4133@thyrsus.com> Ka-Ping Yee 
                              
                              : > I'm not actually that clear on what the general feeling is about > this PEP. Moshe seems to be happy with the first part but not > the rest; Tim, do you have a similar position? Eric and Greg both > disagreed with Moshe's counter-proposal; does that mean you like > the original, or that you would rather do something different > altogether? I haven't yet heard a proposal that I find compelling. And, I have to admit, I've grown somewhat confused about the alternatives on offer. -- 
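For concreteness, here is a sketch of the "the iterator is just a callable produced by __iter__" variant Ping describes above. This is only one of the alternatives on offer, not the protocol any released Python uses, and the end-of-iteration signal shown (IndexError) is a stand-in -- picking the real one is part of the open design question.

    class Counter:
        """Toy example: __iter__ hands back a plain callable that
        returns the next item on each call."""
        def __init__(self, n):
            self.n = n
            self.i = 0
        def __iter__(self):
            return self._next
        def _next(self):
            if self.i >= self.n:
                raise IndexError        # stand-in end-of-iteration signal
            value = self.i
            self.i = self.i + 1
            return value

    # A for-loop built on this protocol would amount to:
    #     it = thing.__iter__()
    #     while 1:
    #         try: item = it()
    #         except IndexError: break
    #         ...use item...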
                              Eric S. Raymond Of all tyrannies, a tyranny exercised for the good of its victims may be the most oppressive. It may be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end, for they do so with the approval of their consciences. -- C. S. Lewis From jafo at tummy.com Sun Feb 4 05:50:15 2001 From: jafo at tummy.com (Sean Reifschneider) Date: Sat, 3 Feb 2001 21:50:15 -0700 Subject: [Python-Dev] Re: Python 2.1 alpha 2 released In-Reply-To: <14971.17735.263154.15769@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Fri, Feb 02, 2001 at 06:39:51PM -0500 References: <14971.17735.263154.15769@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010203215015.B29866@tummy.com> On Fri, Feb 02, 2001 at 06:39:51PM -0500, Jeremy Hylton wrote: >The release is currently available from SourceForge and will also be My SRPM is available at: ftp://ftp.tummy.com/pub/tummy/RPMS/SRPMS/ To turn it into a binary RPM for your rpm-based system, run "rpm --rebuild python-2.1a2-1tummy.src.rpm", get a cup of coffee, and then install the resulting binary RPMs (probably under "/usr/src/redhat/RPMS/i386"). Enjoy, Sean -- What no spouse of a programmer can ever understand is that a programmer is working when he's staring out the window. Sean Reifschneider, Inimitably Superfluous 
                              
                              tummy.com - Linux Consulting since 1995. Qmail, KRUD, Firewalls, Python From tim.one at home.com Sun Feb 4 07:42:26 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 4 Feb 2001 01:42:26 -0500 Subject: [Python-Dev] RE: [Python-checkins] CVS: python/dist/src/Modules _testmodule.c,NONE,1.1 In-Reply-To: 
                              
                              Message-ID: 
                              
                              [Jack Jansen] > Is "_test" a good choice of name for this module? It feels a bit > too generic, isn't something like _test_api (or _test_python_c_api) > better? Note that I renamed all this stuff, from _testXXX to _testcapiXXX, but after 2.1a2 was released. better-late-than-early-ly y'rs - tim From tim.one at home.com Sun Feb 4 08:06:21 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 4 Feb 2001 02:06:21 -0500 Subject: [Python-Dev] A word from the author (was "pymalloc", was "fun", was "2.1 slowe r than 2.0") In-Reply-To: <4C99842BC5F6D411A6A000805FBBB199051F5B@ge0057exch01.micro.lucent.com> Message-ID: 
                              
                              [Vladimir Marangozov] Hi Vladimir! It's wonderful to see you here again. We had baked a cake for your return, but it's been so long I'm afraid I ate it 
                              
                              . Help us out a little more, briefly. The last time you mentioned obmalloc on Python-Dev was: Date: Fri, 8 Sep 2000 18:23:13 +0200 (CEST) Subject: [Python-Dev] 2.0 Optimization & speed > ... > The only reason I've postponed my obmalloc patch is that I > still haven't provided an interface which allows evaluating > it's impact on the mem size consumption. Still a problem in your eyes? In my eyes mem size was something most people would evaluate via their system-specific process monitoring tools, and they wouldn't believe what we said about it anyway <0.9 wink>. Then the last thing mentioned in the patch http://sourceforge.net/patch/?func=detailpatch&patch_id=101104& group_id=5470 was 2000-Aug-12 13:31: > Status set to Postponed. > > Although promising, this hasn't enjoyed much user testing for the > 2.0 time frame (partly because of the lack of an introspective > Python interface which can't be completed in time according to > the release schedule). But at that time it had been tested by more Python-Dev'ers than virtually any other patch in history (yes, I think two may still be the record <0.7 wink>), and nobody else was *asking* for an introspective interface -- they were just timing stuff, and looking at top/wintop/whatever. Now you seem much less hesitant, but still holding back: > Because the risk (long-term) is kind of unknown. I'll testify that the long-term risk of *any* patch is kind of unknown, if that will help. > ... > I'd say, opt-in for 2.1. No risk, enables profiling. Good. > My main reservation is about thread safety from extensions (but > this could be dealt with at a later stage) I expect we'll have to do the dance of evaluating it with and without locks regardless -- we keep pretending that GregS will work on free-threading sometime *this* millennium now 
                              
                              . BTW, obmalloc has some competition. Hans Boehm popped up on c.l.py last week, transparently trying to seduce Neil Schemenauer into devoting his life to making the BDW collector make Python's refcounting look like a cheap Dutch trick 
                              
                              : http://www.deja.com/getdoc.xp?AN=722453837&fmt=text you-miss-so-much-when-you're-away-ly y'rs - tim From tim.one at home.com Sun Feb 4 09:13:29 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 4 Feb 2001 03:13:29 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: <14972.10746.34425.26722@anthem.wooz.org> Message-ID: 
                              
                              [Tim] > So a retroactive -1 on this last-second patch -- and a waaaaay > retroactive -1 on Python's behavior on Windows too. [Barry A. Warsaw] > So, let's tease out what the Right solution would be, and then > see how close or if we can get there for 2.1. I've no clue what > behavior Mac and Windows users would /like/ to see -- what would > be most natural for them? Nobody knows -- I don't think "they've" ever been asked. All *developers* want Unix semantics (keep going until finding an exact match -- that's what Steven's patch did). That's not good enough for Windows because of case-destroying network file systems and case-destroying old tools, but that + PYTHONCASEOK (stop on the first match of any kind) is good enough for Windows in my experience. > OTOH, I like the Un*x behavior Of course you do -- you're a developer when you're not a bass player 
                              
                              . No developer wants "file" to have 16 distinct potential meanings. > and I think I'd want to see platforms like Cygwin and MacOSX-on- > non-HFS+ get as close to that as possible. Well, MacOSX-on-non-HFS+ *is* Unix, right? So that should take care of itself (ya, right). I don't understand what Cygwin does; here from a Cygwin bash shell session: tim at fluffy ~ $ touch abc tim at fluffy ~ $ touch ABC tim at fluffy ~ $ ls abc tim at fluffy ~ $ wc AbC 0 0 0 AbC tim at fluffy ~ $ ls A* ls: A*: No such file or directory tim at fluffy ~ So best I can tell, they're like Steven: working with a case-insensitive filesystem but trying to make Python insist that it's not, and what basic tools there do about case is seemingly random (wc doesn't care, shell expansion does, touch doesn't, rm doesn't (not shown) -- maybe it's just shell expansion that's trying to pretend this is Unix? oh ya, shell expansion and Python import -- *that's* a natural pair 
                              
                              ). > Is it better to have uniform behavior across all platforms (modulo > places like some Windows network fs's where that may not be possible)? I think so, but I've already said that. "import" is a language statement, not a platform file operation at heart. Of *course* people expect open("FiLe") to open files "file" or "FILE" (or even "FiLe" 
                              
                              ) on Windows, but inside Python stmts they expect case to matter. > Should Python's import semantics be identical across all platforms? > OTOH, this is where the rubber meets the road so to speak, so some > incompatibilities may be impossible to avoid. I would prefer it, but if Guido thinks Python's import semantics should derive from the platform's filesystem semantics, fine, and then any "Python import should pretend it's Unix" patch should get tossed without further debate. But Guido doesn't think that either, else Windows Python wouldn't complain about "import FILE" finding file.py first (there is no other tool on Windows that cares at all -- everything else would just open file.py). So I view the current rules as inexplicable: they're neither platform-independent nor consistent with the platform's natural behavior (unless that platform has case-sensitive filesystem semantics). Bottom line: for the purpose of import-from-file (and except for case-destroying filesystems, where PYTHONCASEOK is the only hope), we *can* make case-insensitive case-preserving filesystems "act like" they were case-sensitive with modest effort. We can't do the reverse. That would lead to explainable rules and maximal portability. I'll worry about moving all my Python files into a single directory when it comes up (hasn't yet). > And what about Jython? Oh yeah? What about Vyper 
                              
                              ? otoh-if-i-actually-cared-about-case-i-would-never-have-adopted- this-silly-sig-style-ly y'rs - tim From vladimir.marangozov at optimay.com Sun Feb 4 15:02:32 2001 From: vladimir.marangozov at optimay.com (Vladimir Marangozov) Date: Sun, 4 Feb 2001 15:02:32 +0100 Subject: [Python-Dev] A word from the author (was "pymalloc", was "fun ", was "2.1 slowe r than 2.0") Message-ID: <4C99842BC5F6D411A6A000805FBBB199051F5D@ge0057exch01.micro.lucent.com> [Tim] > > Help us out a little more, briefly. The last time you > mentioned obmalloc on > Python-Dev was: > > Date: Fri, 8 Sep 2000 18:23:13 +0200 (CEST) > Subject: [Python-Dev] 2.0 Optimization & speed > > ... > > The only reason I've postponed my obmalloc patch is that I > > still haven't provided an interface which allows evaluating > > it's impact on the mem size consumption. > > Still a problem in your eyes? Not really. I think obmalloc is a win w.r.t. both space & speed. I was aiming at evaluating precisely how much we win with the help of the profiler, then tune the allocator even more, but this is OS dependant anyway and most people don't dig so deep. I think they don't need to either, but it's our job to have a good understanding of what's going on. In short, you can go for it, opt-in, without fear. Not opt-out, though, because of custom object's thread safety. Thread safety is a problem. Current extensions implement custom object constructors & destructors safely, because they use (at the end of the macro chain, today) the system allocator which is thread safe. Switching to a thread unsafe allocator by default is risky because this may inject bugs in existing working extensions. Although the core objects won't be affected by this change because of the interpreter lock protection, we have no provisions so far for custom object's thread safety. > > I expect we'll have to do the dance of evaluating it with and > without locks regardless See above -- it's not about speed, it's about safety. > BTW, obmalloc has some competition. Hans Boehm popped up on > c.l.py last week, transparently trying to seduce Neil Schemenauer > into devoting his life to making the BDW collector make Python's > refcounting look like a cheap Dutch trick 
                              
                              : > > http://www.deja.com/getdoc.xp?AN=722453837&fmt=text Yes, I saw that. Hans is speaking from experience, but a non-Python one 
                              
                              . I can hardly agree with the idea of dropping RC (which is the best strategy towards expliciteness and fine-grain control of the object's life-cycles) and replacing it with some collector beast (whatever its nature). We'll loose control for unknown benefits. We're already dealing with the major pb of RC (cycle garbage) in an elegant way which is complementary to RC. Saying that we're probably dirtying more cache lines than we should in concurrent scenarios is ... an opinion. My opinion is that this is not really our problem 
                              
                              . If Hans were really right, Microsoft would have already plugged his collector in Windows, instead of using RC. And we all know that MS is unbeatable in providing efficient, specialized implementations for Windows, tuned for the processors Windows in running on 
                              
                              . On a personal note, after 3 months in Munich, I am still intrigued by the many cheap Dutch tricks I see every day on my way, like the latest Mercs, BMWs, Porsches or Audis, to name a few 
                              
                              . can't-impress-them-with-my-Ford-
                              
                              'ly y'rs Vladimir From gvwilson at ca.baltimore.com Sun Feb 4 15:19:47 2001 From: gvwilson at ca.baltimore.com (Greg Wilson) Date: Sun, 4 Feb 2001 09:19:47 -0500 Subject: [Python-Dev] re: Sets BOF / for in dict In-Reply-To: <20010204140714.81BBAE8C2@mail.python.org> Message-ID: <000301c08eb5$876baf20$770a0a0a@nevex.com> I've spoken with Barbara Fuller (IPC9 org.); the two openings for a BOF on sets are breakfast or lunch on Wednesday the 7th. I'd prefer breakfast (less chance of me missing my flight :-); is there anyone who's interested in attending who *can't* make that time, but *could* make lunch? And meanwhile: > Ka-Ping Yee: > - the key:value syntax suggested by Guido (i like it quite a lot) Greg Wilson: Did another quick poll; feeling here is that if for key:value in dict: works, then: for index:value in sequence: would also be expected to work. If the keys to the dictionary are (for example) 2-element tuples, then: for (left, right):value in dict: would also be expected to work, just as: for ((left, right), value) in dict.items(): now works. Question: would the current proposal allow NumPy arrays (just as an example) to support both: for index:value in numPyArray: where 'index' would get tuples like '(0, 3, 2)' for a 3D array, *and* for (i, j, k):value in numPyArray: If so, then yeah, it would tidy up a fair bit of my code... Thanks, Greg From thomas at xs4all.net Sun Feb 4 16:10:28 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Sun, 4 Feb 2001 16:10:28 +0100 Subject: [Python-Dev] re: Sets BOF / for in dict In-Reply-To: <000301c08eb5$876baf20$770a0a0a@nevex.com>; from gvwilson@ca.baltimore.com on Sun, Feb 04, 2001 at 09:19:47AM -0500 References: <20010204140714.81BBAE8C2@mail.python.org> <000301c08eb5$876baf20$770a0a0a@nevex.com> Message-ID: <20010204161028.D962@xs4all.nl> On Sun, Feb 04, 2001 at 09:19:47AM -0500, Greg Wilson wrote: > If the keys to the dictionary are (for example) 2-element tuples, then: > for (left, right):value in dict: > would also be expected to work, There is no real technical reason for it not to work. From a grammer point of view, for left, right:value in dict: would also work fine. (the grammar would be: 'for' exprlist [':' exprlist] 'in' testlist: and since there can't be a colon inside an exprlist, it's not ambiguous.) The main problem is whether you *want* that to work :) -- Thomas Wouters 
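To keep the two spellings side by side, a small sketch of what the proposed form would replace. The key:value syntax is shown only in comments, since at this point it is a proposal and not valid syntax in any released Python:

    d = {('a', 1): 'x', ('b', 2): 'y'}

    # Today's spelling: build (key, value) tuples and unpack them.
    for (left, right), value in d.items():
        print((left, right, value))

    # Proposed spelling under discussion:
    #     for (left, right):value in d:
    #         ...
    # and, for the simple case:
    #     for key:value in d:
    #         ...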
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From fdrake at acm.org Sun Feb 4 17:26:51 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Sun, 4 Feb 2001 11:26:51 -0500 (EST) Subject: [Python-Dev] creating __all__ in extension modules In-Reply-To: <14972.36408.800070.656541@beluga.mojam.com> References: <14970.60750.570192.452062@beluga.mojam.com> 
                              
                              <14972.33928.540016.339352@cj42289-a.reston1.va.home.com> <14972.36408.800070.656541@beluga.mojam.com> Message-ID: <14973.33483.956785.985303@cj42289-a.reston1.va.home.com> Skip Montanaro writes: > I thought I answered this question already when Fredrik asked it. In os.py, You did, and I'd have responded then had I been able to spare the time to reply. (I wasn't ignoring the topic.) > to build its __all__ list based upon the myriad different sets of symbols it > might have after it's fancy footwork importing from various os-dependent > modules, I think it's easiest to rely on those modules telling os what it > should export. But since C extensions inherantly control their exports very tightly, perhaps the right approach is to create the __all__ value in the code that needs it -- it usually won't be needed for C extensions, and the os module is a fairly special case anyway. Perhaps this helper would be a good approach: def _get_exports_list(module): try: return list(module.__all__) except AttributeError: return [n for n in dir(module) if n[0] != '_'] The os module could then use: _OS_EXPORTS = ['path', ...] if 'posix' in _names: ... __all__ = _get_exports_list(posix) del posix elif ...: ... _OS_EXPORTS = ['linesep', 
                              
                              ] __all__.extend(_OS_EXPORTS) -Fred -- Fred L. Drake, Jr. 
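A quick illustration of how Fred's helper behaves, assuming _get_exports_list as defined in the message above is in scope (the throwaway module built with imp.new_module is purely for demonstration):

    import imp
    demo = imp.new_module('demo')
    demo.spam = 1
    demo._hidden = 2
    print(_get_exports_list(demo))      # -> ['spam']  (no __all__, dir() fallback)

    demo.__all__ = ['spam', 'eggs']
    print(_get_exports_list(demo))      # -> ['spam', 'eggs']  (copy of __all__)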
                              
                              PythonLabs at Digital Creations From guido at digicool.com Sun Feb 4 17:55:08 2001 From: guido at digicool.com (Guido van Rossum) Date: Sun, 04 Feb 2001 11:55:08 -0500 Subject: [Python-Dev] re: Sets BOF / for in dict In-Reply-To: Your message of "Sun, 04 Feb 2001 09:19:47 EST." <000301c08eb5$876baf20$770a0a0a@nevex.com> References: <000301c08eb5$876baf20$770a0a0a@nevex.com> Message-ID: <200102041655.LAA20836@cj20424-a.reston1.va.home.com> > I've spoken with Barbara Fuller (IPC9 org.); the two openings for a > BOF on sets are breakfast or lunch on Wednesday the 7th. I'd prefer > breakfast (less chance of me missing my flight :-); is there anyone > who's interested in attending who *can't* make that time, but *could* > make lunch? Fine with me. > And meanwhile: > > > Ka-Ping Yee: > > - the key:value syntax suggested by Guido (i like it quite a lot) > > Greg Wilson: > Did another quick poll; feeling here is that if > > for key:value in dict: > > works, then: > > for index:value in sequence: > > would also be expected to work. If the keys to the dictionary are (for > example) 2-element tuples, then: > > for (left, right):value in dict: > > would also be expected to work, just as: > > for ((left, right), value) in dict.items(): > > now works. Yes, that's all non-controversial. > Question: would the current proposal allow NumPy arrays (just as an > example) to support both: > > for index:value in numPyArray: > > where 'index' would get tuples like '(0, 3, 2)' for a 3D array, *and* > > for (i, j, k):value in numPyArray: > > If so, then yeah, it would tidy up a fair bit of my code... That's up to the numPy array! Assuming that we introduce this together with iterators, the default NumPy iterator could be made to iterate over all three index sets simultaneously; there could be other iterators to iterate over a selection of index sets (e.g. to iterate over the rows). However the iterator can't be told what form the index has. --Guido van Rossum (home page: http://www.python.org/~guido/) From martin at loewis.home.cs.tu-berlin.de Sun Feb 4 18:43:34 2001 From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis) Date: Sun, 4 Feb 2001 18:43:34 +0100 Subject: [Python-Dev] Re: A word from the author Message-ID: <200102041743.f14HhYE01986@mira.informatik.hu-berlin.de> > Although the core objects won't be affected by this change because > of the interpreter lock protection, we have no provisions so far for > custom object's thread safety. If I understand your concern correctly, you are worried that somebody uses your allocator without holding the interpreter lock. I think it is *extremely* unlikely that a module author will use any Py* function or macro while not holding the lock. I've analyzed a few freely-available extension modules in this respect, and found no occurence of such code. The right way is to document that restriction, both in NEWS and in the C API documentation, and accept the unlikely chance of breaking something. Regards, Martin From esr at thyrsus.com Sun Feb 4 19:20:03 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Sun, 4 Feb 2001 13:20:03 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? Message-ID: <20010204132003.A16454@thyrsus.com> Python's .pyc files don't have a magic prefix that the file(1) utility can recognize. Would anyone object if I fixed this? A trivial pair of hacks to the compiler and interpreter would do it. Backward compatibility would be easily arranged. 
Embedding the Python version number in the prefix might enable some useful behavior down the road. -- 
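As a concrete picture of what file(1) has to match, here is a small sketch that reads the 2.x-era .pyc header being discussed in this thread: a 4-byte version-specific magic (ending in \r\n), then a 4-byte timestamp of the matching .py file, then the marshalled code. It is illustrative only and makes no promise about future formats.

    import imp, struct

    def inspect_pyc(path):
        f = open(path, 'rb')
        magic = f.read(4)                            # version-specific magic
        mtime = struct.unpack('<l', f.read(4))[0]    # mtime of the source .py
        f.close()
        return magic == imp.get_magic(), magic, mtime

    # ok is true only when the file was compiled by this interpreter version:
    # ok, magic, mtime = inspect_pyc('spam.pyc')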
                              Eric S. Raymond The end move in politics is always to pick up a gun. -- R. Buckminster Fuller From fredrik at pythonware.com Sun Feb 4 20:00:48 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Sun, 4 Feb 2001 20:00:48 +0100 Subject: [Python-Dev] Identifying magic prefix on Python files? References: <20010204132003.A16454@thyrsus.com> Message-ID: <009701c08edc$ca78fd50$e46940d5@hagrid> eric wrote: > Python's .pyc files don't have a magic prefix that the file(1) utility > can recognize. Would anyone object if I fixed this? A trivial pair of > hacks to the compiler and interpreter would do it. Backward compatibility > would be easily arranged. > > Embedding the Python version number in the prefix might enable some > useful behavior down the road. Python 1.5.2 (#0, May 9 2000, 14:04:03) >>> import imp >>> imp.get_magic() '\231N\015\012' Python 2.0 (#8, Jan 29 2001, 22:28:01) >>> import imp >>> imp.get_magic() '\207\306\015\012' >>> open("some_module.pyc", "rb").read(4) '\207\306\015\012' Python 2.1a1 (#9, Jan 19 2001, 08:41:32) >>> import imp >>> imp.get_magic() '\xdc\xea\r\n' if you want to change the magic, there are a couple things to consider: 1) the header must consist of imp.get_magic() plus a 4-byte timestamp, followed by a marshalled code object 2) the magic should be four bytes. 3) the magic must be different for different bytecode versions 4) the magic shouldn't survive text/binary conversions on platforms which treat text files and binary files diff- erently. Cheers /F From ping at lfw.org Sun Feb 4 20:34:33 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Sun, 4 Feb 2001 11:34:33 -0800 (PST) Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: <009701c08edc$ca78fd50$e46940d5@hagrid> Message-ID: 
                              
                              eric wrote: > Python's .pyc files don't have a magic prefix that the file(1) utility > can recognize. Would anyone object if I fixed this? On Sun, 4 Feb 2001, Fredrik Lundh wrote: > Python 1.5.2 (#0, May 9 2000, 14:04:03) > >>> import imp > >>> imp.get_magic() > '\231N\015\012' I don't understand, Eric. Why won't the existing magic number work? I tried the following and it works fine: 0 string \x99N\x0d Python 1.5.2 compiled bytecode data 0 string \x87\xc6\x0d Python 2.0 compiled bytecode data However, when i add \x0a to the end of the bytecode patterns, this stops working: 0 string \x99N\x0d\x0a Python 1.5.2 compiled bytecode data 0 string \x87\xc6\x0d\x0a Python 2.0 compiled bytecode data Do you know what's going on? These all work fine too, by the way: 0 string #!/usr/bin/env\ python Python program text 0 string #!\ /usr/bin/env\ python Python program text 0 string #!/bin/env\ python Python program text 0 string #!\ /bin/env\ python Python program text 0 string #!/usr/bin/python Python program text 0 string #!\ /usr/bin/python Python program text 0 string #!/usr/local/bin/python Python program text 0 string #!\ /usr/local/bin/python Python program text 0 string """ Python module text Unfortunately, many Python modules are mis-recognized as Java source text because they begin with the word "import". Even more unfortunately, this too-general test for "import" seems to be hard-coded into the file(1) command and cannot be changed by editing /usr/share/magic. -- ?!ng "Old code doesn't die -- it just smells that way." -- Bill Frantz From tim.one at home.com Sun Feb 4 21:19:50 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 4 Feb 2001 15:19:50 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: <20010204132003.A16454@thyrsus.com> Message-ID: 
                              
                              [Eric S. Raymond] > Python's .pyc files don't have a magic prefix that the file(1) > utility can recognize. Well, they *do* (#define MAGIC in import.c), but it changes from time to time. Here's a NEWS item from 2.1a1: - The interpreter accepts now bytecode files on the command line even if they do not have a .pyc or .pyo extension. On Linux, after executing echo ':pyc:M::\x87\xc6\x0d\x0a::/usr/local/bin/python:' > /proc/sys/fs/binfmt_misc/register any byte code file can be used as an executable (i.e. as an argument to execve(2)). However, the magic number has changed twice since then (in import.c rev 2.157 and again in rev 2.160), so the NEWS item is two changes obsolete. The current magic number can be obtained (as a 4-bytes string) via import imp MAGIC = imp.get_magic() > Would anyone object if I fixed this? Undoubtedly, but not me 
                              
                              . Mucking with the .pyc prefix is always contentious. > A trivial pair of hacks to the compiler and interpreter would > do it. Also need to adjust .py files using imp.get_magic(). Backward compatibility would be easily arranged. Embedding > the Python version number in the prefix might enable some useful > behavior down the road. Note that the current scheme uses a 4-byte value, where the last two bytes are fixed, and the first two are (year-1995)*10000 + (month * 100) + day where month and day are 1-based. What it's recording (unsure this is explained anywhere) is the day on which an incompatible change got made to the PVM. This is important to check so that whatever version of Python you're running doesn't try to execute bytecodes generated for an incompatible PVM. But only Python has a chance of understanding this. Note too that the method used for encoding the date runs out of bits at the end of 2001, so the current scheme is on its last legs regardless. couldn't-be-simpler
                              
                              -ly y'rs - tim From guido at digicool.com Sun Feb 4 22:02:22 2001 From: guido at digicool.com (Guido van Rossum) Date: Sun, 04 Feb 2001 16:02:22 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: Your message of "Sun, 04 Feb 2001 13:20:03 EST." <20010204132003.A16454@thyrsus.com> References: <20010204132003.A16454@thyrsus.com> Message-ID: <200102042102.QAA23574@cj20424-a.reston1.va.home.com> > Python's .pyc files don't have a magic prefix that the file(1) utility > can recognize. Would anyone object if I fixed this? A trivial pair of > hacks to the compiler and interpreter would do it. Backward compatibility > would be easily arranged. I don't understand. The .pyc file has a magic number. Why is this incompatible with file(1)? > Embedding the Python version number in the prefix might enable some > useful behavior down the road. If we're going to redesign the .pyc file header, I'd propose the following: (1) magic number -- for file(1), never to be changed (2) some kind of version -- Python version, or API version, or bytecode version (3) mtime of .py file (4) options, e.g. is this a .pyc or a .pyo (5) size of marshalled code following (6) marshalled code --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Sun Feb 4 22:21:16 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 4 Feb 2001 16:21:16 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: <200102042102.QAA23574@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              [Guido] > If we're going to redesign the .pyc file header, I'd propose the > following: > > (1) magic number -- for file(1), never to be changed > > (2) some kind of version -- Python version, or API version, or > bytecode version > > (3) mtime of .py file > > (4) options, e.g. is this a .pyc or a .pyo > > (5) size of marshalled code following > > (6) marshalled code Note that the magic number today is different when -U (Py_UnicodeFlag) is specified. That should be migrated to #4. From esr at thyrsus.com Sun Feb 4 23:16:25 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Sun, 4 Feb 2001 17:16:25 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: 
                              
                              ; from ping@lfw.org on Sun, Feb 04, 2001 at 11:34:33AM -0800 References: <009701c08edc$ca78fd50$e46940d5@hagrid> 
                              
                              Message-ID: <20010204171625.A17315@thyrsus.com> Ka-Ping Yee 
                              
                              : > I don't understand, Eric. Why won't the existing magic number work? My error. I looked at a couple of .pyc files, but thought the two-byte magic was actual code instead. Turns out the real problem is that Linux file(1) doesn't recognize this prefix. > I tried the following and it works fine: > > 0 string \x99N\x0d Python 1.5.2 compiled bytecode data > 0 string \x87\xc6\x0d Python 2.0 compiled bytecode data This doesn't work when I append it to /etc/magic. I'm investigating. -- 
                              Eric S. Raymond Never trust a man who praises compassion while pointing a gun at you. From esr at thyrsus.com Sun Feb 4 23:24:05 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Sun, 4 Feb 2001 17:24:05 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: 
                              
                              ; from tim.one@home.com on Sun, Feb 04, 2001 at 03:19:50PM -0500 References: <20010204132003.A16454@thyrsus.com> 
                              
                              Message-ID: <20010204172405.C17315@thyrsus.com> Tim Peters 
                              
                              : > [Eric S. Raymond] > > Python's .pyc files don't have a magic prefix that the file(1) > > utility can recognize. > > Well, they *do* (#define MAGIC in import.c), but it changes from time to > time. Here's a NEWS item from 2.1a1: > > - The interpreter accepts now bytecode files on the command > line even if they do not have a .pyc or .pyo extension. On > Linux, after executing > > echo ':pyc:M::\x87\xc6\x0d\x0a::/usr/local/bin/python:' > > /proc/sys/fs/binfmt_misc/register > > any byte code file can be used as an executable (i.e. as an > argument to execve(2)). > > However, the magic number has changed twice since then (in import.c rev > 2.157 and again in rev 2.160), so the NEWS item is two changes obsolete. > The current magic number can be obtained (as a 4-bytes string) via > > import imp > MAGIC = imp.get_magic() Interesting. I presume this has to be repeated at every boot? > Note too that the method used for encoding the date runs out of bits at the > end of 2001, so the current scheme is on its last legs regardless. So this has to be fixed anyway. I'm sure we can come up with a better scheme, perhaps one modeled after the PNG header. -- 
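The "runs out of bits" remark is easy to verify with the encoding Tim gives earlier in the thread -- (year-1995)*10000 + month*100 + day, stored in the first two bytes of the magic:

    def magic_date(year, month, day):
        # First-two-bytes value of the 4-byte .pyc magic, per Tim's
        # description above.
        return (year - 1995) * 10000 + month * 100 + day

    print(magic_date(2001, 12, 31))   # 61231 -- still fits in 16 bits (max 65535)
    print(magic_date(2002, 1, 1))     # 70101 -- no longer representable in 2 bytes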
                              Eric S. Raymond Are we at last brought to such a humiliating and debasing degradation, that we cannot be trusted with arms for our own defence? Where is the difference between having our arms in our own possession and under our own direction, and having them under the management of Congress? If our defence be the *real* object of having those arms, in whose hands can they be trusted with more propriety, or equal safety to us, as in our own hands? -- Patrick Henry, speech of June 9 1788 From fredrik at effbot.org Sun Feb 4 23:34:07 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Sun, 4 Feb 2001 23:34:07 +0100 Subject: [Python-Dev] Identifying magic prefix on Python files? References: 
                              
                              Message-ID: <011b01c08efa$9705ecd0$e46940d5@hagrid> tim wrote: > > Would anyone object if I fixed this? > > Undoubtedly, but not me 
                              
. Mucking with the .pyc prefix is always > contentious. Breaking people's code just for fun seems to be a new trend here. That's bad. From esr at thyrsus.com Sun Feb 4 23:35:59 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Sun, 4 Feb 2001 17:35:59 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: <200102042102.QAA23574@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Sun, Feb 04, 2001 at 04:02:22PM -0500 References: <20010204132003.A16454@thyrsus.com> <200102042102.QAA23574@cj20424-a.reston1.va.home.com> Message-ID: <20010204173559.D17315@thyrsus.com> Guido van Rossum 
                              
                              : > I don't understand. The .pyc file has a magic number. Why is this > incompatible with file(1)? It isn't. I failed to spot the fact that this is file(1)'s problem, not Python's; my apologies. Nevertheless, according to Tim Peters this is a good time for the issue to come up, because the present method is going to break after year-end. We might as well redesign it now. > If we're going to redesign the .pyc file header, I'd propose the > following: > > (1) magic number -- for file(1), never to be changed > > (2) some kind of version -- Python version, or API version, or > bytecode version > > (3) mtime of .py file > > (4) options, e.g. is this a .pyc or a .pyo > > (5) size of marshalled code following > > (6) marshalled code I agree with these desiderata. Tim has already pointed out that #4 needs to include a Unicode bit. What I'd like to throw in the pot is the cleverest file signature design I've ever seen -- PNG's. Here's a quote from the PNG spec: ---------------------------------------------------------------------------- The first eight bytes of a PNG file always contain the following values: (decimal) 137 80 78 71 13 10 26 10 (hexadecimal) 89 50 4e 47 0d 0a 1a 0a (ASCII C notation) \211 P N G \r \n \032 \n This signature both identifies the file as a PNG file and provides for immediate detection of common file-transfer problems. The first two bytes distinguish PNG files on systems that expect the first two bytes to identify the file type uniquely. The first byte is chosen as a non-ASCII value to reduce the probability that a text file may be misrecognized as a PNG file; also, it catches bad file transfers that clear bit 7. Bytes two through four name the format. The CR-LF sequence catches bad file transfers that alter newline sequences. The control-Z character stops file display under MS-DOS. The final line feed checks for the inverse of the CR-LF translation problem. A decoder may further verify that the next eight bytes contain an IHDR chunk header with the correct chunk length; this will catch bad transfers that drop or alter null (zero) bytes. ---------------------------------------------------------------------------- I think we ought to model Python's fixed magic-number part on this prefix. -- 
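To show what adopting the PNG trick would buy, here is a sketch of the kind of check a loader (or a file(1) stand-in) could do against an 8-byte PNG-style signature. The byte values below are PNG's own, straight from the spec quoted above; any Python-specific signature would use different bytes.

    PNG_SIGNATURE = (137, 80, 78, 71, 13, 10, 26, 10)

    def has_signature(path, signature=PNG_SIGNATURE):
        # A bit-7-stripping transfer, a CR/LF translation or a truncation
        # all disturb these bytes -- which is the point of the design.
        f = open(path, 'rb')
        head = f.read(len(signature))
        f.close()
        if len(head) != len(signature):
            return 0
        for i in range(len(signature)):
            if ord(head[i:i+1]) != signature[i]:
                return 0
        return 1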
                              Eric S. Raymond No matter how one approaches the figures, one is forced to the rather startling conclusion that the use of firearms in crime was very much less when there were no controls of any sort and when anyone, convicted criminal or lunatic, could buy any type of firearm without restriction. Half a century of strict controls on pistols has ended, perversely, with a far greater use of this weapon in crime than ever before. -- Colin Greenwood, in the study "Firearms Control", 1972 From tim.one at home.com Mon Feb 5 00:44:58 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 4 Feb 2001 18:44:58 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: <011b01c08efa$9705ecd0$e46940d5@hagrid> Message-ID: 
                              
                              [/F] > Breaking people's code just for fun seems to be a new > trend here. That's bad. The details of the current scheme stop working at the end of the year regardless. Would rather change it rationally than in a last-second panic when the first change is needed after December 31st. If you look at the CVS history of import.c, you'll find that the format-- and size --of .pyc magic has already changed several times over the years. There's always "a reason", and there's another one now. The current scheme was designed when Guido thought 2002 was two years after Python's most likely death 
                              
                              . From greg at cosc.canterbury.ac.nz Mon Feb 5 00:49:33 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Mon, 05 Feb 2001 12:49:33 +1300 (NZDT) Subject: [Python-Dev] creating __all__ in extension modules In-Reply-To: <14972.36269.845348.280744@beluga.mojam.com> Message-ID: <200102042349.MAA03822@s454.cosc.canterbury.ac.nz> Skip Montanaro 
                              
                              : > /* initialize module's __all__ list */ > _PyModule_CreateAllList(d); How about constructing __all__ automatically the first time it's referenced if there isn't one already? Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg at cosc.canterbury.ac.nz +--------------------------------------+ From tim.one at home.com Mon Feb 5 01:07:39 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 4 Feb 2001 19:07:39 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: <20010204173559.D17315@thyrsus.com> Message-ID: 
                              
                              [Eric S. Raymond] > ... > What I'd like to throw in the pot is the cleverest file signature > design I've ever seen -- PNG's. Here's a quote from the PNG spec: > > ------------------------------------------------------------------ > The first eight bytes of a PNG file always contain the following > values: > > (decimal) 137 80 78 71 13 10 26 10 > (hexadecimal) 89 50 4e 47 0d 0a 1a 0a > (ASCII C notation) \211 P N G \r \n \032 \n Cool! I vote we take it exactly. I don't even know what PNG is, so it's doubtful my Windows box will be confused by decorating Python files the same way 
                              
                              . > The first two bytes distinguish PNG files on systems that expect > the first two bytes to identify the file type uniquely. > The first byte is chosen as a non-ASCII value to reduce the > probability that a text file may be misrecognized as a PNG file; also, > it catches bad file transfers that clear bit 7. OK, I suggest (decimal) 143 for Python's first byte. That's a "control code" in Latin-1, and (unlike PNG's 137) not even Windows assigns it to a character in their Latin-1 superset (yet). (decimal) 143 80 89 84 13 10 26 10 (hexadecimal) 8f 50 59 54 0d 0a 1a 0a (ASCII C notation) \217 P Y T \r \n \032 \n From fredrik at effbot.org Mon Feb 5 01:12:09 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Mon, 5 Feb 2001 01:12:09 +0100 Subject: [Python-Dev] Identifying magic prefix on Python files? References: 
                              
                              Message-ID: <01ab01c08f08$49f83ed0$e46940d5@hagrid> tim wrote: > [/F] > > Breaking people's code just for fun seems to be a new > > trend here. That's bad. > > The details of the current scheme stop working at the end of the year > regardless. might so be, but it's perfectly possible to change this in a fully backwards compatible way: -- stick to a 4-byte bytecode version magic, but change the algoritm to make it work after 2001. if necessary, use 3 or 4 bytes to hold the "serial number". if the bytecode version is the same as imp.get_magic() and the file isn't damaged, it should be safe to pass it to marshal.load. if marshal returns a code object, it should be safe (relatively speaking) to execute it. -- define the 4-byte timestamp to be an unsigned int, so we can keep going for another 100 years or so. -- introduce a new type code (e.g. 'P') for marshal. this is followed by an extended magic field, followed by the code using today's format (same as for type code 'c'). let the extended magic field contain: -- a python identifier (e.g. "YTHON") -- a newline/eof mangling detector (e.g. "\r\n") -- sys.hexversion (4 bytes) -- a flag field (4 bytes) -- maybe the size of the marshalled block (4 bytes) -- maybe etc Cheers /F From guido at digicool.com Mon Feb 5 01:12:44 2001 From: guido at digicool.com (Guido van Rossum) Date: Sun, 04 Feb 2001 19:12:44 -0500 Subject: [Python-Dev] import Tkinter fails Message-ID: <200102050012.TAA27410@cj20424-a.reston1.va.home.com> On Unix, either when running from the build directory, or when running the installed binary, "import Tkinter" fails. It seems that Lib/lib-tk is (once again) dropped from the default path. I'm not sure where to point a finger, but I'm kind of hoping that this would be easy for Andrew or Neil to fix... (Also, if this has alrady been addressed here, my apologies. I still have about 500 emails to dig through that arrived during my brief stay in New York...) --Guido van Rossum (home page: http://www.python.org/~guido/) From esr at thyrsus.com Mon Feb 5 01:34:41 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Sun, 4 Feb 2001 19:34:41 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: 
                              
                              ; from tim.one@home.com on Sun, Feb 04, 2001 at 07:07:39PM -0500 References: <20010204173559.D17315@thyrsus.com> 
                              
                              Message-ID: <20010204193441.A19283@thyrsus.com> Tim Peters 
                              
                              : > > The first eight bytes of a PNG file always contain the following > > values: > > > > (decimal) 137 80 78 71 13 10 26 10 > > (hexadecimal) 89 50 4e 47 0d 0a 1a 0a > > (ASCII C notation) \211 P N G \r \n \032 \n > > Cool! I vote we take it exactly. I don't even know what PNG is, so it's > doubtful my Windows box will be confused by decorating Python files the same > way 
                              
                              . > > > The first two bytes distinguish PNG files on systems that expect > > the first two bytes to identify the file type uniquely. > > The first byte is chosen as a non-ASCII value to reduce the > > probability that a text file may be misrecognized as a PNG file; also, > > it catches bad file transfers that clear bit 7. > > OK, I suggest (decimal) 143 for Python's first byte. That's a "control > code" in Latin-1, and (unlike PNG's 137) not even Windows assigns it to a > character in their Latin-1 superset (yet). > > (decimal) 143 80 89 84 13 10 26 10 > (hexadecimal) 8f 50 59 54 0d 0a 1a 0a > (ASCII C notation) \217 P Y T \r \n \032 \n \217 is good. It doesn't occur in /usr/share/magic at all, which is a good sign. Why just PYT, though? Why not spell out "Python"? That would let us detect case-smashing, too. -- 
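A sketch of what spelling the name out would buy (the 11-byte prefix below is hypothetical; nothing in the thread has settled on it): a transfer that uppercases or lowercases text leaves a recognizable fingerprint rather than an arbitrary mismatch.

    # Hypothetical full-name prefix; purely illustrative.
    FULL_PREFIX = '\217Python\r\n\032\n'

    def check(prefix):
        if prefix == FULL_PREFIX:
            return 'ok'
        if prefix.upper() == FULL_PREFIX.upper():
            return 'case was smashed in transit'
        return 'unrecognized'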
                              Eric S. Raymond False is the idea of utility that sacrifices a thousand real advantages for one imaginary or trifling inconvenience; that would take fire from men because it burns, and water because one may drown in it; that has no remedy for evils except destruction. The laws that forbid the carrying of arms are laws of such a nature. They disarm only those who are neither inclined nor determined to commit crimes. -- Cesare Beccaria, as quoted by Thomas Jefferson's Commonplace book From tim.one at home.com Mon Feb 5 02:52:31 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 4 Feb 2001 20:52:31 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: <20010204193441.A19283@thyrsus.com> Message-ID: 
                              
                              [Eric S. Raymond] > \217 is good. It doesn't occur in /usr/share/magic at all, which > is a good sign. See? You should have more Windows geeks helping out with Linux: none of our ideas have anything in common with yours 
                              
                              . > Why just PYT, though? Why not spell out "Python"? Just because 8 seemed geekier than 11. Natural alignment for a struct, etc. > That would let us detect case-smashing, too. Hmm. "Pyt" would too! If you're going to PEP (or virtual PEP) this, I won't raise a stink either way. From ping at lfw.org Mon Feb 5 03:21:40 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Sun, 4 Feb 2001 18:21:40 -0800 (PST) Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: 
                              
                              Message-ID: 
                              
                              On Sun, 4 Feb 2001, Tim Peters wrote: > OK, I suggest (decimal) 143 for Python's first byte. That's a "control > code" in Latin-1, and (unlike PNG's 137) not even Windows assigns it to a > character in their Latin-1 superset (yet). > > (decimal) 143 80 89 84 13 10 26 10 > (hexadecimal) 8f 50 59 54 0d 0a 1a 0a > (ASCII C notation) \217 P Y T \r \n \032 \n Pyt? What's a "pyt"? How about something we can all recognize: (decimal) 143 83 112 97 109 10 13 10 (hexadecimal) 8f 53 70 61 6d 0a 0d 0a (ASCII C notation) \217 S p a m \n \r \n ...to be followed by: date of last incompatible VM change (4 bytes: year, year, month, day) Python version, as in sys.hexversion (4 bytes) mtime of source .py file (4 bytes) reserved for option flags and future expansion (8 bytes) size of marshalled code data (4 bytes) marshalled code That's a nice, geeky 32 bytes of header info. (The "Spam" part is not so serious; the rest is serious. But i do think "Spam" is more fun that "Pyt"! :) And the Ctrl-Z char is pointless; no other binary format does this or needs it.) Hmm. Questions: - Should we include the path to the original .py file? (so Python can automatically recompile an out-of-date file) - How about the name of the module? (so that renaming the file doesn't kill it; possible answer to the case-sensitivity issue?) - If the purpose of the code-size field is to protect against incomplete file transfers, would a hash be worth considering here? -- ?!ng "Old code doesn't die -- it just smells that way." -- Bill Frantz From ping at lfw.org Mon Feb 5 03:34:29 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Sun, 4 Feb 2001 18:34:29 -0800 (PST) Subject: [Python-Dev] Suggested .pyc header format In-Reply-To: 
                              
                              Message-ID: 
                              
Here's a quick revision, to fix some alignment boundaries. I think this ordering might make more sense.

    bytes   contents
    0-7     magic string '\x8fSpam\n\r\n'
    8-11    Python version (sys.hexversion)
    12-15   date of last incompatible VM change (YYMD, year msb-first)
    16-23   reserved (flags, etc.)
    24-27   mtime of source .py file (long int, msb-first)
    28-31   size of marshalled code (long int, msb-first)
    32-     marshalled code

In a dump, this would look like:

    ---------magic--------- --version-- --VM-date--
    8f 53 70 61 6d 0a 0d 0a 02 01 00 a2 07 d1 02 04  .Spam......".Q..
    00 00 00 00 00 00 00 00 3a 7d ae ba 00 00 73 a8  ........:}.:..s(
    ---------flags--------- ---mtime--- ---size----

-- ?!ng "Old code doesn't die -- it just smells that way." -- Bill Frantz From tim.one at home.com Mon Feb 5 04:41:42 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 4 Feb 2001 22:41:42 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To:
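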
                              
                              Message-ID: 
                              
                              [Ka-Ping Yee, with more magical ideas] This is contentious every time it comes up because of "backward compatibility". The contentious part is that no two people come into it with the same idea of what "backward compatible" means, exactly, and it usually drags on for days until people realize that. In the meantime, everyone thinks everyone else is an idiot 
                              
                              . So far as the docs go, imp.get_magic() returns "a string", and that's all it says. By that defn, it would be darned hard to think of any scheme that isn't backward compatible. OTOH, PyImport_GetMagicNumber() returns "a long", so there's good reason to preserve that version-checking must not rely on more than 4 bytes of info. Then you have /F's post, which purports to give a "fully backward compatible" scheme, despite changing what probably appears 
                              
to be almost everything. It takes a long time to reverse-engineer what the crucial invariants are for each person, based on what they propose and what they complain about in other proposals. I don't have time for that now, so will leave it to someone else. It would help if people could spell out directly which invariants they do and don't care about (e.g., you can *infer* that /F cares about

    exactly 4 bytes magic number (but doesn't care about content)
    then
    exactly 4 bytes file timestamp
    then
    a blob that marshal believes is a single object
    then
    that's it

but doesn't care that, e.g., checking the 4-byte magic number alone is sufficient to catch binary files opened in text mode (but somebody else will care about that!)). Since virtually none of this has been formalized via an API, virtually all code outside the distribution that deals with this stuff is cheating. Small wonder it's contentious ... From esr at thyrsus.com Mon Feb 5 04:55:20 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Sun, 4 Feb 2001 22:55:20 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To:
                              
                              ; from ping@lfw.org on Sun, Feb 04, 2001 at 06:21:40PM -0800 References: 
                              
                              
                              Message-ID: <20010204225520.A20513@thyrsus.com> Ka-Ping Yee 
                              
                              : > And the Ctrl-Z char > is pointless; no other binary format does this or needs it.) I've actually seen circumstances under which this is useful. Besides, you want a character separating the \n from the \r\n, otherwise ghod knows what interactions you'll get from some of the cockamamie line-terminator translation schemes out there. Might as well be Ctl-Z as anything else. I'll leave the other issues to people with more experience and investment in them. -- 
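For concreteness, a sketch of how a 32-byte header along the lines Ping posted above could be packed and unpacked with the struct module; the field order and the 'Spam' magic come from his message, while the function name and error handling are only illustrative.

    import struct

    MAGIC = '\217Spam\n\r\n'        # Ping's suggested 8-byte prefix
    FORMAT = '>8sll8sll'            # magic, version, VM-date, reserved, mtime, size

    def read_pyc_header(f):
        header = f.read(struct.calcsize(FORMAT))   # 32 bytes, msb-first fields
        magic, version, vm_date, reserved, mtime, size = struct.unpack(FORMAT, header)
        if magic != MAGIC:
            raise ValueError('bad magic: %s' % repr(magic))
        return version, vm_date, mtime, size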
                              Eric S. Raymond When only cops have guns, it's called a "police state". -- Claire Wolfe, "101 Things To Do Until The Revolution" From guido at digicool.com Mon Feb 5 05:10:20 2001 From: guido at digicool.com (Guido van Rossum) Date: Sun, 04 Feb 2001 23:10:20 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: Your message of "Sun, 04 Feb 2001 22:41:42 EST." 
                              
                              References: 
                              
                              Message-ID: <200102050410.XAA28600@cj20424-a.reston1.va.home.com> > exactly 4 bytes magic number (but doesn't care about content) > then > exactly 4 bytes file timestamp > then > a blob that marshal believes is a single object > then > that's it That's also what I would call b/w compatible here. It's the obvious baseline. (With the addition that the timestamp uses little-endian byte order -- like marshal.) > but doesn't care that, e.g., checking the 4-byte magic number alone is > sufficent to catch binary files opened in text mode (but somebody else will > care about that!)). Hm, that's not the reason the magic number ends in \r\n. The reason was that on the Mac, long ago, the MPW compiler actually swapped the meaning of \r and \n! So that '\r' in C meant '\012' and '\n' meant '\015'. This was intended to make C programs that were parsing text files looking for \n work on Mac text files which use \r. (Why does the Mac use \r? AFAICT, for the same reason that DOS chose \ instead of / -- to be different from Unix, possibly to avoid patent infringement. Silly.) Later compilers on the Mac weren't so stupid, and now the fact that this lets you discover text translation errors is just a pleasant side-effect. Personally, I don't care about this property any more. > Since virtually none of this has been formalized via an API, virtually all > code outside the distribution that deals with this stuff is cheating. Small > wonder it's contentious ... The thing is, it's very useful to have tools ones that manipulate .pyc files, and while it's not officially documented or standardized, the presence of the C API to get the magic number at least suggests that the file format can change the magic number but not otherwise. The py_compile.py standard library module acts as de-facto documentation. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Mon Feb 5 05:28:30 2001 From: guido at digicool.com (Guido van Rossum) Date: Sun, 04 Feb 2001 23:28:30 -0500 Subject: [Python-Dev] Waiting method for file objects In-Reply-To: Your message of "Thu, 25 Jan 2001 11:19:36 EST." <20010125111936.A23512@thyrsus.com> References: <20010125111936.A23512@thyrsus.com> Message-ID: <200102050428.XAA28690@cj20424-a.reston1.va.home.com> > I have been researching the question of how to ask a file descriptor how much > data it has waiting for the next sequential read, with a view to discovering > what cross-platform behavior we could count on for a hypothetical `waiting' > method in Python's built-in file class. I have a strong -1 on this. It violates the abstraction of Python file objects as a thin layer on top of C's stdio. I don't want to add any new features that can only be implemented by digging under the hood of stdio. There is no standard way to figure out how much data is buffered inside the FILE struct, so doing any kind of system call on the file descriptor is insufficient unless the file is opened in unbuffered mode -- not an attractive option in most applications. Apart from the stdio buffering issue, apps that really want to do this can already look under the hood, thereby clearly indicating that they make more assumptions about the platform than portable Python. For static files, an app can call os.fstat() itself. But I think it's a weakness of the app if it needs to resort to this -- Eric's example that motivated this desire in him didn't convince me at all. 
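A sketch of the do-it-yourself check for plain files mentioned above (the helper name is invented, and, as noted, it knows nothing about data already buffered inside stdio):

    import os, stat

    def bytes_left(f):
        # Regular files only; says nothing about pipes, sockets, or data
        # already sitting in the stdio buffer.
        return os.fstat(f.fileno())[stat.ST_SIZE] - f.tell()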
For sockets, and on Unix for pipes and FIFOs, an app can use the select module to find out whether data can be read right away. It doesn't tell how much data, but that's unnecessary -- at least for sockets (where this is a very common request), the recv() call will return short data rather than block for more if at least one byte can be read. (For pipes and FIFOs, you can use fstat() or FIONREAD if you really want -- but why bother?) --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Mon Feb 5 05:41:20 2001 From: guido at digicool.com (Guido van Rossum) Date: Sun, 04 Feb 2001 23:41:20 -0500 Subject: [Python-Dev] Re: 2.1a2 release issues; mail.python.org still down In-Reply-To: Your message of "Thu, 01 Feb 2001 19:15:24 +0100." <3A79A7BC.58997544@lemburg.com> References: <14969.31398.706875.540775@w221.z064000254.bwi-md.dsl.cnc.net> <3A798F14.D389A4A9@lemburg.com> <14969.38945.771075.55993@cj42289-a.reston1.va.home.com> <3A79A058.772239C2@lemburg.com> <14969.41344.176815.821673@cj42289-a.reston1.va.home.com> <3A79A7BC.58997544@lemburg.com> Message-ID: <200102050441.XAA28783@cj20424-a.reston1.va.home.com> > The warnings are at least as annoying as recompiling the > extensions, even more since each and every imported extension > will moan about the version difference ;-) Hey, here's a suggestion for a solution then: change the warning-issuing code to use the new PyErr_Warn() function! Patch gratefully accepted on SourceForge. Now, note that using "python -Werror" the user can cause these warnings to be turned into errors, and since few modules test for error returns from Py_InitModule(), this will likely cause core dumps. However, note that there are other reasons why Py_InitModule() can return NULL, so it really behooves us to test for an error return anyway! --Guido van Rossum (home page: http://www.python.org/~guido/) From skip at mojam.com Mon Feb 5 05:43:01 2001 From: skip at mojam.com (Skip Montanaro) Date: Sun, 4 Feb 2001 22:43:01 -0600 (CST) Subject: [Python-Dev] import Tkinter fails In-Reply-To: <200102050012.TAA27410@cj20424-a.reston1.va.home.com> References: <200102050012.TAA27410@cj20424-a.reston1.va.home.com> Message-ID: <14974.12117.848610.822769@beluga.mojam.com> Guido> I still have about 500 emails to dig through that arrived during Guido> my brief stay in New York... Haven't you learned yet? 
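The select-plus-recv pattern described above for sockets, as a minimal sketch; the function name and buffer size are arbitrary:

    import select

    def poll_read(sock, nbytes=8192):
        # Non-blocking check: is there anything to read on this socket right now?
        r, w, x = select.select([sock], [], [], 0)
        if r:
            return sock.recv(nbytes)    # returns short data rather than blocking
        return None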
                              
                              Skip From guido at digicool.com Mon Feb 5 05:47:26 2001 From: guido at digicool.com (Guido van Rossum) Date: Sun, 04 Feb 2001 23:47:26 -0500 Subject: [Python-Dev] Type/class differences (Re: Sets: elt in dict, lst.include) In-Reply-To: Your message of "Fri, 02 Feb 2001 11:45:02 +1300." <200102012245.LAA03402@s454.cosc.canterbury.ac.nz> References: <200102012245.LAA03402@s454.cosc.canterbury.ac.nz> Message-ID: <200102050447.XAA28915@cj20424-a.reston1.va.home.com> > > The old type/class split: list is a type, and types spell their "method > > tables" in ways that have little in common with how classes do it. > > Maybe as a first step towards type/class unification one > day, we could add __xxx__ attributes to all the builtin > types, and start to think of the method table as the > definitive source of all methods, with the tp_xxx slots > being a sort of cache for the most commonly used ones. Yes, I've often thought that we should be able to heal the split for 95% by using a few well-aimed tricks like this. Later... --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Mon Feb 5 05:58:28 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 4 Feb 2001 23:58:28 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: <200102050410.XAA28600@cj20424-a.reston1.va.home.com> Message-ID: 
                              
[Guido] > Hm, that's not the reason the magic number ends in \r\n. > ... [Mac silliness, for a change] ... > Later compilers on the Mac weren't so stupid, and now the fact that > this lets you discover text translation errors is just a pleasant > side-effect. > > Personally, I don't care about this property any more. Don't know about Macs (although I believe the Metrowerks libc can still be *configured* to swap \r and \n there), but it caught a bug in Python in the 2.0 release cycle (where Python was opening .pyc files in text mode by mistake, but only on Windows). Well, actually, it didn't catch anything, it caused import from .pyc to fail silently. Having *some* specific gross thing fail every time is worth something. But the \r\n thingie can be pushed into the extended header instead. Here's an idea for "the new" magic number, assuming it must remain 4 bytes:

    byte 0: \217   will never change
    byte 1: 'P'    will never change
    byte 2: high-order byte of version number
    byte 3: low-order byte of version number

"Version number" is an unsigned 16-bit int, starting at 0 and incremented by 1 from time to time. 64K changes may even be enough to get us to Python 3000
                              
                              . A separate text file should record the history of version number changes, associating each with the date, release and reason for change (the CVS log for import.c used to be good about recording the reason, but not anymore). Then we can keep a 4-byte magic number, Eric can have his invariant two-byte tag at the start, and it's still possible to compare "version numbers" easily for more than just equality (read the magic number as a "network standard" unsigned int, and it's a total ordering with earlier versions comparing less than later ones). The other nifty PNG sanity-checking tricks can also move into the extended header. all-obvious-to-the-most-casual-observer-ly y'rs - tim From guido at digicool.com Mon Feb 5 06:04:56 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 05 Feb 2001 00:04:56 -0500 Subject: [Python-Dev] creating __all__ in extension modules In-Reply-To: Your message of "Sat, 03 Feb 2001 17:03:20 CST." <14972.36408.800070.656541@beluga.mojam.com> References: <14970.60750.570192.452062@beluga.mojam.com> 
                              
                              <14972.33928.540016.339352@cj42289-a.reston1.va.home.com> <14972.36408.800070.656541@beluga.mojam.com> Message-ID: <200102050504.AAA29344@cj20424-a.reston1.va.home.com> > Fred> I don't think adding __all__ to C modules makes sense. If you > Fred> want the equivalent for a module that doesn't have an __all__, you > Fred> can compute it easily enough. Adding it when it isn't useful is a > Fred> maintenance problem that can be avoided easily enough. > > I thought I answered this question already when Fredrik asked it. In os.py, > to build its __all__ list based upon the myriad different sets of symbols it > might have after it's fancy footwork importing from various os-dependent > modules, I think it's easiest to rely on those modules telling os what it > should export. So use dir(), or dir(posix), to find out what you've got. I'm strongly -1 to adding __all__ to extensions. Typically, *all* symbols exported by an extension are to be imported. We should never rely on __all__ existing -- we should just test for its existence and have a fallback, just like from...import * does. --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Mon Feb 5 06:12:44 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 5 Feb 2001 00:12:44 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: 
                              
                              Message-ID: 
                              
                              [Ping] > - If the purpose of the code-size field is to protect against > incomplete file transfers, would a hash be worth > considering here? I think it's more to make it easy to suck the code into a string in one gulp. Else the code-size field would have come after the code <0.9 wink>. From fredrik at effbot.org Mon Feb 5 07:35:02 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Mon, 5 Feb 2001 07:35:02 +0100 Subject: [Python-Dev] Identifying magic prefix on Python files? References: 
                              
                              Message-ID: <009f01c08f3d$c7034070$e46940d5@hagrid> tim wrote: > Then you have /F's post, which purports to give a "fully backward > compatible" scheme, despite changing what probably appears 
                              
to be > almost everything. unlike earlier proposals, it doesn't break py_compile:

    MAGIC = imp.get_magic()
    fc = open(cfile, 'wb')
    fc.write('\0\0\0\0')
    wr_long(fc, timestamp)
    marshal.dump(codeobject, fc)
    fc.flush()
    fc.seek(0, 0)
    fc.write(MAGIC)
    fc.close()

and it doesn't break imputil:

    f = open(file, 'rb')
    if f.read(4) == imp.get_magic():
        t = struct.unpack('
                              
                              Message-ID: 
                              
                              [/F] > unlike earlier proposals, it doesn't break py_compile: > ... > and it doesn't break imputil: > ... I don't care about those, not because they're unimportant, but because they're in the distribution so we're responsible for shipping versions that work. They're "inside the box", where nothing is cheating. > and it doesn't break any user code that does similar things > (squeeze, pythonworks, and a dozen other tools I've written; > applications using local copies of imputils, etc) *Those* I care about. But it's impossible to know all the assumptions they make, given that almost nothing is guaranteed by the docs (the only meaningful definition I can think of for your "similar" is "other code that won't break"!). For all I know, ActivePython will die unless they can divide the magic number by 10000 then add 1995 to get the year <0.7 wink/0.3 frown>. Anyway, I'm on board with that, and already proposed a new 4-byte "magic number" format that should leave you and Eric happy. Me too. Probably not Guido. Barry is ignoring this. Jeremy wishes he had the time. Fred hopes we don't change the docs. Eric just wants to see progress. Ping is thinking of new syntax for a .pyc iterator 
                              
                              . From pf at artcom-gmbh.de Mon Feb 5 11:30:20 2001 From: pf at artcom-gmbh.de (Peter Funk) Date: Mon, 5 Feb 2001 11:30:20 +0100 (MET) Subject: "backward compatibility" defined (was Re: [Python-Dev] Identifying magic prefix on Python files?) In-Reply-To: 
                              
                              from Tim Peters at "Feb 4, 2001 10:41:42 pm" Message-ID: 
                              
                              Hi, Tim Peters wrote: > This is contentious every time it comes up because of "backward > compatibility". The contentious part is that no two people come into it > with the same idea of what "backward compatible" means, exactly, and it > usually drags on for days until people realize that. In the meantime, > everyone thinks everyone else is an idiot 
                              
                              . Thinking as a commercial software vendor: "Backward compatibility" means to me, that I can choose a stable version of Python (say 1.5.2, since this is what comes with the Linux Distros SuSE 6.2, 6.3, 6.4 and 7.0 or RedHat 6.2, 7.0 is still in use on 98% of our customer machines), generate .pyc-Files with this and than future stable versions of Python will be able to import and run these files, if I payed proper attention to possible incompatibilities like for example '[].append((one, two))'. Otherwise the vendor company has to fall back to one of the following "solutions": 1. provide a bunch of different versions of bytecode-Archives for each version of Python (a nightmare). or 2. has to distribute the Python sources of its application (which is impossible due to the companies policy) or 3. has to distribute an own version of Python (which is a similar nightmare due to incompatible shared library versions (Tcl/Tk 8.0.5, 8.1, ... 8.3) and the risk to break other Python and Tcl/Tk apps installed by the Linux Distro). or 4. has to port the stuff to another language platform (say Java?) not suffering from such binary incompatibility problems. (do u believe this?) So in the closed-source-world bytecode compatibility is a major issue. Regards, Peter -- Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260 office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen) From mal at lemburg.com Mon Feb 5 11:47:47 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Mon, 05 Feb 2001 11:47:47 +0100 Subject: [Python-Dev] insertdict slower? References: 
                              
                              Message-ID: <3A7E84D3.4D111F0F@lemburg.com> Tim Peters wrote: > > [MAL] > > Looks like Jeremy's machine has a problem or this is the result > > of different compiler optimizations. > > Are you using an AMD chip? They have different cache behavior than the > Pentium I expect Jeremy is using. Different flavors of Pentium also have > different cache behavior. If the slowdown his box reports in insertdict is > real (which I don't know), cache effects are the most likely cause (given > that the code has not changed at all). Yes, I ran the tests on an AMK K6 233. Don't know about the internal cache size or their specific cache strategy, but since much of today's performance is achieved via cache strategies, this would be a possible explanation. > > On my machine using the same compiler and optimization settings > > I get the following figure for DictCreation (2.1a1 vs. 2.0): > > > > DictCreation: 1869.35 ms 12.46 us +8.77% > > > > That's below noise level (+/-10%). > > Jeremy saw "about 15%". So maybe that's just *loud* noise 
                              
                              . > > noise-should-be-measured-in-decibels-ly y'rs - tim Hmm, that would introduce a logarithmic scale to these benchmarks ... perhaps not a bad idea :-) BTW, I've added a special test for string key and float keys to the benchmark. The results are surprising: string keys are 100% faster than float keys. Part of this is certainly due to the string key optimizations. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mal at lemburg.com Mon Feb 5 12:01:50 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Mon, 05 Feb 2001 12:01:50 +0100 Subject: [Python-Dev] Adding opt-in pymalloc + alpha3 References: <4C99842BC5F6D411A6A000805FBBB199051F5D@ge0057exch01.micro.lucent.com> Message-ID: <3A7E881E.64D64F08@lemburg.com> Vladimir Marangozov wrote: > > [Tim] > > > > Help us out a little more, briefly. The last time you > > mentioned obmalloc on > > Python-Dev was: > > > > Date: Fri, 8 Sep 2000 18:23:13 +0200 (CEST) > > Subject: [Python-Dev] 2.0 Optimization & speed > > > ... > > > The only reason I've postponed my obmalloc patch is that I > > > still haven't provided an interface which allows evaluating > > > it's impact on the mem size consumption. > > > > Still a problem in your eyes? > > Not really. I think obmalloc is a win w.r.t. both space & speed. > I was aiming at evaluating precisely how much we win with the help > of the profiler, then tune the allocator even more, but this is > OS dependant anyway and most people don't dig so deep. I think > they don't need to either, but it's our job to have a good > understanding of what's going on. > > In short, you can go for it, opt-in, without fear. > > Not opt-out, though, because of custom object's thread safety. > > Thread safety is a problem. Current extensions implement custom > object constructors & destructors safely, because they use (at the > end of the macro chain, today) the system allocator which is > thread safe. Switching to a thread unsafe allocator by default is > risky because this may inject bugs in existing working extensions. > Although the core objects won't be affected by this change because > of the interpreter lock protection, we have no provisions so far > for custom object's thread safety. Ok, everyone seems to agree that adding pymalloc to Python on an opt-in basis is a Good Thing, so let's do it ! Even though I don't think that adding opt-in code matters much w/r to stability of the rest of the code, I still think that we ought to insert a third alpha release to hammer a bit more on weak refs and nested scopes. These two additions are major new features in Python 2.1 which were added very late in the release cycle and haven't had much testing in the field. Thoughts ? -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mal at lemburg.com Mon Feb 5 12:08:41 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Mon, 05 Feb 2001 12:08:41 +0100 Subject: [Python-Dev] re: Sets BOF / for in dict References: <000301c08eb5$876baf20$770a0a0a@nevex.com> Message-ID: <3A7E89B9.B90D36DF@lemburg.com> Greg Wilson wrote: > > I've spoken with Barbara Fuller (IPC9 org.); the two openings for a > BOF on sets are breakfast or lunch on Wednesday the 7th. 
I'd prefer > breakfast (less chance of me missing my flight :-); is there anyone > who's interested in attending who *can't* make that time, but *could* > make lunch? Depends on the time frame of "breakfast" ;-) > And meanwhile: > > > Ka-Ping Yee: > > - the key:value syntax suggested by Guido (i like it quite a lot) > > Greg Wilson: > Did another quick poll; feeling here is that if > > for key:value in dict: > > works, then: > > for index:value in sequence: > > would also be expected to work. If the keys to the dictionary are (for > example) 2-element tuples, then: > > for (left, right):value in dict: > > would also be expected to work, just as: > > for ((left, right), value) in dict.items(): > > now works. > > Question: would the current proposal allow NumPy arrays (just as an > example) to support both: > > for index:value in numPyArray: > > where 'index' would get tuples like '(0, 3, 2)' for a 3D array, *and* > > for (i, j, k):value in numPyArray: > > If so, then yeah, it would tidy up a fair bit of my code... Two things: 1. the proposed syntax key:value does away with the easy to parse Python block statement syntax 2. why can't we use the old 'for x,y,z in something:' syntax and instead add iterators to the objects in question ? for key, value in object.iterator(): ... this doesn't only look better, it also allows having different iterators for different tasks (e.g. to iterate over values, key, items, row in a matrix, etc.) -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mal at lemburg.com Mon Feb 5 12:15:03 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Mon, 05 Feb 2001 12:15:03 +0100 Subject: [Python-Dev] Identifying magic prefix on Python files? References: <20010204132003.A16454@thyrsus.com> <009701c08edc$ca78fd50$e46940d5@hagrid> Message-ID: <3A7E8B37.E855DF81@lemburg.com> Fredrik Lundh wrote: > > eric wrote: > > > Python's .pyc files don't have a magic prefix that the file(1) utility > > can recognize. Would anyone object if I fixed this? A trivial pair of > > hacks to the compiler and interpreter would do it. Backward compatibility > > would be easily arranged. > > > > Embedding the Python version number in the prefix might enable some > > useful behavior down the road. > > Python 1.5.2 (#0, May 9 2000, 14:04:03) > >>> import imp > >>> imp.get_magic() > '\231N\015\012' > > Python 2.0 (#8, Jan 29 2001, 22:28:01) > >>> import imp > >>> imp.get_magic() > '\207\306\015\012' > >>> open("some_module.pyc", "rb").read(4) > '\207\306\015\012' > > Python 2.1a1 (#9, Jan 19 2001, 08:41:32) > >>> import imp > >>> imp.get_magic() > '\xdc\xea\r\n' > > if you want to change the magic, there are a couple > things to consider: > > 1) the header must consist of imp.get_magic() plus > a 4-byte timestamp, followed by a marshalled code > object > > 2) the magic should be four bytes. > > 3) the magic must be different for different bytecode > versions > > 4) the magic shouldn't survive text/binary conversions > on platforms which treat text files and binary files diff- > erently. Side note: the magic can also change due to command line options being used, e.g. -U will bump the magic number by 1. 
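A sketch of the layout point 1 above describes -- the current magic, a 4-byte timestamp (little-endian, like marshal, as Guido noted earlier in the thread), then the marshalled code object; the function name is illustrative:

    import imp, marshal, struct

    def load_compiled_header(path):
        f = open(path, 'rb')
        if f.read(4) != imp.get_magic():
            raise ImportError('compiled by a different Python (or not a .pyc)')
        mtime = struct.unpack('<l', f.read(4))[0]   # little-endian, like marshal
        code = marshal.load(f)
        return mtime, code

Anything beyond that -- an extended magic field, flags, a size field -- has to go either before or after the marshal blob, which is exactly where the proposals in this thread differ.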
-- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From skip at mojam.com Mon Feb 5 13:34:14 2001 From: skip at mojam.com (Skip Montanaro) Date: Mon, 5 Feb 2001 06:34:14 -0600 (CST) Subject: [Python-Dev] ANNOUNCE: Python for AS/400. (fwd) Message-ID: <14974.40390.663230.906178@beluga.mojam.com> FYI. Note that the author's web page for the project identifies some ASCII/EBCDIC issues. Don't know if that would be of interest to this group or not... Skip -------------- next part -------------- An embedded message was scrubbed... From: Per Gummedal 
                              
                              Subject: ANNOUNCE: Python for AS/400. Date: Mon, 5 Feb 2001 09:01:00 +0100 Size: 1206 URL: 
                              
                              From tismer at tismer.com Mon Feb 5 15:13:18 2001 From: tismer at tismer.com (Christian Tismer) Date: Mon, 05 Feb 2001 15:13:18 +0100 Subject: [Python-Dev] The 2nd Korea Python Users Seminar References: <200101311626.LAA01799@cj20424-a.reston1.va.home.com> Message-ID: <3A7EB4FE.2791A6D1@tismer.com> Guido van Rossum wrote: > > Wow...! > > Way to go, Christian! I did so. Now I'm back, and I have to say it was phantastic. People in Korea are very nice, and the Python User Group consists of very enthusiastic Pythoneers. There were over 700 participants for the seminar, and they didn't have enough chairs for everybody. Changjune did a very well-done presentation for beginners. I was merged into it for special details, future plans, and the Q&A part. It was a lesson for me, to see how to present difficult stuff. Korea is a very prolific ground for Python. Only few outside of Korea know about this. I suggested to open up the group for non-local actions, and they are planning to add an international HTML tree to their website. Professor Lee just got the first print of "Learning Python" which he translated into Korean. We promised each other to exchange our translation. And so on, lots of new friendships. I will come back in autumn for the next seminar. Today I started a Hangul course, after Chanjune tought be the principles of the phonetic syllables. Nice language! ciao - chris.or.kr -- Christian Tismer :^) 
                              
                              Mission Impossible 5oftware : Have a break! Take a ride on Python's Kaunstr. 26 : *Starship* http://starship.python.net 14163 Berlin : PGP key -> http://wwwkeys.pgp.net PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF where do you want to jump today? http://www.stackless.com From alex_c at MIT.EDU Mon Feb 5 15:30:33 2001 From: alex_c at MIT.EDU (Alex Coventry) Date: Mon, 5 Feb 2001 09:30:33 -0500 Subject: [Python-Dev] Alternative to os.system that takes a list of strings? Message-ID: <200102051430.JAA17890@w20-575-36.mit.edu> Hi. I've found it convenient to use the function below to make system calls, as I sometimes the strings I need to pass as arguments confuse the shell used in os.system. I was wondering whether it's worth passing this upstream. The main problem with doing so is that I have no idea how to implement it on Windows, as I can't use the os.fork and os.wait* functions in that context. Alex. import os def system(command, args, environ=os.environ): '''The 'args' variable is a sequence of strings that are to be passed as the arguments to the command 'command'.''' # Fork off a process to be replaced by the command to be executed # when 'execve' is run. pid = os.fork() if pid == 0: # This is the child process; replace it. os.execvpe(command, [command,] + args, environ) # In the parent process; wait for the child process to finish. return_pid, return_value = os.waitpid(pid, 0) assert return_pid == pid return return_value if __name__ == '__main__': print system('/bin/cat', ['/etc/hosts.allow', '/etc/passwd']) From guido at digicool.com Mon Feb 5 15:34:51 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 05 Feb 2001 09:34:51 -0500 Subject: [Python-Dev] Alternative to os.system that takes a list of strings? In-Reply-To: Your message of "Mon, 05 Feb 2001 09:30:33 EST." <200102051430.JAA17890@w20-575-36.mit.edu> References: <200102051430.JAA17890@w20-575-36.mit.edu> Message-ID: <200102051434.JAA31491@cj20424-a.reston1.va.home.com> > Hi. I've found it convenient to use the function below to make system > calls, as I sometimes the strings I need to pass as arguments confuse > the shell used in os.system. I was wondering whether it's worth passing > this upstream. The main problem with doing so is that I have no idea > how to implement it on Windows, as I can't use the os.fork and os.wait* > functions in that context. > > Alex. Hi Alex, This functionality is alrady available through the os.spawn*() family of functions. This is supported on Unix and Windows. BTW, what do you mean by "upstream"? --Guido van Rossum (home page: http://www.python.org/~guido/) > import os > > def system(command, args, environ=os.environ): > > '''The 'args' variable is a sequence of strings that are to be > passed as the arguments to the command 'command'.''' > > # Fork off a process to be replaced by the command to be executed > # when 'execve' is run. > pid = os.fork() > if pid == 0: > > # This is the child process; replace it. > os.execvpe(command, [command,] + args, environ) > > # In the parent process; wait for the child process to finish. > return_pid, return_value = os.waitpid(pid, 0) > assert return_pid == pid > return return_value > > if __name__ == '__main__': > > print system('/bin/cat', ['/etc/hosts.allow', '/etc/passwd']) From fredrik at pythonware.com Mon Feb 5 15:42:51 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Mon, 5 Feb 2001 15:42:51 +0100 Subject: [Python-Dev] Alternative to os.system that takes a list of strings? 
References: <200102051430.JAA17890@w20-575-36.mit.edu> <200102051434.JAA31491@cj20424-a.reston1.va.home.com> Message-ID: <01d001c08f81$ec4d83b0$0900a8c0@SPIFF> guido wrote: > BTW, what do you mean by "upstream"? looks like freebsd lingo: the original maintainer of a piece of software (outside the bsd universe). Cheers /F From mwh21 at cam.ac.uk Mon Feb 5 15:54:30 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 05 Feb 2001 14:54:30 +0000 Subject: [Python-Dev] Re: "backward compatibility" defined In-Reply-To: pf@artcom-gmbh.de's message of "Mon, 5 Feb 2001 11:30:20 +0100 (MET)" References: 
                              
                              Message-ID: 
                              
                              pf at artcom-gmbh.de (Peter Funk) writes: > Hi, > > Tim Peters wrote: > > This is contentious every time it comes up because of "backward > > compatibility". The contentious part is that no two people come into it > > with the same idea of what "backward compatible" means, exactly, and it > > usually drags on for days until people realize that. In the meantime, > > everyone thinks everyone else is an idiot 
                              
                              . > > Thinking as a commercial software vendor: "Backward compatibility" > means to me, that I can choose a stable version of Python (say 1.5.2, > since this is what comes with the Linux Distros SuSE 6.2, 6.3, 6.4 > and 7.0 or RedHat 6.2, 7.0 is still in use on 98% of our customer > machines), generate .pyc-Files with this and than future stable > versions of Python will be able to import and run these files, if I > payed proper attention to possible incompatibilities like for > example '[].append((one, two))'. Really? This isn't the case today, is it? The demise of UNPACK_LIST/UNPACK_TUPLE springs to mind. Changes in IMPORT_* opcodes/code-generation probably bite too. I can certainly remember occasions in the past few months where I'be updated from CVS, rebuilt and forgotten to blow the .pyc files away and got core dumps as a result. > Otherwise the vendor company has to fall back to one of the following > "solutions": > 1. provide a bunch of different versions of bytecode-Archives for each > version of Python (a nightmare). Oh, hardly. I can see that making sure that people get the right versions might be a drag, but not a severe one. You could always distribute *all* the relavent bytecodes - they're not that big. > or 2. has to distribute the Python sources of its application (which is > impossible due to the companies policy) decompyle? This isn't going to protect you against anyone with a modicum of determination. > or 3. has to distribute an own version of Python (which is a similar > nightmare due to incompatible shared library versions (Tcl/Tk > 8.0.5, 8.1, ... 8.3) and the risk to break other Python and > Tcl/Tk apps installed by the Linux Distro). I don't believe this can be unsurmountable. Build a static executable. > So in the closed-source-world bytecode compatibility is a major issue. Well, they seem to cope without it at the moment... Cheers, M. -- The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offence. -- Edsger W. Dijkstra, SIGPLAN Notices, Volume 17, Number 5 From alex_c at MIT.EDU Mon Feb 5 15:57:03 2001 From: alex_c at MIT.EDU (Alex Coventry) Date: Mon, 5 Feb 2001 09:57:03 -0500 Subject: [Python-Dev] Alternative to os.system that takes a list of strings? In-Reply-To: <200102051434.JAA31491@cj20424-a.reston1.va.home.com> (message from Guido van Rossum on Mon, 05 Feb 2001 09:34:51 -0500) References: <200102051430.JAA17890@w20-575-36.mit.edu> <200102051434.JAA31491@cj20424-a.reston1.va.home.com> Message-ID: <200102051457.JAA17949@w20-575-36.mit.edu> > This functionality is alrady available through the os.spawn*() family > of functions. This is supported on Unix and Windows. Hi, Guido. The only problem with os.spawn* is that it forks off a new process, and I don't know how to wait for the new process to finish. > BTW, what do you mean by "upstream"? I thought it might be a useful thing to include in the python distribution. Alex. From guido at digicool.com Mon Feb 5 15:55:51 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 05 Feb 2001 09:55:51 -0500 Subject: [Python-Dev] re: Sets BOF / for in dict In-Reply-To: Your message of "Mon, 05 Feb 2001 12:08:41 +0100." 
<3A7E89B9.B90D36DF@lemburg.com> References: <000301c08eb5$876baf20$770a0a0a@nevex.com> <3A7E89B9.B90D36DF@lemburg.com> Message-ID: <200102051455.JAA31737@cj20424-a.reston1.va.home.com> > Greg Wilson wrote: > > > > I've spoken with Barbara Fuller (IPC9 org.); the two openings for a > > BOF on sets are breakfast or lunch on Wednesday the 7th. I'd prefer > > breakfast (less chance of me missing my flight :-); is there anyone > > who's interested in attending who *can't* make that time, but *could* > > make lunch? [MAL] > Depends on the time frame of "breakfast" ;-) Does this mean you'll be at the conference? That would be excellent! > Two things: > > 1. the proposed syntax key:value does away with the > easy to parse Python block statement syntax > > 2. why can't we use the old 'for x,y,z in something:' syntax > and instead add iterators to the objects in question ? > > for key, value in object.iterator(): > ... > > this doesn't only look better, it also allows having different > iterators for different tasks (e.g. to iterate over values, key, > items, row in a matrix, etc.) This should become the PEP. I propose that we try to keep this discussion off python-dev, and that the PEP author(s?) set up a separate discussion list (e.g. at egroups) to keep the PEP feedback coming. I promise I'll subscribe to such a list. --Guido van Rossum (home page: http://www.python.org/~guido/) From esr at thyrsus.com Mon Feb 5 16:01:28 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Mon, 5 Feb 2001 10:01:28 -0500 Subject: [Python-Dev] Alternative to os.system that takes a list of strings? In-Reply-To: <01d001c08f81$ec4d83b0$0900a8c0@SPIFF>; from fredrik@pythonware.com on Mon, Feb 05, 2001 at 03:42:51PM +0100 References: <200102051430.JAA17890@w20-575-36.mit.edu> <200102051434.JAA31491@cj20424-a.reston1.va.home.com> <01d001c08f81$ec4d83b0$0900a8c0@SPIFF> Message-ID: <20010205100128.A23746@thyrsus.com> Fredrik Lundh 
                              
                              : > guido wrote: > > BTW, what do you mean by "upstream"? > > looks like freebsd lingo: the original maintainer of a > piece of software (outside the bsd universe). Debian lingo, too. Hmm...maybe this needs to go into the Jargon File. Yes, it does. I just added: @hd{upstream} @g{adj.} @p{} [common] Towards the original author(s) or maintainer(s) of a project. Used in connection with software that is distributed both in its original source form and in derived, adapted versions through a distribution like Debian Linux or one of the BSD ports that has component maintainers for each of their parts. When a component maintainer receives a bug report or patch, he may choose to retain the patch as a porting tweak to the distribution's derivative of the project, or to pass it upstream to the project's maintainer. The antonym @d{downstream} is rare. @comment ESR (seen on the Debian and Python lists) -- 
                              Eric S. Raymond You [should] not examine legislation in the light of the benefits it will convey if properly administered, but in the light of the wrongs it would do and the harm it would cause if improperly administered -- Lyndon Johnson, former President of the U.S. From nas at arctrix.com Mon Feb 5 16:02:22 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Mon, 5 Feb 2001 07:02:22 -0800 Subject: [Python-Dev] Type/class differences (Re: Sets: elt in dict, lst.include) In-Reply-To: <200102050447.XAA28915@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Sun, Feb 04, 2001 at 11:47:26PM -0500 References: <200102012245.LAA03402@s454.cosc.canterbury.ac.nz> <200102050447.XAA28915@cj20424-a.reston1.va.home.com> Message-ID: <20010205070222.A5287@glacier.fnational.com> On Sun, Feb 04, 2001 at 11:47:26PM -0500, Guido van Rossum wrote: > Yes, I've often thought that we should be able to heal the split for > 95% by using a few well-aimed tricks like this. Later... I was playing around this weekend with the class/type problem. Without too much effort I had an interpreter that could to things like this: >>> class MyInt(type(1)): ... pass ... >>> i = MyInt(10) >>> i 10 >>> i + 1 11 The major changes were allowing PyClassObject to subclass types (ie. changing PyClass_Check(op) to (PyClass_Check(op) || PyType_Check(op))), writing a _PyType_Lookup function, and making class_lookup use it. The experiment has convinced me that we can allow subclasses of types quite easily without major changes. It has also given me some ideas on "the right way" to solve this problem. The rough scheme I can up yesterday goes like this: PyObject { int ob_refcnt; PyClass ob_class; } PyClass { PyObject_HEAD char *cl_name; getattrfunc cl_getattr; PyMethodTable *cl_methods; } PyMethodTable { binaryfunc nb_add; binaryfunc nb_sub; ... } When calling a method on a object the interpreter would first check for a direct method and if that does not exist then call cl_getattr. Obviously there are still a few details to be worked out. :-) Neil From guido at digicool.com Mon Feb 5 16:04:07 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 05 Feb 2001 10:04:07 -0500 Subject: "backward compatibility" defined (was Re: [Python-Dev] Identifying magic prefix on Python files?) In-Reply-To: Your message of "Mon, 05 Feb 2001 11:30:20 +0100." 
                              
                              References: 
                              
                              Message-ID: <200102051504.KAA31805@cj20424-a.reston1.va.home.com> > Thinking as a commercial software vendor: "Backward compatibility" > means to me, that I can choose a stable version of Python (say 1.5.2, > since this is what comes with the Linux Distros SuSE 6.2, 6.3, 6.4 > and 7.0 or RedHat 6.2, 7.0 is still in use on 98% of our customer > machines), generate .pyc-Files with this and than future stable > versions of Python will be able to import and run these files, if I > payed proper attention to possible incompatibilities like for > example '[].append((one, two))'. Alas, for technical reasons, bytecode generated by different Python versions is *not* binary compatible. > Otherwise the vendor company has to fall back to one of the following > "solutions": > 1. provide a bunch of different versions of bytecode-Archives for each > version of Python (a nightmare). > or 2. has to distribute the Python sources of its application (which is > impossible due to the companies policy) Remember that Python is an Open Source language. I assume that you are talking about your company. So I understand that this company doesn't underwrite the Open Source principles. That's fine, and I am all for different business models. But as your company is not paying for Python, and apparently not willing to sharing its own source code, I don't feel responsible to fix this inconvenience for them. Now, if you were to contribute a backwards compatibility patch that allowed e.g. importing bytecode generated by Python 1.5.2 into Python 2.1, I would gladly incorporate that! My priorities are often affected by what people are willing to contribute... --Guido van Rossum (home page: http://www.python.org/~guido/) From nas at arctrix.com Mon Feb 5 16:28:18 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Mon, 5 Feb 2001 07:28:18 -0800 Subject: [Python-Dev] insertdict slower? In-Reply-To: <3A7E84D3.4D111F0F@lemburg.com>; from mal@lemburg.com on Mon, Feb 05, 2001 at 11:47:47AM +0100 References: 
                              
                              <3A7E84D3.4D111F0F@lemburg.com> Message-ID: <20010205072818.B5287@glacier.fnational.com> On Mon, Feb 05, 2001 at 11:47:47AM +0100, M.-A. Lemburg wrote: > Yes, I ran the tests on an AMK K6 233. Our model is a bit older. Neil -- import binascii; print binascii.unhexlify('4a' '75737420616e6f7468657220507974686f6e20626f74') From alex_c at MIT.EDU Mon Feb 5 16:36:29 2001 From: alex_c at MIT.EDU (Alex Coventry) Date: Mon, 5 Feb 2001 10:36:29 -0500 Subject: [Python-Dev] Alternative to os.system that takes a list of strings? Message-ID: <200102051536.KAA18060@w20-575-36.mit.edu> > This functionality is alrady available through the os.spawn*() family > of functions. This is supported on Unix and Windows. Oh, I see, I can use the P_WAIT option. Sorry to bother you all, then. Alex. From gvwilson at ca.baltimore.com Mon Feb 5 16:42:50 2001 From: gvwilson at ca.baltimore.com (Greg Wilson) Date: Mon, 5 Feb 2001 10:42:50 -0500 Subject: [Python-Dev] re: BOFs / sets / iteration Message-ID: <000001c08f8a$4c715b10$770a0a0a@nevex.com> Hi, folks. Given feedback so far, I'd like to hold the BOF on sets at lunch on Wednesday; I'll ask Barbara Fuller to arrange a room, and send out notice. I'd also like to know if there's enough interest in iterators to arrange a BOF for Tuesday lunch (the only other slot that's available right now). Please let me know; if I get more than half a dozen responses, I'll ask Barbara to set that up as well. Thanks Greg From akuchlin at cnri.reston.va.us Mon Feb 5 16:48:04 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Mon, 5 Feb 2001 10:48:04 -0500 Subject: [Python-Dev] insertdict slower? In-Reply-To: <20010205072818.B5287@glacier.fnational.com>; from nas@arctrix.com on Mon, Feb 05, 2001 at 07:28:18AM -0800 References: 
                              <3A7E84D3.4D111F0F@lemburg.com> <20010205072818.B5287@glacier.fnational.com> Message-ID: <20010205104804.D733@thrak.cnri.reston.va.us> On Mon, Feb 05, 2001 at 07:28:18AM -0800, Neil Schemenauer wrote: >On Mon, Feb 05, 2001 at 11:47:47AM +0100, M.-A. Lemburg wrote: >> Yes, I ran the tests on an AMK K6 233. Hey, give my computer back! --amk From guido at digicool.com Mon Feb 5 16:46:44 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 05 Feb 2001 10:46:44 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: Your message of "Sun, 04 Feb 2001 23:58:28 EST." 
References:
                              Message-ID: <200102051546.KAA32113@cj20424-a.reston1.va.home.com> > Don't know about Macs (although I believe the Metrowerks libc can be still > be *configured* to swap \r and \n there), but it caught a bug in Python in > the 2.0 release cycle (where Python was opening .pyc files in text mode by > mistake, but only on Windows). Well, actually, it didn't catch anything, it > caused import from .pyc to fail silently. Having *some* specific gross > thing fail every time is worth something. Sounds to me that we'd caught this sooner without the \r\n gimmic. :-) > But the \r\n thingie can be pushed into the extended header instead. Here's > an idea for "the new" magic number, assuming it must remain 4 bytes: > > byte 0: \217 will never change > byte 1: 'P' will never change > byte 2: high-order byte of version number > byte 3: low-order byte of version number > > "Version number" is an unsigned 16-bit int, starting at 0 and incremented by > 1 from time to time. 64K changes may even be enough to get us to Python > 3000 
. A separate text file should record the history of version
> number changes, associating each with the date, release and reason for
> change (the CVS log for import.c used to be good about recording the reason,
> but not anymore).
>
> Then we can keep a 4-byte magic number, Eric can have his invariant two-byte
> tag at the start, and it's still possible to compare "version numbers"
> easily for more than just equality (read the magic number as a "network
> standard" unsigned int, and it's a total ordering with earlier versions
> comparing less than later ones).  The other nifty PNG sanity-checking tricks
> can also move into the extended header.

+1 from me.  I'm +0 on adding more magic to the marshalled code.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at digicool.com  Mon Feb 5 16:55:39 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 05 Feb 2001 10:55:39 -0500
Subject: [Python-Dev] Alternative to os.system that takes a list of strings?
In-Reply-To: Your message of "Mon, 05 Feb 2001 09:57:03 EST." <200102051457.JAA17949@w20-575-36.mit.edu>
References: <200102051430.JAA17890@w20-575-36.mit.edu> <200102051434.JAA31491@cj20424-a.reston1.va.home.com> <200102051457.JAA17949@w20-575-36.mit.edu>
Message-ID: <200102051555.KAA32193@cj20424-a.reston1.va.home.com>

> > This functionality is already available through the os.spawn*() family
> > of functions.  This is supported on Unix and Windows.
>
> Hi, Guido.  The only problem with os.spawn* is that it forks off a new
> process, and I don't know how to wait for the new process to finish.

Use os.P_WAIT for the mode argument.

> > BTW, what do you mean by "upstream"?
>
> I thought it might be a useful thing to include in the python
> distribution.

Which is hardly "upstream" from python-dev -- this is where it's
decided! :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)
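A minimal sketch of the call being described (assumes a Unix-like system
and Python 2.0 or later; the program path and arguments are made-up
examples):

    import os

    # Run a child process and wait for it to finish.  With os.P_WAIT the
    # call blocks until the child exits and returns its exit status;
    # with os.P_NOWAIT it would return the child's pid immediately.
    status = os.spawnv(os.P_WAIT, "/bin/ls", ["ls", "-l", "/tmp"])
    print "child exited with status", status
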
From esr at thyrsus.com  Mon Feb 5 17:10:33 2001
From: esr at thyrsus.com (Eric S. Raymond)
Date: Mon, 5 Feb 2001 11:10:33 -0500
Subject: [Python-Dev] Identifying magic prefix on Python files?
In-Reply-To: <200102051546.KAA32113@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Feb 05, 2001 at 10:46:44AM -0500
References:
                              <200102051546.KAA32113@cj20424-a.reston1.va.home.com> Message-ID: <20010205111033.A24383@thyrsus.com> Guido van Rossum 
                              : > > But the \r\n thingie can be pushed into the extended header > > instead. Here's an idea for "the new" magic number, assuming it > > must remain 4 bytes: > > > > byte 0: \217 will never change > > byte 1: 'P' will never change > > byte 2: high-order byte of version number > > byte 3: low-order byte of version number > > > > "Version number" is an unsigned 16-bit int, starting at 0 and > > incremented by 1 from time to time. 64K changes may even be > > enough to get us to Python 3000 
. A separate text file
> > should record the history of version number changes, associating
> > each with the date, release and reason for change (the CVS log for
> > import.c used to be good about recording the reason, but not
> > anymore).
> >
> > Then we can keep a 4-byte magic number, Eric can have his
> > invariant two-byte tag at the start, and it's still possible to
> > compare "version numbers" easily for more than just equality (read
> > the magic number as a "network standard" unsigned int, and it's a
> > total ordering with earlier versions comparing less than later
> > ones).  The other nifty PNG sanity-checking tricks can also move
> > into the extended header.
>
> +1 from me.  I'm +0 on adding more magic to the marshalled code.

Likewise from me -- that is, +1 on Tim's proposed format and +0 on stuff
like hashes and embedded source pathnames and stuff.  As Tim observed
earlier, I just want to see some progress made; I'm not picky about the
low-level details on this one, though I'll be happy with the invariant
tag and the PNG-style sanity check.

--
Eric S. Raymond

"Extremism in the defense of liberty is no vice; moderation in the
pursuit of justice is no virtue."
	-- Barry Goldwater (actually written by Karl Hess)
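To make the proposed layout concrete, a small illustrative sketch using
the struct module -- this is not code from import.c, and the version
values below are arbitrary examples:

    import struct

    MAGIC_PREFIX = "\217P"   # byte 0 (\217) and byte 1 ('P') never change

    def make_magic(version):
        # append the 16-bit version number in "network standard"
        # (big-endian) order
        return MAGIC_PREFIX + struct.pack(">H", version)

    def read_version(magic):
        if magic[:2] != MAGIC_PREFIX:
            raise ValueError, "not a bytecode file in the proposed format"
        return struct.unpack(">H", magic[2:])[0]

    # read as a big-endian unsigned int (or compared as a string), an
    # older magic number always sorts before a newer one
    assert make_magic(3) < make_magic(4)
    assert read_version(make_magic(3)) == 3
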
                              Eric S. Raymond "Extremism in the defense of liberty is no vice; moderation in the pursuit of justice is no virtue." -- Barry Goldwater (actually written by Karl Hess) From mal at lemburg.com Mon Feb 5 17:58:21 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Mon, 05 Feb 2001 17:58:21 +0100 Subject: [Python-Dev] insertdict slower? References: 
                              <3A7E84D3.4D111F0F@lemburg.com> <20010205072818.B5287@glacier.fnational.com> <20010205104804.D733@thrak.cnri.reston.va.us> Message-ID: <3A7EDBAD.95BCA583@lemburg.com> Andrew Kuchling wrote: > > On Mon, Feb 05, 2001 at 07:28:18AM -0800, Neil Schemenauer wrote: > >On Mon, Feb 05, 2001 at 11:47:47AM +0100, M.-A. Lemburg wrote: > >> Yes, I ran the tests on an AMK K6 233. > > Hey, give my computer back! :-) -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From Jason.Tishler at dothill.com Mon Feb 5 18:27:21 2001 From: Jason.Tishler at dothill.com (Jason Tishler) Date: Mon, 5 Feb 2001 12:27:21 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: 
                              ; from tim.one@home.com on Sun, Feb 04, 2001 at 03:13:29AM -0500 References: <14972.10746.34425.26722@anthem.wooz.org> 
                              Message-ID: <20010205122721.J812@dothill.com> On Sun, Feb 04, 2001 at 03:13:29AM -0500, Tim Peters wrote: > [Barry A. Warsaw] > > So, let's tease out what the Right solution would be, and then > > see how close or if we can get there for 2.1. I've no clue what > > behavior Mac and Windows users would /like/ to see -- what would > > be most natural for them? On 2001-Jan-11 07:56, Jason Tishler wrote: > I have created a (hacky) patch, that solves this problem for both Cygwin and > Win32. I can redo it so that it only affects Cygwin and leaves the Win32 > functionality alone. I would like to upload it for discussion... Part of my motivation when submitting patch 103154, was to attempt to elicit the "right" solution. > I don't understand what Cygwin does; here from a Cygwin bash shell session: > > ... > > So best I can tell, they're like Steven: working with a case-insensitive > filesystem but trying to make Python insist that it's not, and what basic > tools there do about case is seemingly random (wc doesn't care, shell > expansion does, touch doesn't, rm doesn't (not shown) -- maybe it's just > shell expansion that's trying to pretend this is Unix? Sorry, but I don't agree with your assessment that Cygwin's treatment of case is "seemingly random." IMO, Cygwin behaves appropriately regarding case for a case-insensitive, but case-preserving file system. The only "inconsistency" that you found is just one of bash's idiosyncrasies -- how it handles glob-ing. Note that one can use "shopt -s nocaseglob" to get case-insensitive glob-ing with bash on Cygwin *and* UNIX. > So I view the current rules as inexplicable: they're neither > platform-independent nor consistent with the platform's natural behavior > (unless that platform has case-sensitive filesystem semantics). Agreed. > Bottom line: for the purpose of import-from-file (and except for > case-destroying filesystems, where PYTHONCASEOK is the only hope), we *can* > make case-insensitive case-preserving filesystems "act like" they were > case-sensitive with modest effort. I feel that the above behavior would be best for Cygwin Python. I hope that Steven's patch (i.e., 103495) or a modified version of it remains as part of Python CVS. > We can't do the reverse. That would > lead to explainable rules and maximal portability. Sorry but I don't grok the above. Tim, can you try again? BTW, importing of builtin modules is case-sensitive even on platforms such as Windows. Wouldn't it be more consistent if all imports regardless of type were case-sensitive? Thanks, Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler at dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From akuchlin at mems-exchange.org Mon Feb 5 18:32:31 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Mon, 05 Feb 2001 12:32:31 -0500 Subject: [Python-Dev] PEP announcements, and summaries Message-ID: 
                              
                              One thing about the reaction to the 2.1 alphas is that many people seem *surprised* by some of the changes, even though PEPs have been written, discussed, and mentioned in python-dev summaries. Maybe the PEPs and their status need to be given higher visibility; I'd suggest sending a brief note of status changes (new draft PEPs, acceptance, rejection) to comp.lang.python.announce. Also, I'm wondering if it's worth continuing the python-dev summaries, because, while they get a bunch of hits on news sites such as Linux Today and may be good PR, I'm not sure that they actually help Python development. They're supposed to let people offer timely comments on python-dev discussions while it's still early enough to do some good, but that doesn't seem to happen; I don't see python-dev postings that began with something like "The last summary mentioned you were talking about X. I use X a lot, and here's what I think: ...". Is anything much lost if the summaries cease? --amk From esr at thyrsus.com Mon Feb 5 18:56:59 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Mon, 5 Feb 2001 12:56:59 -0500 Subject: [Python-Dev] PEP announcements, and summaries In-Reply-To: 
                              ; from akuchlin@mems-exchange.org on Mon, Feb 05, 2001 at 12:32:31PM -0500 References: 
                              Message-ID: <20010205125659.B25297@thyrsus.com> Andrew Kuchling 
                              : > Is anything much lost if the summaries cease? I think not, but others may differ. -- 
                              Eric S. Raymond Conservatism is the blind and fear-filled worship of dead radicals. From fredrik at effbot.org Mon Feb 5 19:10:15 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Mon, 5 Feb 2001 19:10:15 +0100 Subject: [Python-Dev] Case sensitive import. References: <14972.10746.34425.26722@anthem.wooz.org> 
<20010205122721.J812@dothill.com>
Message-ID: <028701c08f9e$e65886e0$e46940d5@hagrid>

Jason wrote:
> BTW, importing of builtin modules is case-sensitive even on platforms
> such as Windows.  Wouldn't it be more consistent if all imports
> regardless of type were case-sensitive?

umm.  what kind of imports are not case-sensitive today?

>>> import strOP # builtin
Traceback (innermost last):
  File "<stdin>", line 1, in ?
ImportError: No module named strOP

>>> import stringIO # python
Traceback (innermost last):
  File "<stdin>", line 1, in ?
NameError: Case mismatch for module name stringIO (filename C:\py152\lib\StringIO.py)

>>> import _Tkinter # binary extension
Traceback (innermost last):
  File "<stdin>", line 1, in ?
NameError: Case mismatch for module name _Tkinter (filename C:\py152\_tkinter.pyd)

Cheers /F

From pedroni at inf.ethz.ch  Mon Feb 5 19:20:33 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Mon, 5 Feb 2001 19:20:33 +0100 (MET)
Subject: [Python-Dev] PEP announcements, and summaries
Message-ID: <200102051820.TAA20238@core.inf.ethz.ch>

Hi.

> One thing about the reaction to the 2.1 alphas is that many people
> seem *surprised* by some of the changes, even though PEPs have been
> written, discussed, and mentioned in python-dev summaries.  Maybe the
> PEPs and their status need to be given higher visibility; I'd suggest
> sending a brief note of status changes (new draft PEPs, acceptance,
> rejection) to comp.lang.python.announce.
>
> Also, I'm wondering if it's worth continuing the python-dev summaries,
> because, while they get a bunch of hits on news sites such as Linux
> Today and may be good PR, I'm not sure that they actually help Python
> development.  They're supposed to let people offer timely comments on
> python-dev discussions while it's still early enough to do some good,
> but that doesn't seem to happen; I don't see python-dev postings that
> began with something like "The last summary mentioned you were talking
> about X.  I use X a lot, and here's what I think: ...".  Is anything
> much lost if the summaries cease?

Before joining python-dev, I always read the summaries very carefully
and found them useful and informative; on the other hand, my situation
of being a Jython developer was a bit special.

Some opinions from a somewhat external viewpoint:

- More emphasis on the PEPs and their status changes could help.

- People should be able to trust PEP contents; they should really
  describe what is going to happen.  Two examples:

  - What was described in the weak-ref PEP was changed just before
    releasing the alpha that contained weak-ref support, because it was
    discovered that the proposal could not be implemented in Jython.

  - Nested scope PEP: the PEP indicated flat closures as the most
    likely implementation, and that's what is in a2.  from _ import *
    was not indicated as a big issue.  Now that seems to be such an
    issue, and maybe chained closures are needed, or some other
    gymnastics with a performance impact.

Now decisions and changes have to be made under time constraints, and it
is not clear what the outcome will be, or whether it will have the
required long-term quality.

regards, Samuele Pedroni.

From mal at lemburg.com  Mon Feb 5 19:32:00 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 05 Feb 2001 19:32:00 +0100
Subject: [Python-Dev] PEP announcements, and summaries
References:
                              Message-ID: <3A7EF1A0.EDA4AD24@lemburg.com> Andrew Kuchling wrote: > > One thing about the reaction to the 2.1 alphas is that many people > seem *surprised* by some of the changes, even though PEPs have been > written, discussed, and mentioned in python-dev summaries. Maybe the > PEPs and their status need to be given higher visibility; I'd suggest > sending a brief note of status changes (new draft PEPs, acceptance, > rejection) to comp.lang.python.announce. > > Also, I'm wondering if it's worth continuing the python-dev summaries, > because, while they get a bunch of hits on news sites such as Linux > Today and may be good PR, I'm not sure that they actually help Python > development. They're supposed to let people offer timely comments on > python-dev discussions while it's still early enough to do some good, > but that doesn't seem to happen; I don't see python-dev postings that > began with something like "The last summary mentioned you were talking > about X. I use X a lot, and here's what I think: ...". Is anything > much lost if the summaries cease? I think that the Python community would lose some touch with the Python development process and there are currently no other clearly visible resources which a Python user can link to unless he or she happens to know of the existence of python-dev. Some things which could be done to improve this: * add a link to the python-dev archive directly from www.python.org * summarize the development process somewhere on python.org and add a link "development" to the page titles * fix the "community" link to point to a page which provides links to all the community tools available for Python on the web, e.g. Starship, Parnassus, SF-tools, FAQTS, etc. * add a section "devtools" which points programmers to existing Python programming tools such as IDLE, PythonWare, Wing IDE, BlackAdder, etc. And while I'm at it :) * add a section "applications" to produce some more awareness that Python is being used in real life applications * some kind of self-maintained projects page would also be a nice thing to have, e.g. a Wiki-style reference to projects seeking volunteers to help; this could also be referenced in the community section -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From esr at thyrsus.com Mon Feb 5 19:42:30 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Mon, 5 Feb 2001 13:42:30 -0500 Subject: [Python-Dev] Heads up on library reorganization Message-ID: <20010205134230.A25426@thyrsus.com> At LWE, Guido and I brainstormed a thorough reorganization of the Python library together. There will be a PEP coming out of this; actually two PEPs. One will reorganize the library namespace and set up procedures for forward migration and future changes. Another (not yet begun) will describe policy criteria for what goes into the library in the future. The draft on reorganization is still kind of raw, but I'll share it with anyone that has a particular interest in this area. We have a new library-hierarchy map already, but I'm deliberately not posting that publicly yet in order to avoid starting a huge debate about the details before Guido and I actually have a well-worked-out proposal to present. Guido, of course, is still up to his ears in post-LWE mail and work cleanup. Barry, this is why I have not submitted the ternary-select PEP yet. 
The library reorg is more important and should get done first. -- 
                              Eric S. Raymond Everything you know is wrong. But some of it is a useful first approximation. From guido at digicool.com Mon Feb 5 19:37:39 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 05 Feb 2001 13:37:39 -0500 Subject: [Python-Dev] Type/class differences (Re: Sets: elt in dict, lst.include) In-Reply-To: Your message of "Mon, 05 Feb 2001 07:02:22 PST." <20010205070222.A5287@glacier.fnational.com> References: <200102012245.LAA03402@s454.cosc.canterbury.ac.nz> <200102050447.XAA28915@cj20424-a.reston1.va.home.com> <20010205070222.A5287@glacier.fnational.com> Message-ID: <200102051837.NAA00833@cj20424-a.reston1.va.home.com> > On Sun, Feb 04, 2001 at 11:47:26PM -0500, Guido van Rossum wrote: > > Yes, I've often thought that we should be able to heal the split for > > 95% by using a few well-aimed tricks like this. Later... > > I was playing around this weekend with the class/type problem. > Without too much effort I had an interpreter that could to things > like this: > > >>> class MyInt(type(1)): > ... pass > ... > >>> i = MyInt(10) > >>> i > 10 > >>> i + 1 > 11 Now, can you do things like this: >>> from types import * >>> class MyInt(IntType): # add a method def add1(self): return self+1 >>> i = MyInt(10) >>> i.add1() 11 >>> and like this: >>> class MyInt(IntType): # override division def __div__(self, other): return float(self) / other def __rdiv__(self, other): return other / float(self) >>> i = MyInt(10) >>> i/3 0.33333333333333331 >>> I'm not asking for adding new instance variables (slots), but that of course would be the next step of difficulty up. > The major changes were allowing PyClassObject to subclass types > (ie. changing PyClass_Check(op) to (PyClass_Check(op) || > PyType_Check(op))), writing a _PyType_Lookup function, and making > class_lookup use it. Yeah, but that's still nasty. We should strive for unifying PyClass and PyType instead of having both around. > The experiment has convinced me that we can allow subclasses of > types quite easily without major changes. It has also given me > some ideas on "the right way" to solve this problem. The rough > scheme I can up yesterday goes like this: > p> PyObject { > int ob_refcnt; > PyClass ob_class; (plus type-specific fields I suppose) > } > > PyClass { > PyObject_HEAD > char *cl_name; > getattrfunc cl_getattr; > PyMethodTable *cl_methods; > } > > PyMethodTable { > binaryfunc nb_add; > binaryfunc nb_sub; > ... > } > > When calling a method on a object the interpreter would first > check for a direct method and if that does not exist then call > cl_getattr. Obviously there are still a few details to be worked > out. :-) Yeah... Like you should be able to ask for ListType.append and get an unbound built-in method back, which can be applied to a list: ListType.append([], 1) === [].append(1) And ditto for operators: IntType.__add__(1, 2) === 1+2 And a C API like PyNumber_Add(x, y) should be equivalent to using x.__add__(y), too. --Guido van Rossum (home page: http://www.python.org/~guido/) From mal at lemburg.com Mon Feb 5 19:45:10 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Mon, 05 Feb 2001 19:45:10 +0100 Subject: [Python-Dev] re: BOFs / sets / iteration References: <000001c08f8a$4c715b10$770a0a0a@nevex.com> Message-ID: <3A7EF4B6.9BBD45EC@lemburg.com> Greg Wilson wrote: > > Hi, folks. Given feedback so far, I'd like to hold the > BOF on sets at lunch on Wednesday; I'll ask Barbara Fuller > to arrange a room, and send out notice. Great. 
> I'd also like to know if there's enough interest in iterators > to arrange a BOF for Tuesday lunch (the only other slot that's > available right now). Please let me know; if I get more than > half a dozen responses, I'll ask Barbara to set that up as well. That's one from me :) -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From nas at arctrix.com Mon Feb 5 20:04:22 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Mon, 5 Feb 2001 11:04:22 -0800 Subject: [Python-Dev] Type/class differences (Re: Sets: elt in dict, lst.include) In-Reply-To: <200102051837.NAA00833@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Feb 05, 2001 at 01:37:39PM -0500 References: <200102012245.LAA03402@s454.cosc.canterbury.ac.nz> <200102050447.XAA28915@cj20424-a.reston1.va.home.com> <20010205070222.A5287@glacier.fnational.com> <200102051837.NAA00833@cj20424-a.reston1.va.home.com> Message-ID: <20010205110422.A5893@glacier.fnational.com> On Mon, Feb 05, 2001 at 01:37:39PM -0500, Guido van Rossum wrote: > Now, can you do things like this: [example cut] No, it would have to be written like this: >>> from types import * >>> class MyInt(IntType): # add a method def add1(self): return self.value+1 >>> i = MyInt(10) >>> i.add1() 11 >>> Note the value attribute. The IntType.__init__ method is basicly: def __init__(self, value): self.value = value > > PyObject { > > int ob_refcnt; > > PyClass ob_class; > > (plus type-specific fields I suppose) Yes, the instance attributes. In this scheme all objects are instances of some class. > Yeah... Like you should be able to ask for ListType.append and get an > unbound built-in method back, which can be applied to a list: > > ListType.append([], 1) === [].append(1) Right. My changes on the weekend where quite close to making this work. Neil From ping at lfw.org Mon Feb 5 20:04:16 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Mon, 5 Feb 2001 11:04:16 -0800 (PST) Subject: [Python-Dev] re: Sets BOF / for in dict In-Reply-To: <000301c08eb5$876baf20$770a0a0a@nevex.com> Message-ID: 
                              
                              On Sun, 4 Feb 2001, Greg Wilson wrote: > Question: would the current proposal allow NumPy arrays (just as an > example) to support both: > > for index:value in numPyArray: > > where 'index' would get tuples like '(0, 3, 2)' for a 3D array, *and* > > for (i, j, k):value in numPyArray: Naturally. Anything that could normally be bound on the left side of an assignment (or current for loop) could go in the spot on either side of the colon. -- ?!ng From akuchlin at cnri.reston.va.us Mon Feb 5 20:11:39 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Mon, 5 Feb 2001 14:11:39 -0500 Subject: [Python-Dev] PEP announcements, and summaries In-Reply-To: <3A7EF1A0.EDA4AD24@lemburg.com>; from mal@lemburg.com on Mon, Feb 05, 2001 at 07:32:00PM +0100 References: 
                              <3A7EF1A0.EDA4AD24@lemburg.com> Message-ID: <20010205141139.K733@thrak.cnri.reston.va.us> On Mon, Feb 05, 2001 at 07:32:00PM +0100, M.-A. Lemburg wrote: >Some things which could be done to improve this: >* add a link to the python-dev archive directly from www.python.org >* summarize the development process somewhere on python.org and > add a link "development" to the page titles We do need a set of "Hacker's Guide to Python Development" Web pages to collect that sort of thing; I have some small pieces of such a thing, written long ago and never released, but they'd need to be updated and finished off. And while I'm at it, too, I'd like to suggest that, since python-dev seems to be getting out of touch with the larger Python community, after 2.1final, rather than immediately leaping back into language hacking, we should work on bringing the public face of the community up to date: * Pry python.org out of CNRI's cold dead hands, and begin maintaining it again. * Start moving on the Catalog-SIG again (yes, I know this is my task) * Work on the Batteries Included proposals & required infrastructure * Try doing some PR for 2.1. --amk From ping at lfw.org Mon Feb 5 20:15:18 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Mon, 5 Feb 2001 11:15:18 -0800 (PST) Subject: [Python-Dev] re: Sets BOF / for in dict In-Reply-To: <3A7E89B9.B90D36DF@lemburg.com> Message-ID: 
                              
                              On Mon, 5 Feb 2001, M.-A. Lemburg wrote: > Two things: > > 1. the proposed syntax key:value does away with the > easy to parse Python block statement syntax Oh, come on. Slices and dictionary literals use colons too, and there's nothing wrong with that. Blocks are introduced by a colon at the *end* of a line. > 2. why can't we use the old 'for x,y,z in something:' syntax > and instead add iterators to the objects in question ? > > for key, value in object.iterator(): > ... Because there's no good answer for "what does iterator() return?" in this design. (Trust me; i did think this through carefully.) Try it. How would you implement the iterator() method? The PEP *is* suggesting that we add iterators to the objects -- just not that we explicitly call them. In the 'for' loop you've written, iterator() returns a sequence, not an iterator. -- ?!ng From gvwilson at ca.baltimore.com Mon Feb 5 20:22:50 2001 From: gvwilson at ca.baltimore.com (Greg Wilson) Date: Mon, 5 Feb 2001 14:22:50 -0500 Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: 
                              Message-ID: <002201c08fa9$079a1f80$770a0a0a@nevex.com> > > Question: would the current proposal allow NumPy arrays (just as an > > example) to support both: > > for index:value in numPyArray: > > where 'index' would get tuples like '(0, 3, 2)' for a 3D > > array, *and* > > > > for (i, j, k):value in numPyArray: > Ka-Ping Yee: > Naturally. Anything that could normally be bound on the left > side of an assignment (or current for loop) could go in the > spot on either side of the colon. OK, now here's the hard one. Clearly, (a) for i in someList: has to continue to mean "iterate over the values". We've agreed that: (b) for k:v in someDict: means "iterate through the items". (a) looks like a special case of (b). I therefore asked my colleagues to guess what: (c) for x in someDict: did. They all said, "Iterates through the _values_ in the dict", by analogy with (a). I then asked, "How do you iterate through the keys in a dict, or the indices in a list?" They guessed: (d) for x: in someContainer: (note the colon trailing the iterator variable name). I think that the combination of (a) and (b) implies (c), which leads in turn to (d). Is this what we want? I gotta say, when I start thinking about how many problems my students are going to bring me when accidentally adding or removing a colon in the middle of a 'for' statement changes the iteration space from keys to values, and I start feeling queasy... Thanks, Greg From ping at lfw.org Mon Feb 5 20:26:53 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Mon, 5 Feb 2001 11:26:53 -0800 (PST) Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: <002201c08fa9$079a1f80$770a0a0a@nevex.com> Message-ID: 
                              
                              On Mon, 5 Feb 2001, Greg Wilson wrote: > OK, now here's the hard one. Clearly, > > (a) for i in someList: > > has to continue to mean "iterate over the values". We've agreed > that: > > (b) for k:v in someDict: > > means "iterate through the items". (a) looks like a special case > of (b). I therefore asked my colleagues to guess what: > > (c) for x in someDict: > > did. They all said, "Iterates through the _values_ in the dict", > by analogy with (a). > > I then asked, "How do you iterate through the keys in a dict, or > the indices in a list?" They guessed: > > (d) for x: in someContainer: > > (note the colon trailing the iterator variable name). I think that > the combination of (a) and (b) implies (c), which leads in turn to > (d). Is this what we want? I gotta say, when I start thinking about > how many problems my students are going to bring me when accidentally > adding or removing a colon in the middle of a 'for' statement changes > the iteration space from keys to values, and I start feeling queasy... The PEP explicitly proposes that (c) be an error, because i anticipated and specifically wanted to avoid this ambiguity. Have you had a good look at it? I think your survey shows that the PEP made the right choices. That is, it supports the position that if 'for key:value' is supported, then 'for key:' and 'for :value' should be supported, but 'for x in dict:' should not. It also shows that 'for index:' should be supported on sequences, which the PEP suggests. -- ?!ng From tim.one at home.com Mon Feb 5 20:37:43 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 5 Feb 2001 14:37:43 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: <3A7E8B37.E855DF81@lemburg.com> Message-ID: 
                              
                              [M.-A. Lemburg] > Side note: the magic can also change due to command line options > being used, e.g. -U will bump the magic number by 1. Note that this (-U) is the only such case. Unless people are using private Python variants and adding their own cmdline switches that fiddle the magic number 
                              . From tim.one at home.com Mon Feb 5 20:37:46 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 5 Feb 2001 14:37:46 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: <200102051546.KAA32113@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              > > byte 0: \217 will never change > > byte 1: 'P' will never change > > byte 2: high-order byte of version number > > byte 3: low-order byte of version number [Guido] > +1 from me. I'm +0 on adding more magic to the marshalled code. Note that the suggested scheme cannot tolerate -U magically adding 1 to the magic number, without getting strained ("umm, OK, we'll bump it by 2 when we do it by hand, and then -U gets all the odd numbers"; "umm, OK, we'll use 'P' for regular Python and 'U' for Unicode Python"; etc). So I say the marshalled code at least needs to grow a flag field to handle -U and any future extensions. The "extended header" in the marshalled blob should also begin with a 4-byte field giving the length of the extended header. plan-for-change-ly y'rs - tim From guido at digicool.com Mon Feb 5 20:37:28 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 05 Feb 2001 14:37:28 -0500 Subject: [Python-Dev] PEP announcements, and summaries In-Reply-To: Your message of "Mon, 05 Feb 2001 14:11:39 EST." <20010205141139.K733@thrak.cnri.reston.va.us> References: 
                              <3A7EF1A0.EDA4AD24@lemburg.com> <20010205141139.K733@thrak.cnri.reston.va.us> Message-ID: <200102051937.OAA01402@cj20424-a.reston1.va.home.com> > On Mon, Feb 05, 2001 at 07:32:00PM +0100, M.-A. Lemburg wrote: > >Some things which could be done to improve this: > >* add a link to the python-dev archive directly from www.python.org > >* summarize the development process somewhere on python.org and > > add a link "development" to the page titles Andrew: > We do need a set of "Hacker's Guide to Python Development" Web pages > to collect that sort of thing; I have some small pieces of such a > thing, written long ago and never released, but they'd need to be > updated and finished off. > > And while I'm at it, too, I'd like to suggest that, since python-dev > seems to be getting out of touch with the larger Python community, > after 2.1final, rather than immediately leaping back into language > hacking, we should work on bringing the public face of the community > up to date: > > * Pry python.org out of CNRI's cold dead hands, and begin maintaining > it again. Agreed. I am getting together with some folks at Digital Creations this week to get started with a Zope-based python.org website (to be run at new.python.org for now). This will be run somewhat like zope.org, i.e. members can post their own contents in their home directory, and after review such items can be linked directly from the home page, or something like that. The software to be used is DC's brand new Content Management Framework (announced in a press conference last Thursday; I can't find anything on the web yet). (Hmm, I wonder if we could run this on starship.python.net instead? That machine probably has more spare cycles.) > * Start moving on the Catalog-SIG again (yes, I know this is my task) > > * Work on the Batteries Included proposals & required infrastructure > > * Try doing some PR for 2.1. Joya Subudhi of Foretec has been doing a lot of Python PR work -- she arranged about a dozen press interviews for me last week at LinuxWorld Expo. She can undoubtedly do a good job of pushing the 2.1 announcement into the world, once we've released it. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Mon Feb 5 20:43:45 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 05 Feb 2001 14:43:45 -0500 Subject: [Python-Dev] import Tkinter fails In-Reply-To: Your message of "Mon, 05 Feb 2001 14:35:51 EST." <20010205143551.M733@thrak.cnri.reston.va.us> References: <200102050012.TAA27410@cj20424-a.reston1.va.home.com> <20010205143551.M733@thrak.cnri.reston.va.us> Message-ID: <200102051943.OAA04941@cj20424-a.reston1.va.home.com> > On Sun, Feb 04, 2001 at 07:12:44PM -0500, Guido van Rossum wrote: > >On Unix, either when running from the build directory, or when running > >the installed binary, "import Tkinter" fails. It seems that > >Lib/lib-tk is (once again) dropped from the default path. Andrew replied (in private mail): > Is this the case with the current CVS tree (as of Feb. 5)? I can't > reproduce the problem and don't see why this would happen. Oops... I got rid of my old Modules/Setup, and tried again -- then it worked. I should have heeded the warnings about Setup.dist being newer than Setup! Sorry for the false alarm! --Guido van Rossum (home page: http://www.python.org/~guido/) From mal at lemburg.com Mon Feb 5 20:45:51 2001 From: mal at lemburg.com (M.-A. 
Lemburg) Date: Mon, 05 Feb 2001 20:45:51 +0100 Subject: [Python-Dev] re: Sets BOF / for in dict References: 
Message-ID: <3A7F02EF.9119F46C@lemburg.com>

Ka-Ping Yee wrote:
>
> On Mon, 5 Feb 2001, M.-A. Lemburg wrote:
> > Two things:
> >
> > 1. the proposed syntax key:value does away with the
> >    easy to parse Python block statement syntax
>
> Oh, come on.  Slices and dictionary literals use colons too,
> and there's nothing wrong with that.  Blocks are introduced
> by a colon at the *end* of a line.

Slices and dictionaries enclose the two parts in brackets -- this
places the colon into a visible context.  for ... in ... : does not
provide much of a context.

> > 2. why can't we use the old 'for x,y,z in something:' syntax
> >    and instead add iterators to the objects in question ?
> >
> >     for key, value in object.iterator():
> >         ...
>
> Because there's no good answer for "what does iterator() return?"
> in this design.  (Trust me; i did think this through carefully.)
> Try it.  How would you implement the iterator() method?

The .iterator() method would have to return an object which provides
an iterator API (at C level to get the best performance).

For dictionaries, this object could carry the needed state (current
position in the dictionary table) and use PyDict_Next() for the
internals.  Matrices would have to carry along more state (one integer
per dimension) and could access the internal matrix representation
directly using C functions.

This would give us: speed, flexibility and extensibility which the
syntax hacks cannot provide; e.g. how would you specify to iterate
backwards over a sequence using that notation or diagonal for a matrix ?

> The PEP *is* suggesting that we add iterators to the objects --
> just not that we explicitly call them.  In the 'for' loop you've
> written, iterator() returns a sequence, not an iterator.

No, it should return a forward iterator.

--
Marc-Andre Lemburg
______________________________________________________________________
Company:        http://www.egenix.com/
Consulting:     http://www.lemburg.com/
Python Pages:   http://www.lemburg.com/python/
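For concreteness, here is one way such an iterator() could be faked
today on top of the existing for-loop protocol (repeated __getitem__
calls until IndexError).  The class names are invented for illustration
and this is not what either side is actually proposing at the C level;
note that it still has to snapshot the keys up front:

    class DictItemIterator:
        # Hands out (key, value) pairs one at a time; the for loop keeps
        # calling __getitem__ with 0, 1, 2, ... until IndexError.
        def __init__(self, d):
            self.d = d
            self.keys = d.keys()       # snapshot of the keys
        def __getitem__(self, i):
            key = self.keys[i]         # raises IndexError when exhausted
            return key, self.d[key]

    class Mapping:
        def __init__(self, data):
            self.data = data
        def iterator(self):
            return DictItemIterator(self.data)

    m = Mapping({"a": 1, "b": 2})
    for key, value in m.iterator():
        print key, value

The disagreement is less about whether such an object can be built than
about whether the loop should reach it through an explicit method call,
as above, or through the proposed for key:value syntax with the iterator
protocol hidden at the C level.
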
From tim.one at home.com  Mon Feb 5 20:49:39 2001
From: tim.one at home.com (Tim Peters)
Date: Mon, 5 Feb 2001 14:49:39 -0500
Subject: [Python-Dev] Adding opt-in pymalloc + alpha3
In-Reply-To: <3A7E881E.64D64F08@lemburg.com>
Message-ID:
                              [MAL] > ... > Even though I don't think that adding opt-in code matters > much w/r to stability of the rest of the code, I still think > that we ought to insert a third alpha release to hammer a bit > more on weak refs and nested scopes. > > These two additions are major new features in Python 2.1 which > were added very late in the release cycle and haven't had much > testing in the field. > > Thoughts ? IMO, everyone who is *likely* to pick up an alpha release has already done so. It won't get significantly broader or deeper hammering until there's a beta. So I'm opposed to a third alpha unless a significant number of bugs are unearthed by the current alpha (which still has a couple weeks to go before the scheduled beta). if-you-won't-eat-two-hot-dogs-it-won't-help-if-i-offer-you- three
                              -ly y'rs - tim From mal at lemburg.com Mon Feb 5 20:50:26 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Mon, 05 Feb 2001 20:50:26 +0100 Subject: [Python-Dev] Identifying magic prefix on Python files? References: 
                              Message-ID: <3A7F0402.7134C6DF@lemburg.com> Tim Peters wrote: > > [M.-A. Lemburg] > > Side note: the magic can also change due to command line options > > being used, e.g. -U will bump the magic number by 1. > > Note that this (-U) is the only such case. Unless people are using private > Python variants and adding their own cmdline switches that fiddle the magic > number 
                              . I think that future optimizers or special combinations of the yet-to-be-designed Python compiler/VM toolkit will make some use of this feature too. It is currently the only way to prevent the interpreter from loading code which it potentially cannot execute. When redesigning the import magic, we should be careful to allow future combinations of compiler/VM to introduce new opcodes etc. so there will have to be some field for them to use too. The -U trick is really only a hack in that direction (since it modifies the compiler and thus the generated byte code). -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From ping at lfw.org Mon Feb 5 20:52:50 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Mon, 5 Feb 2001 11:52:50 -0800 (PST) Subject: [Python-Dev] re: Sets BOF / for in dict In-Reply-To: <3A7F02EF.9119F46C@lemburg.com> Message-ID: 
                              
                              On Mon, 5 Feb 2001, M.-A. Lemburg wrote: > Slices and dictionary enclose the two parts in brackets -- this > places the colon into a visible context. for ... in ... : does > not provide much of a context. For crying out loud! '\':' requires that you tokenize the string before you know that the colon is part of the string. Triple-quotes force you to tokenize carefully too. There is *nothing* that this stay-away-from-colons argument buys you. For a human skimming over source code -- i repeat, the visual hint is "colon at the END of a line". > > Because there's no good answer for "what does iterator() return?" > > in this design. (Trust me; i did think this through carefully.) > > Try it. How would you implement the iterator() method? > > The .iterator() method would have to return an object which > provides an iterator API (at C level to get the best performance). Okay, provide an example. Write this iterator() method in Python. Now answer: how does 'for' know whether the thing to the right of 'in' is an iterator or a sequence? > For dictionaries, this object could carry the needed state > (current position in the dictionary table) and use the PyDict_Next() > for the internals. Matrices would have to carry along more state > (one integer per dimension) and could access the internal > matrix representation directly using C functions. This is already exactly what the PEP proposes for the implementation of sq_iter. > This would give us: speed, flexibility and extensibility > which the syntax hacks cannot provide; The PEP is not just about syntax hacks. It's an iterator protocol. It's clear that you haven't read it. *PLEASE* read the PEP before continuing to discuss it. I quote: | Rationale | | If all the parts of the proposal are included, this addresses many | concerns in a consistent and flexible fashion. Among its chief | virtues are the following three -- no, four -- no, five -- points: | | 1. It provides an extensible iterator interface. | | 2. It resolves the endless "i indexing sequence" debate. | | 3. It allows performance enhancements to dictionary iteration. | | 4. It allows one to provide an interface for just iteration | without pretending to provide random access to elements. | | 5. It is backward-compatible with all existing user-defined | classes and extension objects that emulate sequences and | mappings, even mappings that only implement a subset of | {__getitem__, keys, values, items}. I can take out the Monty Python jokes if you want. I can add more jokes if that will make you read it. Just read it, i beg you. > e.g. how would you > specify to iterate backwards over a sequence using that notation > or diagonal for a matrix ? No differently from what you are suggesting, at the surface: for item in sequence.backwards(): for item in matrix.diagonal(): The difference is that the thing on the right of 'in' is always considered a sequence-like object. There is no ambiguity and no magic rule for deciding when it's a sequence and when it's an iterator. -- ?!ng "There's no point in being grown up if you can't be childish sometimes." -- Dr. Who From barry at digicool.com Mon Feb 5 21:07:12 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Mon, 5 Feb 2001 15:07:12 -0500 Subject: [Python-Dev] Heads up on library reorganization References: <20010205134230.A25426@thyrsus.com> Message-ID: <14975.2032.104397.905163@anthem.wooz.org> >>>>> "ESR" == Eric S Raymond 
                              writes: ESR> Barry, this is why I have not submitted the ternary-select ESR> PEP yet. The library reorg is more important and should get ESR> done first. No problem, and agreed. Whenever you're ready with a PEP, just send me a draft and I'll give you a number. -Barry From guido at digicool.com Mon Feb 5 21:22:27 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 05 Feb 2001 15:22:27 -0500 Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: Your message of "Mon, 05 Feb 2001 11:26:53 PST." 
References:
                              Message-ID: <200102052022.PAA05449@cj20424-a.reston1.va.home.com> [GVW] > > (c) for x in someDict: > > > > did. They all said, "Iterates through the _values_ in the dict", > > by analogy with (a). [Ping] > The PEP explicitly proposes that (c) be an error, because i > anticipated and specifically wanted to avoid this ambiguity. > Have you had a good look at it? > > I think your survey shows that the PEP made the right choices. > That is, it supports the position that if 'for key:value' is > supported, then 'for key:' and 'for :value' should be supported, > but 'for x in dict:' should not. It also shows that 'for index:' > should be supported on sequences, which the PEP suggests. But then we should review the wisdom of using "if x in dict" as a shortcut for "if dict.has_key(x)" again. Everything is tied together! --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Mon Feb 5 21:24:19 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 05 Feb 2001 15:24:19 -0500 Subject: [Python-Dev] Type/class differences (Re: Sets: elt in dict, lst.include) In-Reply-To: Your message of "Mon, 05 Feb 2001 11:04:22 PST." <20010205110422.A5893@glacier.fnational.com> References: <200102012245.LAA03402@s454.cosc.canterbury.ac.nz> <200102050447.XAA28915@cj20424-a.reston1.va.home.com> <20010205070222.A5287@glacier.fnational.com> <200102051837.NAA00833@cj20424-a.reston1.va.home.com> <20010205110422.A5893@glacier.fnational.com> Message-ID: <200102052024.PAA05474@cj20424-a.reston1.va.home.com> > On Mon, Feb 05, 2001 at 01:37:39PM -0500, Guido van Rossum wrote: > > Now, can you do things like this: > [example cut] > > No, it would have to be written like this: > > >>> from types import * > >>> class MyInt(IntType): # add a method > def add1(self): return self.value+1 > > >>> i = MyInt(10) > >>> i.add1() > 11 > >>> > > Note the value attribute. The IntType.__init__ method is > basicly: > > def __init__(self, value): > self.value = value So, "class MyInt(IntType)" acts as a sort-of automagical "UserInt" class creation? (Analogous to UserList etc.) I'm not sure I like that. Why do we have to have this? --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Mon Feb 5 21:29:43 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 5 Feb 2001 15:29:43 -0500 Subject: [Python-Dev] Heads up on library reorganization In-Reply-To: <20010205134230.A25426@thyrsus.com> Message-ID: 
                              
                              [Eric S. Raymond] > ... > Guido, of course, is still up to his ears in post-LWE mail > and work cleanup. Bad news, but temporary news: The PythonLabs group (incl. Guido) is going to be severely out of touch for the rest of this week, starting at varying times today. So we'll have another giant pile of email to deal with over the weekend, on top of the giant pile left unanswered during the release crunch. (Ping, I'm not ignoring your PEP, I simply haven't gotten to it yet! looks like I won't this week either) So if anyone has been waiting for a chance to pull off a hostile takeover of Python, this is the week! carpe-diem-ly y'rs - tim From nas at arctrix.com Mon Feb 5 21:48:10 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Mon, 5 Feb 2001 12:48:10 -0800 Subject: [Python-Dev] Type/class differences (Re: Sets: elt in dict, lst.include) In-Reply-To: <200102052024.PAA05474@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Feb 05, 2001 at 03:24:19PM -0500 References: <200102012245.LAA03402@s454.cosc.canterbury.ac.nz> <200102050447.XAA28915@cj20424-a.reston1.va.home.com> <20010205070222.A5287@glacier.fnational.com> <200102051837.NAA00833@cj20424-a.reston1.va.home.com> <20010205110422.A5893@glacier.fnational.com> <200102052024.PAA05474@cj20424-a.reston1.va.home.com> Message-ID: <20010205124810.A6285@glacier.fnational.com> On Mon, Feb 05, 2001 at 03:24:19PM -0500, Guido van Rossum wrote: > So, "class MyInt(IntType)" acts as a sort-of automagical "UserInt" > class creation? (Analogous to UserList etc.) I'm not sure I like > that. Why do we have to have this? The problem is where to store the information in the PyIntObject structure. I don't think my solution is great either. Neil From skip at mojam.com Mon Feb 5 21:51:48 2001 From: skip at mojam.com (Skip Montanaro) Date: Mon, 5 Feb 2001 14:51:48 -0600 (CST) Subject: [Python-Dev] creating __all__ in extension modules In-Reply-To: <14973.33483.956785.985303@cj42289-a.reston1.va.home.com> References: <14970.60750.570192.452062@beluga.mojam.com> 
<14972.33928.540016.339352@cj42289-a.reston1.va.home.com>
<14972.36408.800070.656541@beluga.mojam.com>
<14973.33483.956785.985303@cj42289-a.reston1.va.home.com>
Message-ID: <14975.4708.165467.565852@beluga.mojam.com>

I retract my suggested C code for building __all__ lists.  I'm using
Fred's code instead.

Skip
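The C code itself isn't reproduced here, but the effect under
discussion -- an __all__ list holding just a module's public names --
can be sketched in pure Python (illustrative only; this is neither
Skip's retracted code nor Fred's):

    import types

    def public_names(module):
        # Names a star-import should pick up: everything in the module's
        # dict that doesn't start with an underscore and isn't a module.
        names = []
        for name, value in module.__dict__.items():
            if name[:1] != "_" and type(value) is not types.ModuleType:
                names.append(name)
        return names

    # e.g.:  some_module.__all__ = public_names(some_module)
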
From skip at mojam.com  Mon Feb 5 21:55:41 2001
From: skip at mojam.com (Skip Montanaro)
Date: Mon, 5 Feb 2001 14:55:41 -0600 (CST)
Subject: [Python-Dev] PEP announcements, and summaries
In-Reply-To:
References:
                              Message-ID: <14975.4941.974720.155034@beluga.mojam.com> Andrew> Is anything much lost if the summaries cease? Like Eric said, probably not. Still, before tossing them you might post a note to c.l.py.a that is essentially what you wrote and warn that if people don't chime in with some valid feedback, they will stop. Skip From gvwilson at ca.baltimore.com Mon Feb 5 21:57:05 2001 From: gvwilson at ca.baltimore.com (Greg Wilson) Date: Mon, 5 Feb 2001 15:57:05 -0500 Subject: [Python-Dev] re: for/iter poll In-Reply-To: <20010205192428.5872BE75D@mail.python.org> Message-ID: <002801c08fb6$321d3a50$770a0a0a@nevex.com> I am teaching Python at the Space Telescope Science Institute on Thurs/Fri this week (Feb 8-9). There will be 20+ students in attendance, most of whom will never have seen Python before (although all have previous programming experience). This is a great opportunity to field-test new syntax for iteration, membership tests, and the like, if interested parties can help me put together questions. I have set up a mailing list at: http://groups.yahoo.com/group/python-iter to handle this discussion (since putting together a questionnaire doesn't belong on python-dev). Please join up and send suggestions; we've got the rest of today, Tuesday, and Wednesday morning... Thanks, Greg From fredrik at pythonware.com Mon Feb 5 22:02:42 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Mon, 5 Feb 2001 22:02:42 +0100 Subject: [Python-Dev] re: for in dict (user expectation poll) References: 
<200102052022.PAA05449@cj20424-a.reston1.va.home.com>
Message-ID: <042701c08fb6$fd382970$e46940d5@hagrid>

> But then we should review the wisdom of using "if x in dict" as a
> shortcut for "if dict.has_key(x)" again.  Everything is tied together!

yeah, don't forget unpacking assignments:

    assert len(dict) == 3
    { k1:v1, k2:v2, k3:v3 } = dict

Cheers /F

From tim.one at home.com  Mon Feb 5 22:01:49 2001
From: tim.one at home.com (Tim Peters)
Date: Mon, 5 Feb 2001 16:01:49 -0500
Subject: [Python-Dev] Case sensitive import.
In-Reply-To: <20010205122721.J812@dothill.com>
Message-ID:
                              
                              [Jason Tishler] > Sorry, but I don't agree with your assessment that Cygwin's treatment > of case is "seemingly random." IMO, Cygwin behaves appropriately > regarding case for a case-insensitive, but case-preserving file system. Sorry, you can't disagree with that 
                              : i.e., you can disagree that Cygwin *is* inconsistent, but you can't tell me it didn't *appear* inconsistent to me the first time I played with it. The latter is just a fact. It doesn't mean it *is* inconsistent. First impressions are what they are. The heart of the question for Python is related, though: you say Cygwin behaves appropriately. Fine. If I "cat FiLe", it will cat a file named "file" or "FILE" or "filE" etc. But at the same time, you want Python to *ignore* "filE.py" when Python does "import FiLe". The behavior you want from Python is then inconsistent with what Cygwin does elsewhere. So if Cygwin's behavior is "appropriate" for the filesystem, then what you want Python to do must be "inappropriate" for the filesystem. That's what I want too, but it *is* inappropriate for the filesystem, and I want to be clear about that. Basic sanity requires that Python do the same thing on *all* case-insensitive case-preserving filesystems, to the fullest extent possible. Python's DOS/Windows behavior has priority by a decade. I'm deadly opposed to making a special wart for Cygwin (or the Mac), but am in favor of changing it on Windows too. >> We can't do the reverse. That would lead to explainable rules >> and maximal portability. > Sorry but I don't grok the above. Tim, can you try again? "That" referred to the sentence before the first one you quoted, although it takes psychic powers to intuit that. That is, in the interest of maximal portability, explainability and predictability, import can make case-insensitive filesystems act as if they were case-sensitive, but it's much harder ("we can't") to make C-S systems act C-I. From tim.one at home.com Mon Feb 5 22:07:15 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 5 Feb 2001 16:07:15 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: <028701c08f9e$e65886e0$e46940d5@hagrid> Message-ID: 
                              
[Fredrik Lundh]
> umm.  what kind of imports are not case-sensitive today?

fredrik.py and Fredrik.py, both on the path.  On Windows it does or
doesn't blow up, depending on which one you import and which one is
found first on the path.  On Unix it always works.  Imports on Windows
aren't so much case-sensitive as casenannying.
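The "Case mismatch" errors shown earlier in the thread boil down to
comparing the requested module name with the directory entry that
actually matched.  A rough Python-level sketch of that idea -- the
helper function is hypothetical, not the code in import.c:

    import os, string

    def find_module_file(name, directory):
        # Search case-insensitively, but refuse the match unless the
        # on-disk spelling is identical -- i.e. make a case-preserving
        # filesystem behave as if it were case-sensitive.
        wanted = name + ".py"
        for entry in os.listdir(directory):
            if string.lower(entry) == string.lower(wanted):
                if entry != wanted:
                    raise ImportError, "case mismatch for module " + name
                return os.path.join(directory, entry)
        return None
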
                              
                              . From tim.one at home.com Mon Feb 5 22:11:32 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 5 Feb 2001 16:11:32 -0500 Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: <042701c08fb6$fd382970$e46940d5@hagrid> Message-ID: 
                              
                              [/F] > yeah, don't forget unpacking assignments: > > assert len(dict) == 3 > { k1:v1, k2:v2, k3:v3 } = dict Yuck. I'm going to suppress that. but-thanks-for-pointing-it-out-ly y'rs - tim From skip at mojam.com Mon Feb 5 22:22:21 2001 From: skip at mojam.com (Skip Montanaro) Date: Mon, 5 Feb 2001 15:22:21 -0600 (CST) Subject: [Python-Dev] PEPS, version control, release intervals Message-ID: <14975.6541.43230.433954@beluga.mojam.com> One thing that I think probably perturbs people is that there is no dot release of Python that is explicitly just a bug fix release. I rather like the odd-even versioning that the Linux kernel community uses where odd minor version numbers are development versions and even minor versions are stable versions. That way, if you're using the 2.2.15 kernel and 2.2.16 comes out you know it only contains bug fixes. On the other hand, when 2.3.1 is released, you know it's a development release. I'm not up on Linux kernel release timeframes, but the development kernels are publically available for quite awhile and receive a good deal of knocking around before being "pronounced" by the Linux BDFL and turned into a stable release. I don't see that currently happening in the Python community. I realize this would complicate maintenance of the Python CVS tree, but I think it may be necessary to give people a longer term sense of stability. Python 1.5.2 was released 4/13/99 and Python 2.0 on 10/16/00 (about 18 months between releases?). 2.1a1 came out 1/18/01 followed by 2.1a2 on 2/1/01 (all dates are from a cvs log of the toplevel README file). The 2.0 release did make some significant changes which have caused people some heartburn. To release 2.1 on 4/1/01 as PEP 226 suggests it will be with more language changes that could cause problems for existing code (weak refs and nested scopes get mentioned all the time) seems a bit fast, especially since the status of two relevant PEPs are "incomplete" and "draft", respectively. The relatively fast cycle time between creation of a PEP and incorporation of the feature into the language, plus the fact that the PEP concept is still relatively new to the Python community (are significant PEP changes announced to the newsgroups?), may be a strong contributing factor to the relatively small amount of feedback they receive and the relatively vocal response the corresponding language changes receive. Skip From sdm7g at virginia.edu Mon Feb 5 22:29:58 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Mon, 5 Feb 2001 16:29:58 -0500 (EST) Subject: [Python-Dev] Case sensitive import. In-Reply-To: 
                              
                              Message-ID: 
                              
                              On Mon, 5 Feb 2001, Tim Peters wrote: > [Fredrik Lundh] > > umm. what kind of imports are not case-sensitive today? > > fredrik.py and Fredrik.py, both on the path. On Windows it does or doesn't > blow up, depending on which one you import and which one is found first on > the path. On Unix it always works. On Unix it always works ... depending on the filesystem. ;-) > Imports on Windows aren't so much > case-sensitive as casenannying 
                              
                              . > From akuchlin at cnri.reston.va.us Mon Feb 5 22:45:57 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Mon, 5 Feb 2001 16:45:57 -0500 Subject: [Python-Dev] PEPS, version control, release intervals In-Reply-To: <14975.6541.43230.433954@beluga.mojam.com>; from skip@mojam.com on Mon, Feb 05, 2001 at 03:22:21PM -0600 References: <14975.6541.43230.433954@beluga.mojam.com> Message-ID: <20010205164557.B990@thrak.cnri.reston.va.us> On Mon, Feb 05, 2001 at 03:22:21PM -0600, Skip Montanaro wrote: >heartburn. To release 2.1 on 4/1/01 as PEP 226 suggests it will be with >more language changes that could cause problems for existing code (weak refs >and nested scopes get mentioned all the time) seems a bit fast, especially >since the status of two relevant PEPs are "incomplete" and "draft", >respectively. Note that making new releases come out more quickly was one of GvR's goals. With frequent releases, much of the motivation for a Linux-style development/production split goes away; new Linux kernels take about 2 years to appear, and in that time people still need to get driver fixes, security updates, and so forth. There seem far fewer things worth fixing in a Python 2.0.1; the wiki contains one critical patch and 5 misc. ones. A more critical issue might be why people haven't adopted 2.0 yet; there seems little reason is there to continue using 1.5.2, yet I still see questions on the XML-SIG, for example, from people who haven't upgraded. Is it that Zope doesn't support it? Or that Red Hat and Debian don't include it? This needs fixing, or else we'll wind up with a community scattered among lots of different versions. (I hope someone is going to include all these issues in the agenda for "Collaborative Devel. Issues" on Developers' Day! They're certainly worth discussing...) --amk From jeremy at alum.mit.edu Mon Feb 5 22:53:00 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Mon, 5 Feb 2001 16:53:00 -0500 (EST) Subject: [Python-Dev] PEPS, version control, release intervals In-Reply-To: <20010205164557.B990@thrak.cnri.reston.va.us> References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> Message-ID: <14975.8380.909630.483471@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "AMK" == Andrew Kuchling 
                              
                              writes: AMK> On Mon, Feb 05, 2001 at 03:22:21PM -0600, Skip Montanaro wrote: >> heartburn. To release 2.1 on 4/1/01 as PEP 226 suggests it will >> be with more language changes that could cause problems for >> existing code (weak refs and nested scopes get mentioned all the >> time) seems a bit fast, especially since the status of two >> relevant PEPs are "incomplete" and "draft", respectively. AMK> Note that making new releases come out more quickly was one of AMK> GvR's goals. With frequent releases, much of the motivation AMK> for a Linux-style development/production split goes away; new AMK> Linux kernels take about 2 years to appear, and in that time AMK> people still need to get driver fixes, security updates, and so AMK> forth. There seem far fewer things worth fixing in a Python AMK> 2.0.1; the wiki contains one critical patch and 5 misc. ones. AMK> A more critical issue might be why people haven't adopted 2.0 AMK> yet; there seems little reason is there to continue using AMK> 1.5.2, yet I still see questions on the XML-SIG, for example, AMK> from people who haven't upgraded. Is it that Zope doesn't AMK> support it? Or that Red Hat and Debian don't include it? This AMK> needs fixing, or else we'll wind up with a community scattered AMK> among lots of different versions. AMK> (I hope someone is going to include all these issues in the AMK> agenda for "Collaborative Devel. Issues" on Developers' Day! AMK> They're certainly worth discussing...) What is the agenda for this session on Developers' Day? Since we're the developers, it would be cool to know in advance. Same question for the Py3K session. It seems to be the right time for figuring out what we need to discuss at DD. Jeremy From akuchlin at cnri.reston.va.us Mon Feb 5 23:01:06 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Mon, 5 Feb 2001 17:01:06 -0500 Subject: [Python-Dev] PEPS, version control, release intervals In-Reply-To: <14975.8380.909630.483471@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Mon, Feb 05, 2001 at 04:53:00PM -0500 References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <14975.8380.909630.483471@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010205170106.D990@thrak.cnri.reston.va.us> On Mon, Feb 05, 2001 at 04:53:00PM -0500, Jeremy Hylton wrote: >What is the agenda for this session on Developers' Day? Since we're >the developers, it would be cool to know in advance. Does the session still exist? The brochure lists it as session D2-1, but that's now listed as "Reworking Python's Numeric Model". (I think the Catalog session is pretty useless, would happily change it to be the collab. devel. one, and would be willing to run the new session.) >Same question >for the Py3K session. It seems to be the right time for figuring out >what we need to discuss at DD. And I'm also thinking of putting together a "Python 3000 Considered Harmful" anti-presentation for the Py3K session... which is at the same time as the session I'm responsible for. 
                              
                              --amk From esr at thyrsus.com Mon Feb 5 23:03:40 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Mon, 5 Feb 2001 17:03:40 -0500 Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta In-Reply-To: <20010205164557.B990@thrak.cnri.reston.va.us>; from akuchlin@cnri.reston.va.us on Mon, Feb 05, 2001 at 04:45:57PM -0500 References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> Message-ID: <20010205170340.A3101@thyrsus.com> (Copying Michael Tiemann on this, as he can actually get Red Hat to move...) Andrew Kuchling 
                              
: > A more critical issue might be why people haven't adopted 2.0 yet; > there seems little reason is there to continue using 1.5.2, yet I > still see questions on the XML-SIG, for example, from people who > haven't upgraded. Is it that Zope doesn't support it? Or that Red > Hat and Debian don't include it? This needs fixing, or else we'll > wind up with a community scattered among lots of different versions. I've investigated this. The state of the Red Hat 7.1 beta seems to be that it will include both 2.0 and 1.5.2; there are separate python and python2 RPMs. This would be OK, but I don't know which version will be called by "/usr/bin/env python". Now hear this, Michael: *it should be 2.0*. It's OK to have 1.5.2 available as a compatibility hedge, but I haven't actually heard of any compatibility problems in code not specifically designed to probe for them. And there are several excellent reasons to push forward, beginning with garbage collection. Please make sure the default Python in 7.1 is Python 2. Among other things, this will significantly help CML2 adoption, which I know you are interested in. --
                              Eric S. Raymond No kingdom can be secured otherwise than by arming the people. The possession of arms is the distinction between a freeman and a slave. -- "Political Disquisitions", a British republican tract of 1774-1775 From mal at lemburg.com Mon Feb 5 23:07:44 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Mon, 05 Feb 2001 23:07:44 +0100 Subject: [Python-Dev] PEPS, version control, release intervals References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> Message-ID: <3A7F2430.302FF430@lemburg.com> Andrew Kuchling wrote: > > A more critical issue might be why people haven't adopted 2.0 yet; > there seems little reason is there to continue using 1.5.2, yet I > still see questions on the XML-SIG, for example, from people who > haven't upgraded. Is it that Zope doesn't support it? Or that Red > Hat and Debian don't include it? This needs fixing, or else we'll > wind up with a community scattered among lots of different versions. From sdm7g at virginia.edu Mon Feb 5 23:19:02 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Mon, 5 Feb 2001 17:19:02 -0500 (EST) Subject: [Python-Dev] Case sensitive import. In-Reply-To: 
                              
                              Message-ID: 
                              
                              On Sun, 4 Feb 2001, Tim Peters wrote: > Well, MacOSX-on-non-HFS+ *is* Unix, right? So that should take care of > itself (ya, right). I don't understand what Cygwin does; here from a Cygwin > bash shell session: > > tim at fluffy ~ > $ touch abc > > tim at fluffy ~ > $ touch ABC > > tim at fluffy ~ > $ ls > abc > > tim at fluffy ~ > $ wc AbC > 0 0 0 AbC > > tim at fluffy ~ > $ ls A* > ls: A*: No such file or directory > > tim at fluffy ~ > > So best I can tell, they're like Steven: working with a case-insensitive > filesystem but trying to make Python insist that it's not, and what basic > tools there do about case is seemingly random (wc doesn't care, shell > expansion does, touch doesn't, rm doesn't (not shown) -- maybe it's just > shell expansion that's trying to pretend this is Unix? oh ya, shell > expansion and Python import -- *that's* a natural pair 
                              
                              ). > Just for the record, I get exactly the same results on macosx as you did on Cygwin. The logic behind the seemingly random results is, I'm sure, the same logic behind my patch: accessing the file itself is case insensitive; but the directory entry (accessed by shell globbing) is case preserving. -- Steve Majewski From mal at lemburg.com Mon Feb 5 23:36:55 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Mon, 05 Feb 2001 23:36:55 +0100 Subject: [Python-Dev] Iterators (PEP 234) (re: Sets BOF / for in dict) References: 
                              
                              Message-ID: <3A7F2B07.2D0D1460@lemburg.com> Ka-Ping Yee wrote: > > On Mon, 5 Feb 2001, M.-A. Lemburg wrote: > > Slices and dictionary enclose the two parts in brackets -- this > > places the colon into a visible context. for ... in ... : does > > not provide much of a context. > > For crying out loud! '\':' requires that you tokenize the string > before you know that the colon is part of the string. Triple-quotes > force you to tokenize carefully too. There is *nothing* that this > stay-away-from-colons argument buys you. > > For a human skimming over source code -- i repeat, the visual hint > is "colon at the END of a line". Oh well, perhaps you are right and we should call things like key:value association and be done with it. > > > Because there's no good answer for "what does iterator() return?" > > > in this design. (Trust me; i did think this through carefully.) > > > Try it. How would you implement the iterator() method? > > > > The .iterator() method would have to return an object which > > provides an iterator API (at C level to get the best performance). > > Okay, provide an example. Write this iterator() method in Python. > Now answer: how does 'for' know whether the thing to the right of > 'in' is an iterator or a sequence? Simple: have the for-loop test for a type slot and have it fallback to __getitem__ in case it doesn't find the slot API. > > For dictionaries, this object could carry the needed state > > (current position in the dictionary table) and use the PyDict_Next() > > for the internals. Matrices would have to carry along more state > > (one integer per dimension) and could access the internal > > matrix representation directly using C functions. > > This is already exactly what the PEP proposes for the implementation > of sq_iter. Sorry, Ping, I didn't know you have a PEP for iterators already. ...reading it... > > This would give us: speed, flexibility and extensibility > > which the syntax hacks cannot provide; > > The PEP is not just about syntax hacks. It's an iterator protocol. > It's clear that you haven't read it. > > *PLEASE* read the PEP before continuing to discuss it. I quote: > > | Rationale > | > | If all the parts of the proposal are included, this addresses many > | concerns in a consistent and flexible fashion. Among its chief > | virtues are the following three -- no, four -- no, five -- points: > | > | 1. It provides an extensible iterator interface. > | > | 2. It resolves the endless "i indexing sequence" debate. > | > | 3. It allows performance enhancements to dictionary iteration. > | > | 4. It allows one to provide an interface for just iteration > | without pretending to provide random access to elements. > | > | 5. It is backward-compatible with all existing user-defined > | classes and extension objects that emulate sequences and > | mappings, even mappings that only implement a subset of > | {__getitem__, keys, values, items}. > > I can take out the Monty Python jokes if you want. I can add more > jokes if that will make you read it. Just read it, i beg you. Done. Didn't know it exists, though (why isn't the PEP# in the subject line ?). Even after reading it, I still don't get the idea behind adding "Mapping Iterators" and "Sequence Iterators" when both of these are only special implementations of the single "Iterator" interface. Since the object can have multiple methods to construct iterators, all you need is *one* iterator API. 
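As a concrete illustration of that single API (a sketch only -- this is not code from any actual patch or PEP, and the DictKeyIterator class and the "next" method name are invented for the example): an iterator here is just an object with one method that returns the next element and raises IndexError when the iteration is exhausted.

    class DictKeyIterator:
        "Hypothetical iterator over a dictionary's keys."
        def __init__(self, d):
            self.keys = d.keys()        # snapshot of the keys
            self.pos = 0
        def next(self):                 # the single iterator method
            if self.pos >= len(self.keys):
                raise IndexError        # same termination signal as __getitem__
            key = self.keys[self.pos]
            self.pos = self.pos + 1
            return key

    # A mapping type would then simply grow constructor methods such as
    #     def xkeys(self): return DictKeyIterator(self)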
You don't need a slot which returns an iterator object -- leave that decision to the programmer, e.g. you can have: for key in dict.xkeys(): for value in dict.xvalues(): for items in dict.xitems(): for entry in matrix.xrow(1): for entry in matrix.xcolumn(2): for entry in matrix.xdiag(): for i,element in sequence.xrange(): All of these method calls return special iterators for one specific task and all of them provide a slot which is callable without argument and yields the next element of the iteration. Iteration is terminated by raising an IndexError just like with __getitem__. Since for-loops can check for the type slot, they can use an optimized implementation which avoids the creation of temporary integer objects and leave the state-keeping to the iterator which can usually provide a C based storage for it with much better performance. Note that with this kind of interface, there is no need to add "Mapping Iterators" or "Sequence Iterators" as special cases, since these are easily implemented using the above iterators. > > e.g. how would you > > specify to iterate backwards over a sequence using that notation > > or diagonal for a matrix ? > > No differently from what you are suggesting, at the surface: > > for item in sequence.backwards(): > for item in matrix.diagonal(): > > The difference is that the thing on the right of 'in' is always > considered a sequence-like object. There is no ambiguity and > no magic rule for deciding when it's a sequence and when it's > an iterator. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From skip at mojam.com Mon Feb 5 23:42:04 2001 From: skip at mojam.com (Skip Montanaro) Date: Mon, 5 Feb 2001 16:42:04 -0600 (CST) Subject: [Python-Dev] PEPS, version control, release intervals In-Reply-To: <20010205164557.B990@thrak.cnri.reston.va.us> References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> Message-ID: <14975.11324.787920.766932@beluga.mojam.com> amk> A more critical issue might be why people haven't adopted 2.0 yet; amk> there seems little reason is there to continue using 1.5.2/// For all the messing around I do on the CVS version, I still use 1.5.2 on my web servers precisely because I don't have the time or gumption to "fix" the code that needs to run. That's not just my code, but also the ZServer and DocumentTemplate code from Zope. Skip From skip at mojam.com Mon Feb 5 23:44:19 2001 From: skip at mojam.com (Skip Montanaro) Date: Mon, 5 Feb 2001 16:44:19 -0600 (CST) Subject: [Python-Dev] PEPS, version control, release intervals In-Reply-To: <20010205164557.B990@thrak.cnri.reston.va.us> References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> Message-ID: <14975.11459.976381.345964@beluga.mojam.com> amk> Note that making new releases come out more quickly was one of amk> GvR's goals. With frequent releases, much of the motivation for a amk> Linux-style development/production split goes away; I don't think that's necessarily true. If a new release comes out every six months and always requires you to check for breakage of previously working code, what's the chance you're going to be anxious to upgrade? Pretty low I would think. Skip From tim.one at home.com Tue Feb 6 01:22:20 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 5 Feb 2001 19:22:20 -0500 Subject: [Python-Dev] Funny! Message-ID: 
                              
                              Go to http://www.askjesus.org/ and enter www.python.org in the box. Grail is -- listen to Jesus when he's talking to you -- an extensible Tower of Babel browser writteneth entirely in the interpreted object-oriented programming babel Python. It runs upon Unix, and, to some extent, upon Windows and Macintosh. Grail is with GOD's help extended to support immaculately conceived protocols or file formats. oddly-enough-the-tabnanny-docs-weren't-altered-at-all-ly y'rs - tim From skip at mojam.com Tue Feb 6 01:57:27 2001 From: skip at mojam.com (Skip Montanaro) Date: Mon, 5 Feb 2001 18:57:27 -0600 (CST) Subject: [Python-Dev] test_minidom failing on linux Message-ID: <14975.19447.698806.586210@beluga.mojam.com> test_minidom failed on my linux system just now. I tried another cvs update but no files were updated. Did someone forget to check in a new expected output file? Skip From moshez at zadka.site.co.il Tue Feb 6 02:53:26 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Tue, 6 Feb 2001 03:53:26 +0200 (IST) Subject: [Python-Dev] Alternative to os.system that takes a list of strings? In-Reply-To: <01d001c08f81$ec4d83b0$0900a8c0@SPIFF> References: <01d001c08f81$ec4d83b0$0900a8c0@SPIFF>, <200102051430.JAA17890@w20-575-36.mit.edu> <200102051434.JAA31491@cj20424-a.reston1.va.home.com> Message-ID: <20010206015326.46228A841@darjeeling.zadka.site.co.il> On Mon, 5 Feb 2001, "Fredrik Lundh" 
                              
                              wrote: > > BTW, what do you mean by "upstream"? > > looks like freebsd lingo: the original maintainer of a > piece of software (outside the bsd universe). Also Debian lingo for same. -- Moshe Zadka 
                              
                              This is a signature anti-virus. Please stop the spread of signature viruses! Fingerprint: 4BD1 7705 EEC0 260A 7F21 4817 C7FC A636 46D0 1BD6 From moshez at zadka.site.co.il Tue Feb 6 03:04:05 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Tue, 6 Feb 2001 04:04:05 +0200 (IST) Subject: [Python-Dev] PEP announcements, and summaries In-Reply-To: 
                              
                              References: 
                              
                              Message-ID: <20010206020405.58D03A840@darjeeling.zadka.site.co.il> On Mon, 05 Feb 2001, Andrew Kuchling 
                              
                              wrote: > One thing about the reaction to the 2.1 alphas is that many people > seem *surprised* by some of the changes, even though PEPs have been > written, discussed, and mentioned in python-dev summaries. Maybe the > PEPs and their status need to be given higher visibility; I'd suggest > sending a brief note of status changes (new draft PEPs, acceptance, > rejection) to comp.lang.python.announce. I'm +1 on that. c.l.p.a isn't really a high-traffic group, and this would add negligible traffic in any case. Probably more important then stuff I approve daily. > Also, I'm wondering if it's worth continuing the python-dev summaries, > because, while they get a bunch of hits on news sites such as Linux > Today and may be good PR, I'm not sure that they actually help Python > development. They're supposed to let people offer timely comments on > python-dev discussions while it's still early enough to do some good, > but that doesn't seem to happen; I don't see python-dev postings that > began with something like "The last summary mentioned you were talking > about X. I use X a lot, and here's what I think: ...". Is anything > much lost if the summaries cease? One note: if you're asking for lack of time, I can help: I'm doing the Python-URL! summaries for a few weeks now, and I've gotten some practice. FWIW, I think they are excellent. Maybe crosspost to c.l.py too, so it can get discussed on the group more easily? -- Moshe Zadka 
                              
                              This is a signature anti-virus. Please stop the spread of signature viruses! Fingerprint: 4BD1 7705 EEC0 260A 7F21 4817 C7FC A636 46D0 1BD6 From moshez at zadka.site.co.il Tue Feb 6 03:11:20 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Tue, 6 Feb 2001 04:11:20 +0200 (IST) Subject: [Python-Dev] PEP announcements, and summaries In-Reply-To: <20010205141139.K733@thrak.cnri.reston.va.us> References: <20010205141139.K733@thrak.cnri.reston.va.us>, 
                              
                              <3A7EF1A0.EDA4AD24@lemburg.com> Message-ID: <20010206021120.66A16A840@darjeeling.zadka.site.co.il> On Mon, 5 Feb 2001, Andrew Kuchling 
                              
                              wrote: > * Try doing some PR for 2.1. OK, no one is going to enjoy hearing this, and I know this has been hashed to death, but the major stumbling block for PR for 2.0 was GPL-compat. I know everyone is doing their best to resolve this problem, and my heart felt thanks to them for doing this thankless job. Mostly, PR for 2.1 consists of writing our code using the 2.1 wonderful constructs (os.spawnv, for example, which is now x-p). I know I'd do that more easily if I knew 'apt-get install python' would let people use my code. -- Moshe Zadka 
                              
                              This is a signature anti-virus. Please stop the spread of signature viruses! Fingerprint: 4BD1 7705 EEC0 260A 7F21 4817 C7FC A636 46D0 1BD6 From tim.one at home.com Tue Feb 6 03:26:26 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 5 Feb 2001 21:26:26 -0500 Subject: [Python-Dev] PEPS, version control, release intervals In-Reply-To: <20010205170106.D990@thrak.cnri.reston.va.us> Message-ID: 
                              
                              [resending because it never showed up in the Python-Dev archives, & this is my last decent chance to do email this week ] [Jeremy Hylton] > What is the agenda for this session on Developers' Day? Since we're > the developers, it would be cool to know in advance. [Andrew Kuchling] > Does the session still exist? The brochure lists it as session D2-1, > but that's now listed as "Reworking Python's Numeric Model". I think that's right. I "volunteered" to endure numeric complaints, as there are at least a dozen contentious proposals in that area (from rigid 754 support to extensible literal notation for, e.g., users who hate stuffing rationals or gmp numbers or fixed-point decimals in strings; we could fill a whole day without even mentioning what 1/2 does!). Then, since collaborative development ceased being a topic on Python-Dev (been a long time since somebody brought that up here, other than to gripe about the SourceForge bug-du-jour or that Guido *still* doesn't accept every proposal 
                              
                              ), the prospects for having an interesting session on that appeared dim. Maybe that was wrong; otoh, Jeremy just now failed to think of a relevant issue on his own 
                              
                              . > And I'm also thinking of putting together a "Python 3000 Considered > Harmful" anti-presentation for the Py3K session... which is at the > same time as the session I'm responsible for. 
                              
                              Don't tell anyone, but 2.1 *is* Python 3000 -- or as much of it as will be folded in for 2.1 <0.3 wink>. About people not moving to 2.0, the single specific reason I hear most often hinges on presumed lack of GPL compatibility. But then people worried about that *have* a specific reason stopping them. For everyone else, I know sysadmins who still refuse to move up from Perl 4. BTW, we recorded thousands of downloads of 2.0 betas at BeOpen.com, and indeed more than 10,000 of the Windows installer alone. Then their download stats broke. SF's have been broken for a long time. So while we have no idea how many people are downloading now, the idea that people stayed away from 2.0 in droves is wrong. And 2.0-specific examples are common on c.l.py now from lots of people too. only-developers-are-in-a-rush-ly y'rs - tim From fredrik at effbot.org Tue Feb 6 04:58:48 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Tue, 6 Feb 2001 04:58:48 +0100 Subject: [Python-Dev] PEP announcements, and summaries References: 
                              
                              <20010206020405.58D03A840@darjeeling.zadka.site.co.il> Message-ID: <00ce01c08ff1$1f03b1c0$e46940d5@hagrid> moshe wrote: > FWIW, I think they are excellent. agreed. > Maybe crosspost to c.l.py too, so it can get discussed > on the group more easily? +1 Cheers /F From nas at arctrix.com Tue Feb 6 05:56:12 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Mon, 5 Feb 2001 20:56:12 -0800 Subject: [Python-Dev] Setup.local is getting zapped In-Reply-To: <200102032110.QAA13074@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Sat, Feb 03, 2001 at 04:10:56PM -0500 References: <14971.26729.54529.333522@beluga.mojam.com> 
                              
                              <14972.7656.829356.566021@beluga.mojam.com> <20010203092124.A30977@glacier.fnational.com> <200102032040.PAA04977@mercur.uphs.upenn.edu> <00c401c08e23$96b44510$e46940d5@hagrid> <200102032110.QAA13074@cj20424-a.reston1.va.home.com> Message-ID: <20010205205612.A7074@glacier.fnational.com> On Sat, Feb 03, 2001 at 04:10:56PM -0500, Guido van Rossum wrote: > Effbot wrote: > > why not just keep the old behaviour? > Agreed. Unless there's a GNU guideline somewhere. A few points: If typing make does not correctly rebuild the target then I consider it a bug with the makefile. Of course, this excludes things like upgrading the system between compiles. In that case, you should remove the config.cache file and re-run configure. Also, I'm uneasy about the makefile removing things it didn't create. I would be annoyed if I backed up a file using a .bak extension only to realize that "make clean" blew it away. Why does "clean" have to remove this stuff? Perhaps it would be useful if you explain the logic behind the old targets. Here is my rational: clean: Remove object files. They take up a bit of space. It will also force all .c files to be recompiled next time make is run. Remove compiled Python code as well. Maybe the interpreter has changed but the magic has not. clobber: Remove libraries as well. Maybe Setup or setup.py has been changed and I don't want some of the old shared libraries. distclean: Remove everything that might pollute a source distribution. Looking at this again I think the cleaning of configure stuff should be moved to clobber. OTOH, I have no problems with making the clean targets behave similarily to the ones in 2.0 if that's what people want. Neil From paulp at ActiveState.com Tue Feb 6 06:49:56 2001 From: paulp at ActiveState.com (Paul Prescod) Date: Mon, 05 Feb 2001 21:49:56 -0800 Subject: [Python-Dev] Pre-PEP: Python Character Model Message-ID: <3A7F9084.509510B8@ActiveState.com> I went to a very interesting talk about internationalization by Tim Bray, one of the editors of the XML spec and a real expert on i18n. It inspired me to wrestle one more time with the architectural issues in Python that are preventing us from saying that it is a really internationalized language. Those geek cruises aren't just about sun, surf and sand. There's a pretty high level of intellectual give and take also! Email me for more info... Anyhow, we deferred many of these issues (probably out of exhaustion) the last time we talked about it but we cannot and should not do so forever. In particular, I do not think that we should add more features for working with Unicode (e.g. unichr) before thinking through the issues. ----- Abstract Many of the world's written languages have more than 255 characters. Therefore Python is out of date in its insistence that "basic strings" are lists of characters with ordinals between 0 and 255. Python's basic character type must allow at least enough digits for Eastern languages. Problem Description Python's western bias stems from a variety of issues. The first problem is that Python's native character type is an 8-bit character. You can see that it is an 8-bit character by trying to insert a value with an ordinal higher than 255. Python should allow for ordinal numbers up to at least the size of a single Eastern language such as Chinese or Japanese. Whenever a Python file object is "read", it returns one of these lists of 8-byte characters. The standard file object "read" method can never return a list of Chinese or Japanese characters. 
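To make the limitation concrete, this is roughly how it looks at a Python 2.0 prompt today (transcript is illustrative):

    >>> chr(150)          # fits in the 8-bit string type
    '\x96'
    >>> chr(1500)         # does not -- only the separate Unicode type can hold it
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    ValueError: chr() arg not in range(256)
    >>> unichr(1500)
    u'\u05dc'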
This is an unacceptable state of affairs in the 21st century. Goals 1. Python should have a single string type. It should support Eastern characters as well as it does European characters. Operationally speaking: type("") == type(chr(150)) == type(chr(1500)) == type(file.read()) 2. It should be easier and more efficient to encode and decode information being sent to and retrieved from devices. 3. It should remain possible to work with the byte-level representation. This is sometimes useful for performance reasons. Definitions Character Set A character set is a mapping from integers to characters. Note that both integers and characters are abstractions. In other words, a decision to use a particular character set does not in any way mandate a particular implementation or representation for characters. In Python terms, a character set can be thought of as no more or less than a pair of functions: ord() and chr(). ASCII, for instance, is a pair of functions defined only for 0 through 127 and ISO Latin 1 is defined only for 0 through 255. Character sets typically also define a mapping from characters to names of those characters in some natural language (often English) and to a simple graphical representation that native language speakers would recognize. It is not possible to have a concept of "character" without having a character set. After all, characters must be chosen from some repertoire and there must be a mapping from characters to integers (defined by ord). Character Encoding A character encoding is a mechanism for representing characters in terms of bits. Character encodings are only relevant when information is passed from Python to some system that works with the characters in terms of representation rather than abstraction. Just as a Python programmer would not care about the representation of a long integer, they should not care about the representation of a string. Understanding the distinction between an abstract character and its bit level representation is essential to understanding this Python character model. A Python programmer does not need to know or care whether a long integer is represented as two's complement, one's complement or in terms of ASCII digits. Similarly a Python programmer does not need to know or care how characters are represented in memory. We might even change the representation over time to achieve higher performance. Universal Character Set There is only one standardized international character set that allows for mixed-language information. It is called the Universal Character Set and it is logically defined for characters 0 through 2^32 but practically is deployed for characters 0 through 2^16. The Universal Character Set is an international standard in the sense that it is standardized by ISO and has the force of law in international agreements. A popular subset of the Universal Character Set is called Unicode. The most popular subset of Unicode is called the "Unicode Basic Multilingual Plane (Unicode BMP)". The Unicode BMP has space for all of the world's major languages including Chinese, Korean, Japanese and Vietnamese. There are 2^16 characters in the Unicode BMP. The Unicode BMP subset of UCS is becoming a de facto standard on the Web. In any modern browser you can create an HTML or XML document with &#301; and get back a rendered version of Unicode character 301. In other words, Unicode is becoming the de facto character set for the Internet in addition to being the officially mandated character set for international commerce.
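For instance, using the Unicode type that Python 2.0 already provides as a separate type, character 301 referenced above is:

    >>> unichr(301)       # today's spelling; the proposal merges this into chr()
    u'\u012d'
    >>> ord(unichr(301))
    301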
In addition to defining ord() and chr(), Unicode provides a database of information about characters. Each character has an English language name, a classification (letter, number, etc.), a "demonstration" glyph and so forth. The Unicode Controversy Unicode is not entirely uncontroversial. In particular there are Japanese speakers who dislike the way Unicode merges characters from various languages that were considered "the same" by the experts that defined the specification. Nevertheless Unicode is in use as the character set for important Japanese software such as the two most popular word processors, Ichitaro and Microsoft Word. Other programming languages have also moved to use Unicode as the basic character set instead of ASCII or ISO Latin 1. From memory, I believe that this is the case for: Java Perl JavaScript Visual Basic TCL XML is also Unicode based. Note that the difference between all of these languages and Python is that Unicode is the *basic* character type. Even when you type ASCII literals, they are immediately converted to Unicode. It is the author's belief that this "running code" is evidence of Unicode's practical applicability. Arguments against it seem more rooted in theory than in practical problems. On the other hand, this belief is informed by those who have done heavy work with Asian characters and not based on my own direct experience. Python Character Set As discussed before, Python's native character set happens to consist of exactly 256 characters. If we increase the size of Python's character set, no existing code would break and there would be no cost in functionality. Given that Unicode is a standard character set and it is richer than that of Python's, Python should move to that character set. Once Python moves to that character set it will no longer be necessary to have a distinction between "Unicode string" and "regular string." This means that Unicode literals and escape codes can also be merged with ordinary literals and escape codes. unichr can be merged with chr. Character Strings and Byte Arrays Two of the most common constructs in computer science are strings of characters and strings of bytes. A string of bytes can be represented as a string of characters between 0 and 255. Therefore the only reason to have a distinction between Unicode strings and byte strings is for implementation simplicity and performance purposes. This distinction should only be made visible to the average Python programmer in rare circumstances. Advanced Python programmers will sometimes care about true "byte strings". They will sometimes want to build and parse information according to its representation instead of its abstract form. This should be done with byte arrays. It should be possible to read bytes from and write bytes to arrays. It should also be possible to use regular expressions on byte arrays. Character Encodings for I/O Information is typically read from devices such as file systems and network cards one byte at a time. Unicode BMP characters can have values up to 2^16 (or even higher, if you include all of UCS). There is a fundamental disconnect there. Each character cannot be represented as a single byte anymore. To solve this problem, there are several "encodings" for large characters that describe how to represent them as series of bytes. Unfortunately, there is not one, single, dominant encoding.
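A sketch of the point, using nothing beyond the unicode type and codecs that already ship with Python 2.0 (byte values shown in the comments):

    u = unichr(0x3042)        # HIRAGANA LETTER A
    u.encode("utf-8")         # -> '\xe3\x81\x82'  (three bytes)
    u.encode("utf-16-be")     # -> '0B', i.e. '\x30\x42'  (two bytes)
    u.encode("latin-1")       # raises UnicodeError: not representable at all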
There are at least a dozen popular ones including ASCII (which supports only 0-127), ISO Latin 1 (which supports only 0-255), others in the ISO "extended ASCII" family (which support different European scripts), UTF-8 (used heavily in C programs and on Unix), UTF-16 (preferred by Java and Windows), Shift-JIS (preferred in Japan) and so forth. This means that the only safe way to read data from a file into Python strings is to specify the encoding explicitly. Python's current assumption is that each byte translates into a character of the same ordinal. This is only true for "ISO Latin 1". Python should require the user to specify this explicitly instead. Any code that does I/O should be changed to require the user to specify the encoding that the I/O should use. It is the opinion of the author that there should be no default encoding at all. If you want to read ASCII text, you should specify ASCII explicitly. If you want to read ISO Latin 1, you should specify it explicitly. Once data is read into Python objects the original encoding is irrelevant. This is similar to reading an integer from a binary file, an ASCII file or a packed decimal file. The original bits and bytes representation of the integer is disconnected from the abstract representation of the integer object. Proposed I/O API This encoding could be chosen at various levels. In some applications it may make sense to specify the encoding on every read or write as an extra argument to the read and write methods. In most applications it makes more sense to attach that information to the file object as an attribute and have the read and write methods default the encoding to the property value. This attribute value could be initially set as an extra argument to the "open" function. Here is some Python code demonstrating a proposed API: fileobj = fopen("foo", "r", "ASCII") # only accepts values < 128 fileobj2 = fopen("bar", "r", "ISO Latin 1") # byte-values "as is" fileobj3 = fopen("baz", "r", "UTF-8") fileobj2.encoding = "UTF-16" # changed my mind! data = fileobj2.read(1024, "UTF-8" ) # changed my mind again For efficiency, it should also be possible to read raw bytes into a memory buffer without doing any interpretation: moredata = fileobj2.readbytes(1024) This will generate a byte array, not a character string. This is logically equivalent to reading the file as "ISO Latin 1" (which happens to map bytes to characters with the same ordinals) and generating a byte array by copying characters to bytes but it is much more efficient. Python File Encoding It should be possible to create Python files in any of the common encodings that are backwards compatible with ASCII. This includes ASCII itself, all language-specific "extended ASCII" variants (e.g. ISO Latin 1), Shift-JIS and UTF-8 which can actually encode any UCS character value. The precise variant of "super-ASCII" must be declared with a specialized comment that precedes any other lines other than the shebang line if present. It has a syntax like this: #?encoding="UTF-8" #?encoding="ISO-8859-1" ... #?encoding="ISO-8859-9" #?encoding="Shift_JIS" For now, this is the complete list of legal encodings. Others may be added in the future. Python files which use non-ASCII characters without defining an encoding should be immediately deprecated and made illegal in some future version of Python. C APIs The only time representation matters is when data is being moved from Python's internal model to something outside of Python's control or vice versa. 
Reading and writing from a device is a special case discussed above. Sending information from Python to C code is also an issue. Python already has a rule that allows the automatic conversion of characters up to 255 into their C equivalents. Once the Python character type is expanded, characters outside of that range should trigger an exception (just as converting a large long integer to a C int triggers an exception). Some might claim it is inappropriate to presume that the character-for- byte mapping is the correct "encoding" for information passing from Python to C. It is best not to think of it as an encoding. It is merely the most straightforward mapping from a Python type to a C type. In addition to being straightforward, I claim it is the best thing for several reasons: * It is what Python already does with string objects (but not Unicode objects). * Once I/O is handled "properly", (see above) it should be extremely rare to have characters in strings above 128 that mean anything OTHER than character values. Binary data should go into byte arrays. * It preserves the length of the string so that the length C sees is the same as the length Python sees. * It does not require us to make an arbitrary choice of UTF-8 versus UTF-16. * It means that C extensions can be internationalized by switching from C's char type to a wchar_t and switching from the string format code to the Unicode format code. Python's built-in modules should migrate from char to wchar_t (aka Py_UNICODE) over time. That is, more and more functions should support characters greater than 255 over time. Rough Implementation Requirements Combine String and Unicode Types: The StringType and UnicodeType objects should be aliases for the same object. All PyString_* and PyUnicode_* functions should work with objects of this type. Remove Unicode String Literals Ordinary string literals should allow large character escape codes and generate Unicode string objects. Unicode objects should "repr" themselves as Python string objects. Unicode string literals should be deprecated. Generalize C-level Unicode conversion The format string "S" and the PyString_AsString functions should accept Unicode values and convert them to character arrays by converting each value to its equivalent byte-value. Values greater than 255 should generate an exception. New function: fopen fopen should be like Python's current open function except that it should allow and require an encoding parameter. The file objects returned by it should be encoding aware. fopen should be considered a replacement for open. open should eventually be deprecated. Add byte arrays The regular expression library should be generalized to handle byte arrays without converting them to Python strings. This will allow those who need to work with bytes to do so more efficiently. In general, it should be possible to use byte arrays where-ever it is possible to use strings. Byte arrays could be thought of as a special kind of "limited but efficient" string. Arguably we could go so far as to call them "byte strings" and reuse Python's current string implementation. The primary differences would be in their "repr", "type" and literal syntax. In a sense we would have kept the existing distinction between Unicode strings and 8-bit strings but made Unicode the "default" and provided 8-bit strings as an efficient alternative. 
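For what it's worth, a rough approximation of the proposed fopen() can already be written with the codecs module that ships with Python 2.0 (codecs.open() is also what M.-A. Lemburg points to in his reply further down); the file names here are invented:

    import codecs

    f = codecs.open("data.txt", "r", "utf-8")       # decode UTF-8 on read
    text = f.read()                                 # -> a unicode object
    f.close()

    out = codecs.open("data.utf16.txt", "w", "utf-16")
    out.write(text)                                 # re-encode as UTF-16 on write
    out.close()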
Appendix: Using Non-Unicode character sets Let's presume that a linguistics researcher objected to the unification of Han characters in Unicode and wanted to invent a character set that included separate characters for all Chinese, Japanese and Korean character sets. Perhaps they also want to support some non-standard character set like Klingon. Klingon is actually scheduled to become part of Unicode eventually but let's presume it wasn't. This section will demonstrate that this researcher is no worse off under the new system than they were under historical Python. Adopting Unicode as a standard has no down-side for someone in this situation. They have several options under the new system: 1. Ignore Unicode Read in the bytes using the encoding "RAW" which would mean that each byte would be translated into a character between 0 and 255. It would be a synonym for ISO Latin 1. Now you can process the data using exactly the same Python code that you would have used in Python 1.5 through Python 2.0. The only difference is that the in-memory representation of the data MIGHT be less space efficient because Unicode characters MIGHT be implemented internally as 16 or 32 bit integers. This solution is the simplest and easiest to code. 2. Use Byte Arrays As dicussed earlier, a byte array is like a string where the characters are restricted to characters between 0 and 255. The only virtues of byte arrays are that they enforce this rule and they can be implemented in a more memory-efficient manner. According to the proposal, it should be possible to load data into a byte array (or "byte string") using the "readbytes" method. This solution is the most efficient. 3. Use Unicode's Private Use Area (PUA) Unicode is an extensible standard. There are certain character codes reserved for private use between consenting parties. You could map characters like Klingon or certain Korean ideographs into the private use area. Obviously the Unicode character database would not have meaningful information about these characters and rendering systems would not know how to render them. But this situation is no worse than in today's Python. There is no character database for arbitrary character sets and there is no automatic way to render them. One limitation to this issue is that the Private Use Area can only handle so many characters. The BMP PUA can hold thousands and if we step up to "full" Unicode support we have room for hundreds of thousands. This solution gets the maximum benefit from Unicode for the characters that are defined by Unicode without losing the ability to refer to characters outside of Unicode. 4. Use A Higher Level Encoding You could wrap Korean characters in 
                              
                              ...
                               tags. You could describe a characters as \KLINGON-KAHK (i.e. 13 Unicode characters). You could use a special Unicode character as an "escape flag" to say that the next character should be interpreted specially. This solution is the most self-descriptive and extensible. In summary, expanding Python's character type to support Unicode characters does not restrict even the most estoric, Unicode-hostile types of text processing. Therefore there is no basis for objecting to Unicode as some form of restriction. Those who need to use another logial character set have as much ability to do so as they always have. Conclusion Python needs to support international characters. The "ASCII" of internationalized characters is Unicode. Most other languages have moved or are moving their basic character and string types to support Unicode. Python should also. From moshez at zadka.site.co.il Tue Feb 6 09:48:15 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Tue, 6 Feb 2001 10:48:15 +0200 (IST) Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta In-Reply-To: <20010205170340.A3101@thyrsus.com> References: <20010205170340.A3101@thyrsus.com>, <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> Message-ID: <20010206084815.E63E5A840@darjeeling.zadka.site.co.il> On Mon, 5 Feb 2001, "Eric S. Raymond" 
                              
                              wrote: > (Copying Michael Tiemann on this, as he can actually get Red Hat to move...) Copying to debian-python, since it's an important issue there too... > I've investigated this. The state of the Red Hat 7.1 beta seem to be > that it will include both 2.0 and 1.5.2; there are separate python and > python2 RPMs. This would be OK, but I don't know which version will be > called by "/usr/bin/env python". That's how woody works now, and the binaries are called python and python2. Note that they are not managed by the alternatives mechanism -- Joey Hess explained the bad experience perl had with that. I think it's thought of as a temporary issue, and the long-term solution would be to move to Python 2.1. Not sure what all the packages who install in /usr/lib/python1.5 are going to do about it. I'm prepared to adopt htmlgen and python-imaging to convert them if it's needed. -- Moshe Zadka 
                              
                              This is a signature anti-virus. Please stop the spread of signature viruses! Fingerprint: 4BD1 7705 EEC0 260A 7F21 4817 C7FC A636 46D0 1BD6 From ping at lfw.org Tue Feb 6 10:11:31 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Tue, 6 Feb 2001 01:11:31 -0800 (PST) Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: <042701c08fb6$fd382970$e46940d5@hagrid> Message-ID: 
                              
                              On Mon, 5 Feb 2001, Fredrik Lundh wrote: > yeah, don't forget unpacking assignments: > > assert len(dict) == 3 > { k1:v1, k2:v2, k3:v3 } = dict I think this is a total non-issue for the following reasons: 1. Recall the original philosophy behind the list/tuple split. Lists and dicts are usually variable-length homogeneous structures, and therefore it makes sense for them to be mutable. Tuples are usually fixed-length heterogeneous structures, and so it makes sense for them to be immutable and unpackable. 2. In all the Python programs i've ever seen or written, i've never known or expected a dictionary to have a particular fixed length. 3. Since the items come back in random order, there's no point in binding individual ones to individual variables. It's only ever useful to iterate over the key/value pairs. In short, i can't see how anyone would ever want to do this. (Sorry for being the straight man, if you were in fact joking...) -- ?!ng "There's no point in being grown up if you can't be childish sometimes." -- Dr. Who From mal at lemburg.com Tue Feb 6 11:49:00 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Tue, 06 Feb 2001 11:49:00 +0100 Subject: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> Message-ID: <3A7FD69C.1708339C@lemburg.com> [pre-PEP] You have a lot of good points in there (also some inaccuracies) and I agree that Python should move to using Unicode for text data and arrays for binary data. Some things you may be missing though is that Python already has support for a few features you mention, e.g. codecs.open() provide more or less what you have in mind with fopen() and the compiler can already unify Unicode and string literals using the -U command line option. What you don't talk about in the PEP is that Python's stdlib isn't even Unicode aware yet, and whatever unification steps we take, this project will have to preceed it. The problem with making the stdlib Unicode aware is that of deciding which parts deal with text data or binary data -- the code sometimes makes assumptions about the nature of the data and at other times it simply doesn't care. In this light I think you ought to focus Python 3k with your PEP. This will also enable better merging techniques due to the lifting of the type/class difference. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From ping at lfw.org Tue Feb 6 12:04:34 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Tue, 6 Feb 2001 03:04:34 -0800 (PST) Subject: [Python-Dev] Iterators (PEP 234) In-Reply-To: <3A7F2B07.2D0D1460@lemburg.com> Message-ID: 
                              
                              On 5 Feb 2001, M.-A. Lemburg wrote: > > > The .iterator() method would have to return an object which > > > provides an iterator API (at C level to get the best performance). > > > > Okay, provide an example. Write this iterator() method in Python. > > Now answer: how does 'for' know whether the thing to the right of > > 'in' is an iterator or a sequence? > > Simple: have the for-loop test for a type slot and have > it fallback to __getitem__ in case it doesn't find the slot API. For the third time: write an example, please. It will help a lot. > Sorry, Ping, I didn't know you have a PEP for iterators already. I posted it on this very boutique (i mean, mailing list) a week ago and messages have been going back and forth on its thread since then. On 31 Jan 2001, Ka-Ping Yee wrote: | Okay, i have written a draft PEP that tries to combine the | "elt in dict", custom iterator, and "for k:v" issues into a | coherent proposal. Have a look: | | http://www.lfw.org/python/pep-iterators.txt | http://www.lfw.org/python/pep-iterators.html Okay. I apologize for my impatient tone, as it comes from the expectation that anyone would have read the document before trying to discuss it. I am very happy to get *new* information, the discovery of new errors in my thinking, better and interesting arguments; it's just that it's exasperating to see arguments repeated that were already made, or objections raised that were already carefully thought out and addressed. From now on, i'll stop resisting the urge to paste the text of proposals inline (instead of politely posting just URLs) so you won't miss them. > Done. Didn't know it exists, though (why isn't the PEP# > in the subject line ?). It didn't have a number at the time i posted it. Thank you for updating the subject line. > Since the object can have multiple methods to construct > iterators, all you need is *one* iterator API. You don't > need a slot which returns an iterator object -- leave > that decision to the programmer, e.g. you can have: > > for key in dict.xkeys(): > for value in dict.xvalues(): > for items in dict.xitems(): Three points: 1. We have syntactic support for mapping creation and lookup, and syntactic support for mapping iteration should mirror it. 2. IMHO for key:value in dict: is much easier to read and explain than for (key, value) in dict.xitems(): (Greg? Could you test this claim with a survey question?) To the newcomer, the former is easy to understand at a surface level. The latter exposes the implementation (an implementation that is still there in PEP 234, but that the programmer only has to worry about if they are going deeper and writing custom iteration behaviour). This separates the work of learning into two small, digestible pieces. 3. Furthermore, this still doesn't solve the backward-compatibility problem that PEP 234 takes great care to address! If you write your for-loops for (key, value) in dict.xitems(): then you are screwed if you try to replace dict with any kind of user-implemented dictionary-like replacement (since you'd have to go back and implement the xitems() method on everything). If, in order to maintain compatibility with the existing de-facto dictionary interface, you write your for-loops for (key, value) in dict.items(): then now you are screwed if dict is a built-in dictionary, since items() is supposed to construct a list, not an iterator. 
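To make the compatibility point concrete (a sketch; the Config class is invented for illustration):

    import UserDict

    class Config(UserDict.UserDict):
        "A user-defined mapping: has keys()/values()/items(), but no xitems()."
        pass

    cfg = Config({"host": "localhost", "port": 8080})
    for key, value in cfg.items():     # works today (items() builds a full list)
        print key, value
    # for key, value in cfg.xitems():  # would die with an AttributeError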
> for entry in matrix.xrow(1): > for entry in matrix.xcolumn(2): > for entry in matrix.xdiag(): These are fine, since matrices are not core data types with syntactic support or a de-facto emulation protocol. > for i,element in sequence.xrange(): This is just as bad as the xitems() issue above -- probably worse -- since nobody implements xrange() on sequence-like objects, so now you've broken compatibility with all of those. We want this feature to smoothly extend and work with existing objects with a minimum of rewriting, ideally none. PEP 234 achieves this ideal. > Since for-loops can check for the type slot, they can use an > optimized implementation which avoids the creation of > temporary integer objects and leave the state-keeping to the > iterator which can usually provide a C based storage for it with > much better performance. This statement, i believe, is orthogonal to both proposals. > Note that with this kind of interface, there is no need to > add "Mapping Iterators" or "Sequence Iterators" as special > cases, since these are easily implemented using the above > iterators. I think this really just comes down to one key difference between our points of view here. Correct me if you disagree: You seem to be suggesting that we should only consider a protocol for sequences, whereas PEP 234 talks about both sequences and mappings. I argue that consideration for mappings is worthwhile because: 1. Dictionaries are a built-in type with syntactic and core implementation support. 2. Iteration over dictionaries is very common and should be spelled in an easily understood fashion. 3. Both sequence and mapping protocols are formalized in the core (with PySequenceMethods and PyMappingMethods). 4. Both sequence and mapping protocols are documented and used in Python (__getitem__, keys, values, etc.). 5. There are many, many sequence-like and mapping-like objects out there, implemented both in Python and in C, which adhere to these protocols. (There is also the not-insignificant side benefit of finally having a decent way to get the indices while you're iterating over a sequence, which i've wanted fairly often.) -- ?!ng From ping at lfw.org Tue Feb 6 12:32:27 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Tue, 6 Feb 2001 03:32:27 -0800 (PST) Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: <200102052022.PAA05449@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              On Mon, 5 Feb 2001, Guido van Rossum wrote: > > [Ping] > > I think your survey shows that the PEP made the right choices. > > That is, it supports the position that if 'for key:value' is > > supported, then 'for key:' and 'for :value' should be supported, > > but 'for x in dict:' should not. It also shows that 'for index:' > > should be supported on sequences, which the PEP suggests. > > But then we should review the wisdom of using "if x in dict" as a > shortcut for "if dict.has_key(x)" again. Everything is tied together! Okay. Here's the philosophy; i'll describe my thinking more explicitly. Presumably we can all agree that if you ask to iterate over things "in" a sequence, you clearly want the items in the sequence, not their integer indices. You care about the data *you* put in the container. In the case of a list, you care about the items more than these additional integers that got supplied as a result of using an ordered data structure. So the meaning of for thing in sequence: is pretty clear. The meaning of for thing in mapping: is less clear, since both the keys and the values are interesting data to you. If i ask you to "get me all the things in the dictionary", it's not so obvious whether you should get me a list of just the words, just the definitions, or both (probably both, i suppose). But, if i ask you to "find 'aardvark' in the dictionary" or i ask you "is 'aardvark' in the dictionary?" it's completely obvious what i mean. "if key in dict:" makes sense both by this analogy to common use, and by an argument from efficiency (even the most rudimentary understanding of how a dictionary works is enough to see why we look up keys rather than values). In fact, we *call* it a dictionary because it works like a real dictionary: it's designed for data lookup in one direction, from key to value. "if thing in container" is about *finding* something specific. "for thing in container" is about getting everything out. Now, i know this isn't the strongest argument in the world, and i can see the potential objection that the two aren't consistent, but i think it's a very small thing that only has to be explained once, and then is easy to remember and understand. I consider this little difference less of an issue than the hasattr/has_key inconsistency that it will largely replace. We make expectations clear: for item in sequence: continues to mean, "i expect a sequence", exactly as it does now. When not given a sequence, the 'for' loop complains. Nothing could break, as the interpretation of this loop is unchanged. These three forms: for k:v in anycontainer: for k: in anycontainer: for :v in anycontainer: mean: "i am expecting any indexable thing, where ctr[k] = v". As far as the syntax goes, that's all there is to it: for item in sequence: # only on sequences for k:v in anycontainer: # get keys and values on anything for k: in anycontainer: # just keys for :v in anycontainer: # just values -- ?!ng "There's no point in being grown up if you can't be childish sometimes." -- Dr. Who From mal at lemburg.com Tue Feb 6 12:54:50 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Tue, 06 Feb 2001 12:54:50 +0100 Subject: [Python-Dev] Iterators (PEP 234) References: 
                              
                              Message-ID: <3A7FE60A.261CEE6A@lemburg.com> Ka-Ping Yee wrote: > > On 5 Feb 2001, M.-A. Lemburg wrote: > > > > The .iterator() method would have to return an object which > > > > provides an iterator API (at C level to get the best performance). > > > > > > Okay, provide an example. Write this iterator() method in Python. > > > Now answer: how does 'for' know whether the thing to the right of > > > 'in' is an iterator or a sequence? > > > > Simple: have the for-loop test for a type slot and have > > it fallback to __getitem__ in case it doesn't find the slot API. > > For the third time: write an example, please. It will help a lot. Ping, what do you need an example for ? The above sentence says it all: for x in obj: ... This will work as follows: 1. if obj exposes the iteration slot, say tp_nextitem, the for loop will call this slot without argument and assign the returned object to x 2. if obj does not expose tp_nextitem, then the for loop will construct an integer starting at 0 and pass this to the sq_item slot or __getitem__ method and assign the returned value to x; the integer is then replaced with an incremented integer 3. both techniques work until the slot or method in question returns an IndexError exception The current implementation doesn't have 1. This is the only addition it takes to get iterators to work together well with the for-loop -- there are no backward compatibility issues here, because the tp_nextitem slot will be a new one. Since the for-loop can avoid creating temporary integers, iterations will generally run a lot faster than before. Also, iterators have access to the object's internal representation, so data access is also faster. > > Sorry, Ping, I didn't know you have a PEP for iterators already. > > I posted it on this very boutique (i mean, mailing list) a week ago > and messages have been going back and forth on its thread since then. > > On 31 Jan 2001, Ka-Ping Yee wrote: > | Okay, i have written a draft PEP that tries to combine the > | "elt in dict", custom iterator, and "for k:v" issues into a > | coherent proposal. Have a look: > | > | http://www.lfw.org/python/pep-iterators.txt > | http://www.lfw.org/python/pep-iterators.html > > Okay. I apologize for my impatient tone, as it comes from the > expectation that anyone would have read the document before trying > to discuss it. I am very happy to get *new* information, the > discovery of new errors in my thinking, better and interesting > arguments; it's just that it's exasperating to see arguments > repeated that were already made, or objections raised that were > already carefully thought out and addressed. From now on, i'll > stop resisting the urge to paste the text of proposals inline > (instead of politely posting just URLs) so you won't miss them. I must have missed those postings... don't have time to read all of python-dev anymore :-( > > Done. Didn't know it exists, though (why isn't the PEP# > > in the subject line ?). > > It didn't have a number at the time i posted it. Thank you > for updating the subject line. > > > Since the object can have multiple methods to construct > > iterators, all you need is *one* iterator API. You don't > > need a slot which returns an iterator object -- leave > > that decision to the programmer, e.g. you can have: > > > > for key in dict.xkeys(): > > for value in dict.xvalues(): > > for items in dict.xitems(): > > Three points: > > 1. 
We have syntactic support for mapping creation and lookup, > and syntactic support for mapping iteration should mirror it. > > 2. IMHO > > for key:value in dict: > > is much easier to read and explain than > > for (key, value) in dict.xitems(): > > (Greg? Could you test this claim with a survey question?) > > To the newcomer, the former is easy to understand at a surface > level. The latter exposes the implementation (an implementation > that is still there in PEP 234, but that the programmer only has > to worry about if they are going deeper and writing custom > iteration behaviour). This separates the work of learning into > two small, digestible pieces. Tuples are well-known basic Python types. Why should (key,value) be any harder to understand than key:value. What would you tell a newbie that writes: for key:value in sequence: .... where sequence is a list of tuples and finds that this doesn't work ? Besides, the items() method has been around for ages, so switching from .items() to .xitems() in programs will be just as easy as switching from range() to xrange(). I am -0 on the key:value thingie. If you want it as a way to construct or split associations, fine. But it is really not necessary to be able to iterate over dictionaries. > 3. Furthermore, this still doesn't solve the backward-compatibility > problem that PEP 234 takes great care to address! If you write > your for-loops > > for (key, value) in dict.xitems(): > > then you are screwed if you try to replace dict with any kind of > user-implemented dictionary-like replacement (since you'd have to > go back and implement the xitems() method on everything). Why is that ? You'd just have to add .xitems() to UserDict and be done with it. This is how we have added new dictionary methods all along. I don't see your point here. Sure, if you want to use a new feature you will have to think about whether it can be used with your data-types. What you are trying to do here is maintain forward compatibility at the cost of making iteration much more complicated than it really is. > If, in order to maintain compatibility with the existing de-facto > dictionary interface, you write your for-loops > > for (key, value) in dict.items(): > > then now you are screwed if dict is a built-in dictionary, since > items() is supposed to construct a list, not an iterator. I'm not breaking backward compatibility -- the above will still work like it has before since lists don't have the tp_nextitem slot. > > for entry in matrix.xrow(1): > > for entry in matrix.xcolumn(2): > > for entry in matrix.xdiag(): > > These are fine, since matrices are not core data types with > syntactic support or a de-facto emulation protocol. > > > for i,element in sequence.xrange(): > > This is just as bad as the xitems() issue above -- probably worse -- > since nobody implements xrange() on sequence-like objects, so now > you've broken compatibility with all of those. > > We want this feature to smoothly extend and work with existing objects > with a minimum of rewriting, ideally none. PEP 234 achieves this ideal. Again, you are trying to achieve forward compatibility. If people want better performance, than they will have to add new functionality to their types -- one way or another. > > Since for-loops can check for the type slot, they can use an > > optimized implementation which avoids the creation of > > temporary integer objects and leave the state-keeping to the > > iterator which can usually provide a C based storage for it with > > much better performance. 
> > This statement, i believe, is orthogonal to both proposals. > > > Note that with this kind of interface, there is no need to > > add "Mapping Iterators" or "Sequence Iterators" as special > > cases, since these are easily implemented using the above > > iterators. > > I think this really just comes down to one key difference > between our points of view here. Correct me if you disagree: > > You seem to be suggesting that we should only consider a > protocol for sequences, whereas PEP 234 talks about both > sequences and mappings. No. I'm suggesting to add a low-level "give me the next item in the bag" and move the "how to get the next item" logic into an iterator object. This will still allow you to iterate over sequences and mappings, so I don't understand why you keep argueing for adding new syntax and slots to be able to iterate over dictionaries. > I argue that consideration for mappings is worthwhile because: > > 1. Dictionaries are a built-in type with syntactic and > core implementation support. > > 2. Iteration over dictionaries is very common and should > be spelled in an easily understood fashion. > > 3. Both sequence and mapping protocols are formalized in > the core (with PySequenceMethods and PyMappingMethods). > > 4. Both sequence and mapping protocols are documented and > used in Python (__getitem__, keys, values, etc.). > > 5. There are many, many sequence-like and mapping-like > objects out there, implemented both in Python and in C, > which adhere to these protocols. > > (There is also the not-insignificant side benefit of finally > having a decent way to get the indices while you're iterating > over a sequence, which i've wanted fairly often.) Agreed. I'd suggest to implement generic iterators which implements your suggestions and put them into the builins or a special iterator module... from iterators import xitems, xkeys, xvalues for key, value in xitems(dict): for key in xkeys(dict): for value in xvalues(dict): Other objects can then still have their own iterators by exposing special methods which construct special iterators. The for-loop will continue to work as always and happily accept __getitem__ compatible or tp_nextitem compatible objects as right-hand argument. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From thomas at xs4all.net Tue Feb 6 13:11:42 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Tue, 6 Feb 2001 13:11:42 +0100 Subject: [Python-Dev] PEP announcements, and summaries In-Reply-To: <200102051937.OAA01402@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Feb 05, 2001 at 02:37:28PM -0500 References: 
                              
<3A7EF1A0.EDA4AD24@lemburg.com> <20010205141139.K733@thrak.cnri.reston.va.us> <200102051937.OAA01402@cj20424-a.reston1.va.home.com> Message-ID: <20010206131142.B9551@xs4all.nl> On Mon, Feb 05, 2001 at 02:37:28PM -0500, Guido van Rossum wrote: > (Hmm, I wonder if we could run this on starship.python.net instead? > That machine probably has more spare cycles.) Hmm.... eggs... basket... spam... ham... Given starship's track record I'd hesitate before running it on that :-) But then, 5 years of system administration has made me a highly superstitious person. I-still-boot-old-SCSI-tape-libraries-with-dead-chickens-in-reach-ly y'rs -- Thomas Wouters 
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From thomas at xs4all.net Tue Feb 6 13:17:31 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Tue, 6 Feb 2001 13:17:31 +0100 Subject: [Python-Dev] PEP announcements, and summaries In-Reply-To: 
                              
                              ; from akuchlin@mems-exchange.org on Mon, Feb 05, 2001 at 12:32:31PM -0500 References: 
                              
Message-ID: <20010206131731.C9551@xs4all.nl> On Mon, Feb 05, 2001 at 12:32:31PM -0500, Andrew Kuchling wrote: > One thing about the reaction to the 2.1 alphas is that many people > seem *surprised* by some of the changes, even though PEPs have been > written, discussed, and mentioned in python-dev summaries. Maybe the > PEPs and their status need to be given higher visibility; I'd suggest > sending a brief note of status changes (new draft PEPs, acceptance, > rejection) to comp.lang.python.announce. Or, (wait, wait) maybe, (don't shoot me) we should change the python-dev construct (nono, wait, wait!) - that is, instead of it being a write-only list with readable archives, have it be a list completely open for subscription, but with post access to a limited number of people (the current subscribers.) I know of at least two people who want to read python-dev, but not by starting up netscape every day. (One of them already tried subscribing to python-dev once ;) Or perhaps just digests, though I don't really see the benefit of that (or of the current approach, really.) It's just much easier to keep up and comment on features if it arrives in your mailbox every day. (Besides, it would prompt Barry to write easy ways to manage such a list of posters, which is slightly lacking in Mailman right now 
                              
                              
                              ) Ok-*now*-you-can-shoot-me-ly y'rs -- Thomas Wouters 
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From ping at lfw.org Tue Feb 6 13:25:58 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Tue, 6 Feb 2001 04:25:58 -0800 (PST) Subject: [Python-Dev] Iterators (PEP 234) In-Reply-To: <3A7FE60A.261CEE6A@lemburg.com> Message-ID: 
                              
                              On Tue, 6 Feb 2001, M.-A. Lemburg wrote: > > For the third time: write an example, please. It will help a lot. > > Ping, what do you need an example for ? The above sentence says > it all: *sigh* I give up. I'm not going to ask again. Real examples are a good idea when considering any proposal. (a) When you do a real example, you usually discover mistakes or things you didn't think of in your design. (b) We can compare it directly to other examples to see how easy or hard it is to write and understand code that uses the new protocol. (c) We can come up with interesting cases in practice to see if there are limitations in any proposal. Now that you have a proposal in slightly more detail, a few missing pieces are evident. How would you implement a *Python* class that supports iteration? For instance, write something that has the effect of the FileLines class in PEP 234. How would you implement an object that can be iterated over more than once, at the same time or at different times? It's not clear to me how the single tp_nextitem slot can handle that. > Since the for-loop can avoid creating temporary integers, > iterations will generally run a lot faster than before. Also, > iterators have access to the object's internal representation, > so data access is also faster. Again, completely orthogonal to both proposals. Regardless of the protocol, if you're implementing the iterator in C, you can use raw integers and internal access to make it fast. > > 2. IMHO > > > > for key:value in dict: > > > > is much easier to read and explain than > > > > for (key, value) in dict.xitems(): [...] > Tuples are well-known basic Python types. Why should > (key,value) be any harder to understand than key:value. It's mainly the business of calling the method and rearranging the data that i'm concerned about. Example 1: dict = {1: 2, 3: 4} for (key, value) in dict.items(): Explanation: The "items" method on the dict converts {1: 2, 3: 4} into a list of 2-tuples, [(1, 2), (3, 4)]. Then (key, value) is matched against each item of this list, and the two parts of each tuple are unpacked. Example 2: dict = {1: 2, 3: 4} for key:value in dict: Explanation: The "for" loop iterates over the key:value pairs in the dictionary, which you can see are 1:2 and 3:4. > What would you tell a newbie that writes: > > for key:value in sequence: > .... > > where sequence is a list of tuples and finds that this doesn't > work ? "key:value doesn't look like a tuple, does it?" > Besides, the items() method has been around for ages, so switching > from .items() to .xitems() in programs will be just as easy as > switching from range() to xrange(). It's not the same. xrange() is a built-in function that you call; xitems() is a method that you have to *implement*. > > for (key, value) in dict.xitems(): > > > > then you are screwed if you try to replace dict with any kind of > > user-implemented dictionary-like replacement (since you'd have to > > go back and implement the xitems() method on everything). > > Why is that ? You'd just have to add .xitems() to UserDict ...and cgi.FieldStorage, and dumbdbm._Database, and rfc822.Message, and shelve.Shelf, and bsddbmodule, and dbmmodule, and gdbmmodule, to name a few. Even if you expect (or force) people to derive all their dictionary-like Python classes from UserDict (which they don't, in practice), you can't derive C objects from UserDict. 
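Stepping back to the example Ping asked for above, a sketch of roughly what a Python-level FileLines might look like under the __nextitem__ idea floated in this thread -- hypothetical code, since no such slot or method exists in Python 2.0:

    class FileLines:
        "Yield the lines of a file one at a time, without reading it all in."
        def __init__(self, filename):
            self.fp = open(filename)
        def __nextitem__(self):         # hypothetical hook behind the proposed tp_nextitem
            line = self.fp.readline()
            if not line:
                raise IndexError        # the proposed end-of-iteration signal
            return line

    # for line in FileLines("access.log"):   # would need the new slot; today's
    #     print line,                        # for-loop only knows __getitem__

Note that the cursor lives on the object itself here, which is exactly the multiple-iteration question raised above.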
> > for (key, value) in dict.items(): > > > > then now you are screwed if dict is a built-in dictionary, since > > items() is supposed to construct a list, not an iterator. > > I'm not breaking backward compatibility -- the above will still > work like it has before since lists don't have the tp_nextitem > slot. What i mean is that Python programmers would no longer know how to write their 'for' loops. Should they use 'xitems', thus dooming their loop never to work with the majority of user-implemented mapping-like objects? Or should they use 'items', thus dooming their loop to run inefficiently on built-in dictionaries? > > We want this feature to smoothly extend and work with existing objects > > with a minimum of rewriting, ideally none. PEP 234 achieves this ideal. > > Again, you are trying to achieve forward compatibility. If people > want better performance, than they will have to add new functionality > to their types -- one way or another. Okay, i agree, it's forward compatibility. But it's something worth going for when you're trying to come up with a protocol. -- ?!ng "There's no point in being grown up if you can't be childish sometimes." -- Dr. Who From thomas at xs4all.net Tue Feb 6 13:44:47 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Tue, 6 Feb 2001 13:44:47 +0100 Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: <002201c08fa9$079a1f80$770a0a0a@nevex.com>; from gvwilson@ca.baltimore.com on Mon, Feb 05, 2001 at 02:22:50PM -0500 References: 
                              
                              <002201c08fa9$079a1f80$770a0a0a@nevex.com> Message-ID: <20010206134447.D9551@xs4all.nl> On Mon, Feb 05, 2001 at 02:22:50PM -0500, Greg Wilson wrote: > OK, now here's the hard one. Clearly, Noshit. I ran into all of this while trying to figure out how to quick-hack implement it. My brain exploded while trying to grasp all implications, which is why I've been quiet on this issue -- I'm healing ;-P > (a) for i in someList: > has to continue to mean "iterate over the values". We've agreed that: > (b) for k:v in someDict: means "iterate through the items". (a) looks > like a special case of (b). I'm still not sure if I like the special syntax to iterate over dictionaries. Are we talking about iterators, or about special syntax to use said iterators in the niche application of dicts and mapping interfaces ? :) > I therefore asked my colleagues to guess what: > (c) for x in someDict: > did. They all said, "Iterates through the _values_ in the dict", > by analogy with (a). But how baffled were they when it didn't do what they expected it to do ? Did they go, 'oh shit, now what' ? > I then asked, "How do you iterate through the keys in a dict, or > the indices in a list?" They guessed: > (d) for x: in someContainer: Again, how baffled were they when you said it wasn't going to work ? Because (c) and (d) are just very light syntactic powdered sugar substitutes for 'k:v' where you just don't use one of the two. The extra name binding operation isn't going to cost you enough to really worry about, IMHO. -- Thomas Wouters 
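For comparison, the forms being polled, next to what Python 2.0 already offers for each (the colon spellings are proposed syntax only, per the draft PEP):

    d = {"a": 1, "b": 2}
    lst = ["x", "y", "z"]

    for key, value in d.items():        # proposed: for k:v in d:
        print key, value
    for key in d.keys():                # proposed: for k: in d:
        print key
    for value in d.values():            # proposed: for :v in d:
        print value
    for i in range(len(lst)):           # proposed: for i: in lst:
        print i, lst[i]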
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From tismer at tismer.com Tue Feb 6 13:51:37 2001 From: tismer at tismer.com (Christian Tismer) Date: Tue, 06 Feb 2001 13:51:37 +0100 Subject: [Python-Dev] Iterators (PEP 234) References: 
                              
                              <3A7FE60A.261CEE6A@lemburg.com> Message-ID: <3A7FF359.665C184B@tismer.com> "M.-A. Lemburg" wrote: > > Ka-Ping Yee wrote: 
                               > > Three points: > > > > 1. We have syntactic support for mapping creation and lookup, > > and syntactic support for mapping iteration should mirror it. > > > > 2. IMHO > > > > for key:value in dict: > > > > is much easier to read and explain than > > > > for (key, value) in dict.xitems(): > > > > (Greg? Could you test this claim with a survey question?) > > > > To the newcomer, the former is easy to understand at a surface > > level. The latter exposes the implementation (an implementation > > that is still there in PEP 234, but that the programmer only has > > to worry about if they are going deeper and writing custom > > iteration behaviour). This separates the work of learning into > > two small, digestible pieces. > > Tuples are well-known basic Python types. Why should > (key,value) be any harder to understand than key:value. > What would you tell a newbie that writes: > > for key:value in sequence: > .... > > where sequence is a list of tuples and finds that this doesn't > work ? Sorry about sneaking in. I do in fact think that the syntax addition of key:value is easier to understand. Beginners know the { key:value } syntax, so this is just natural. Givin him an error in your above example is a step to clarity, avoiding hard to find errors if somebody has a list of tuples and the above happens to work somehow, although he forgot to use .xitems(). > Besides, the items() method has been around for ages, so switching > from .items() to .xitems() in programs will be just as easy as > switching from range() to xrange(). It has been around for years, but key:value might be better. A little faster for sure since we don't build extra tuples. > I am -0 on the key:value thingie. If you want it as a way to > construct or split associations, fine. But it is really not > necessary to be able to iterate over dictionaries. > > > 3. Furthermore, this still doesn't solve the backward-compatibility > > problem that PEP 234 takes great care to address! If you write > > your for-loops > > > > for (key, value) in dict.xitems(): > > > > then you are screwed if you try to replace dict with any kind of > > user-implemented dictionary-like replacement (since you'd have to > > go back and implement the xitems() method on everything). > > Why is that ? You'd just have to add .xitems() to UserDict and > be done with it. This is how we have added new dictionary methods > all along. I don't see your point here. You really wouldn't stick with UserDict, but implement this on every object for speed. The key:value proposal is not only stronger through its extra syntactical strength, it is also smaller in code-size to implement. Having to force every "iterable" object to support a modified view of it via xitems() even doesn't look elegant to me. It forces key/value pairs to go through tupleization only for syntactical reasons. A weakness, not a strength. Object orientation gets at its limits here. If access to keys and values can be provided by a single implementation for all affected objects without adding new methods, this suggests to me that it is right to do so. +1 on key:value - ciao - chris -- Christian Tismer :^) 
                              
                              Mission Impossible 5oftware : Have a break! Take a ride on Python's Kaunstr. 26 : *Starship* http://starship.python.net 14163 Berlin : PGP key -> http://wwwkeys.pgp.net PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF where do you want to jump today? http://www.stackless.com From gvwilson at ca.baltimore.com Tue Feb 6 14:00:26 2001 From: gvwilson at ca.baltimore.com (Greg Wilson) Date: Tue, 6 Feb 2001 08:00:26 -0500 (EST) Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: 
                              
                              Message-ID: 
                              
                              > > > > On Mon, 5 Feb 2001, Greg Wilson wrote: > > > > Based on my very-informal survey, if: > > > > for i in someList: > > > > works, then many people will assume that: > > > > for i in someDict: > > > > will also work, and yield values. > > > Ka-Ping Yee: > > > ...the latter is ambiguous (keys or values?)... > > Greg Wilson > > The latter is exactly as ambiguous as the former... I think this > > is a case where your (intimate) familiarity with the way Python > > works now is preventing you from getting into newbie headspace... > Ka-Ping Yee: > No, i don't think so. It seems quite possible to argue from first > principles that if you ask to iterate over things "in" a sequence, > you clearly want the items in the sequence, not their integer indices. Greg Wilson: Well, arguing from first principles, Aristotle was able to demonstrate that heavy objects fall faster than light ones :-). I'm basing my claim on the kind of errors students in my course make. Even after being shown half-a-dozen examples of Python for loops, many of them write: for i in someSequence: print someSequence[i] in their first exercise. Thanks, Greg From mal at lemburg.com Tue Feb 6 14:16:22 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Tue, 06 Feb 2001 14:16:22 +0100 Subject: [Python-Dev] Iterators (PEP 234) References: 
                              
                              Message-ID: <3A7FF926.BBFB3E99@lemburg.com> Ka-Ping Yee wrote: > > On Tue, 6 Feb 2001, M.-A. Lemburg wrote: > > > For the third time: write an example, please. It will help a lot. > > > > Ping, what do you need an example for ? The above sentence says > > it all: > > *sigh* I give up. I'm not going to ask again. > > Real examples are a good idea when considering any proposal. > > (a) When you do a real example, you usually discover > mistakes or things you didn't think of in your design. > > (b) We can compare it directly to other examples to see > how easy or hard it is to write and understand code > that uses the new protocol. > > (c) We can come up with interesting cases in practice to > see if there are limitations in any proposal. > > Now that you have a proposal in slightly more detail, a few > missing pieces are evident. > > How would you implement a *Python* class that supports iteration? > For instance, write something that has the effect of the FileLines > class in PEP 234. I was just throwing in ideas, not a complete proposal. If that's what you want I can write up a complete proposal too and maybe even a patch to go with it. Exposing the tp_nextitem slot in Python classes via a __nextitem__ slot wouldn't be much of a problem. What I wanted to get across is the general idea behind my view of an iteration API and I believe that this idea has been made clear: I want a low-level API and move all the complicated object specific details into separate iterator objects. I don't see a point in trying to add complicated machinery to Python just to be able to iterate fast over some of the builtin types by special casing each object type. Let's please not add more special cases to the core. > How would you implement an object that can be iterated over more > than once, at the same time or at different times? It's not clear > to me how the single tp_nextitem slot can handle that. Put all that logic into the iterator objects. These can be as complicated as needed, either trying to work in generic ways, special cased for some builtin types or be specific to a single type. > > Since the for-loop can avoid creating temporary integers, > > iterations will generally run a lot faster than before. Also, > > iterators have access to the object's internal representation, > > so data access is also faster. > > Again, completely orthogonal to both proposals. Regardless of > the protocol, if you're implementing the iterator in C, you can > use raw integers and internal access to make it fast. > > > > 2. IMHO > > > > > > for key:value in dict: > > > > > > is much easier to read and explain than > > > > > > for (key, value) in dict.xitems(): > [...] > > Tuples are well-known basic Python types. Why should > > (key,value) be any harder to understand than key:value. > > It's mainly the business of calling the method and rearranging > the data that i'm concerned about. > > Example 1: > > dict = {1: 2, 3: 4} > for (key, value) in dict.items(): > > Explanation: > > The "items" method on the dict converts {1: 2, 3: 4} into > a list of 2-tuples, [(1, 2), (3, 4)]. Then (key, value) is > matched against each item of this list, and the two parts > of each tuple are unpacked. > > Example 2: > > dict = {1: 2, 3: 4} > for key:value in dict: > > Explanation: > > The "for" loop iterates over the key:value pairs in the > dictionary, which you can see are 1:2 and 3:4. Again, if you prefer the key:value notation, fine. 
This is orthogonal to the iteration API though and really only touches the case of mappings. > > Besides, the items() method has been around for ages, so switching > > from .items() to .xitems() in programs will be just as easy as > > switching from range() to xrange(). > > It's not the same. xrange() is a built-in function that you call; > xitems() is a method that you have to *implement*. You can put all that special logic into special iterators, e.g. a xitems iterator (see the end of my post). > > > for (key, value) in dict.xitems(): > > > > > > then you are screwed if you try to replace dict with any kind of > > > user-implemented dictionary-like replacement (since you'd have to > > > go back and implement the xitems() method on everything). > > > > Why is that ? You'd just have to add .xitems() to UserDict > > ...and cgi.FieldStorage, and dumbdbm._Database, and rfc822.Message, > and shelve.Shelf, and bsddbmodule, and dbmmodule, and gdbmmodule, > to name a few. Even if you expect (or force) people to derive all > their dictionary-like Python classes from UserDict (which they don't, > in practice), you can't derive C objects from UserDict. The same applies to your proposed interface: people will have to write new code in order to be able to use the new technology. I don't see that as a problem, though. > > > for (key, value) in dict.items(): > > > > > > then now you are screwed if dict is a built-in dictionary, since > > > items() is supposed to construct a list, not an iterator. > > > > I'm not breaking backward compatibility -- the above will still > > work like it has before since lists don't have the tp_nextitem > > slot. > > What i mean is that Python programmers would no longer know how to > write their 'for' loops. Should they use 'xitems', thus dooming > their loop never to work with the majority of user-implemented > mapping-like objects? Or should they use 'items', thus dooming > their loop to run inefficiently on built-in dictionaries? Hey, people who care will be aware of this difference. It is very easy to test for interfaces in Python, so detecting the best method (in case it matters) is simple. > > > We want this feature to smoothly extend and work with existing objects > > > with a minimum of rewriting, ideally none. PEP 234 achieves this ideal. > > > > Again, you are trying to achieve forward compatibility. If people > > want better performance, than they will have to add new functionality > > to their types -- one way or another. > > Okay, i agree, it's forward compatibility. But it's something > worth going for when you're trying to come up with a protocol. Sure, but is adding special cases everywhere really worth it ? From mal at lemburg.com Tue Feb 6 14:26:26 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Tue, 06 Feb 2001 14:26:26 +0100 Subject: [Python-Dev] Iterators (PEP 234) References: 
                              
                              <3A7FE60A.261CEE6A@lemburg.com> <3A7FF359.665C184B@tismer.com> Message-ID: <3A7FFB82.30BE0703@lemburg.com> Christian Tismer wrote: > > "M.-A. Lemburg" wrote: > > > > Tuples are well-known basic Python types. Why should > > (key,value) be any harder to understand than key:value. > > What would you tell a newbie that writes: > > > > for key:value in sequence: > > .... > > > > where sequence is a list of tuples and finds that this doesn't > > work ? > > Sorry about sneaking in. I do in fact think that the syntax > addition of key:value is easier to understand. Beginners > know the { key:value } syntax, so this is just natural. > Givin him an error in your above example is a step to clarity, > avoiding hard to find errors if somebody has a list of > tuples and the above happens to work somehow, although he > forgot to use .xitems(). The problem is that key:value in sequence has a meaning under PEP234: key is the current index, value the tuple. > > Besides, the items() method has been around for ages, so switching > > from .items() to .xitems() in programs will be just as easy as > > switching from range() to xrange(). > > It has been around for years, but key:value might be better. > A little faster for sure since we don't build extra tuples. Small tuples are cheap and kept on the free list. I don't even think that key:value can do without them. Anyway, I've already said that I'm -0 on these thingies -- I would be +1 if Ping were to make key:value full flavoured associations (Jim Fulton has written a lot about these some years ago; I think they originated from SmallTalk). > > I am -0 on the key:value thingie. If you want it as a way to > > construct or split associations, fine. But it is really not > > necessary to be able to iterate over dictionaries. > > > > > 3. Furthermore, this still doesn't solve the backward-compatibility > > > problem that PEP 234 takes great care to address! If you write > > > your for-loops > > > > > > for (key, value) in dict.xitems(): > > > > > > then you are screwed if you try to replace dict with any kind of > > > user-implemented dictionary-like replacement (since you'd have to > > > go back and implement the xitems() method on everything). > > > > Why is that ? You'd just have to add .xitems() to UserDict and > > be done with it. This is how we have added new dictionary methods > > all along. I don't see your point here. > > You really wouldn't stick with UserDict, but implement this > on every object for speed. > The key:value proposal is not only stronger through its extra > syntactical strength, it is also smaller in code-size to implement. ...but it's a special case which we don't really need and it *only* works for mappings and then only if the mapping supports the new slots and methods required by PEP234. I don't buy the argument that PEP234 buys us fast iteration for free. Programmers will still have to write the code to implement the new slots and methods. > Having to force every "iterable" object to support a modified > view of it via xitems() even doesn't look elegant to me. > It forces key/value pairs to go through tupleization only > for syntactical reasons. A weakness, not a strength. > Object orientation gets at its limits here. If access to keys > and values can be provided by a single implementation for > all affected objects without adding new methods, this suggests > to me that it is right to do so. 
Hey, tuples are created for *every* function call, even C calls -- you can't be serious about getting much of a gain here ;-) -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From tismer at tismer.com Tue Feb 6 14:43:31 2001 From: tismer at tismer.com (Christian Tismer) Date: Tue, 06 Feb 2001 14:43:31 +0100 Subject: [Python-Dev] Iterators (PEP 234) References: 
                              
                              <3A7FE60A.261CEE6A@lemburg.com> <3A7FF359.665C184B@tismer.com> <3A7FFB82.30BE0703@lemburg.com> Message-ID: <3A7FFF83.28FAB74F@tismer.com> "M.-A. Lemburg" wrote: > > Christian Tismer wrote: > > > > "M.-A. Lemburg" wrote: > > > > > > Tuples are well-known basic Python types. Why should > > > (key,value) be any harder to understand than key:value. > > > What would you tell a newbie that writes: > > > > > > for key:value in sequence: > > > .... > > > > > > where sequence is a list of tuples and finds that this doesn't > > > work ? > > > > Sorry about sneaking in. I do in fact think that the syntax > > addition of key:value is easier to understand. Beginners > > know the { key:value } syntax, so this is just natural. > > Givin him an error in your above example is a step to clarity, > > avoiding hard to find errors if somebody has a list of > > tuples and the above happens to work somehow, although he > > forgot to use .xitems(). > > The problem is that key:value in sequence has a meaning under PEP234: > key is the current index, value the tuple. Why is this a problem? It is just fine. > > > Besides, the items() method has been around for ages, so switching > > > from .items() to .xitems() in programs will be just as easy as > > > switching from range() to xrange(). > > > > It has been around for years, but key:value might be better. > > A little faster for sure since we don't build extra tuples. > > Small tuples are cheap and kept on the free list. I don't even > think that key:value can do without them. a) I don't see a point to tell me about Python's implementation but for hair-splitting. Speed is not the point, it will just be faster. b) I think it can. But the point is the cleaner syntax which unambigously gets you keys and values, whenether the thing on the right can be indexed. > Anyway, I've already said that I'm -0 on these thingies -- I would > be +1 if Ping were to make key:value full flavoured associations > (Jim Fulton has written a lot about these some years ago; I think > they originated from SmallTalk). I didn't read that yet. Would it contradict Ping's version or could it be extended laer? ... > > Having to force every "iterable" object to support a modified > > view of it via xitems() even doesn't look elegant to me. > > It forces key/value pairs to go through tupleization only > > for syntactical reasons. A weakness, not a strength. > > Object orientation gets at its limits here. If access to keys > > and values can be provided by a single implementation for > > all affected objects without adding new methods, this suggests > > to me that it is right to do so. > > Hey, tuples are created for *every* function call, even C calls > -- you can't be serious about getting much of a gain here ;-) You are reducing my arguments to speed always, not me. I don't care about a tuple. But I think we can save code. Smaller *and* not slower is what I like. no offence - ly y'rs - chris -- Christian Tismer :^) 
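The disputed list-of-tuples case, spelled out (k:v is proposed syntax; the behaviour shown for it is the draft PEP 234 reading quoted above, not anything that runs today):

    pairs = [("a", 1), ("b", 2)]        # a list of tuples, not a dictionary

    for key, value in pairs:            # plain tuple unpacking -- works today
        print key, value                # prints: a 1  then  b 2

    # for k:v in pairs:                 # under the draft, k would be the index and
    #     print k, v                    # v the whole tuple: 0 ('a', 1), 1 ('b', 2)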
                              
                              Mission Impossible 5oftware : Have a break! Take a ride on Python's Kaunstr. 26 : *Starship* http://starship.python.net 14163 Berlin : PGP key -> http://wwwkeys.pgp.net PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF where do you want to jump today? http://www.stackless.com From mal at lemburg.com Tue Feb 6 14:57:14 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Tue, 06 Feb 2001 14:57:14 +0100 Subject: [Python-Dev] Iterators (PEP 234) References: 
                              
                              <3A7FE60A.261CEE6A@lemburg.com> <3A7FF359.665C184B@tismer.com> <3A7FFB82.30BE0703@lemburg.com> <3A7FFF83.28FAB74F@tismer.com> Message-ID: <3A8002BA.5A0EEDE9@lemburg.com> Christian Tismer wrote: > > "M.-A. Lemburg" wrote: > > > > Besides, the items() method has been around for ages, so switching > > > > from .items() to .xitems() in programs will be just as easy as > > > > switching from range() to xrange(). > > > > > > It has been around for years, but key:value might be better. > > > A little faster for sure since we don't build extra tuples. > > > > Small tuples are cheap and kept on the free list. I don't even > > think that key:value can do without them. > > a) I don't see a point to tell me about Python's implementation > but for hair-splitting. I'm not telling you (I know you know ;), but others on this list which may not be aware of this fact. > Speed is not the point, it will just be > faster. b) I think it can. > But the point is the cleaner syntax which unambigously gets > you keys and values, whenether the thing on the right can be indexed. > > > Anyway, I've already said that I'm -0 on these thingies -- I would > > be +1 if Ping were to make key:value full flavoured associations > > (Jim Fulton has written a lot about these some years ago; I think > > they originated from SmallTalk). > > I didn't read that yet. Would it contradict Ping's version or > could it be extended laer? Ping's version would hide this detail under the cover: dictionaries would sort of implement the sequence protocol and then return associations. I don't think this is much of a problem though. > ... > > > Having to force every "iterable" object to support a modified > > > view of it via xitems() even doesn't look elegant to me. > > > It forces key/value pairs to go through tupleization only > > > for syntactical reasons. A weakness, not a strength. > > > Object orientation gets at its limits here. If access to keys > > > and values can be provided by a single implementation for > > > all affected objects without adding new methods, this suggests > > > to me that it is right to do so. > > > > Hey, tuples are created for *every* function call, even C calls > > -- you can't be serious about getting much of a gain here ;-) > > You are reducing my arguments to speed always, not me. > I don't care about a tuple. But I think we can save > code. Smaller *and* not slower is what I like. At the cost of: * special casing the for-loop implementation for sequences, mappings * adding half a dozen new slots and methods * moving all the complicated details into the for-loop implementation instead of keeping them in separate modules or object specific implementations Perhaps we are just discussing the wrong things: I believe that Ping's PEP could easily be implemented on top of my idea (or vice-versa depending on how you look at it) of how the iteration interface should look like. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From paulp at ActiveState.com Tue Feb 6 15:44:12 2001 From: paulp at ActiveState.com (Paul Prescod) Date: Tue, 06 Feb 2001 06:44:12 -0800 Subject: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> <3A7FD69C.1708339C@lemburg.com> Message-ID: <3A800DBC.2BE8ECEF@ActiveState.com> "M.-A. 
Lemburg" wrote: > > [pre-PEP] > > You have a lot of good points in there (also some inaccuracies) and > I agree that Python should move to using Unicode for text data > and arrays for binary data. That's my primary goal. If we can all agree that is the goal then we can start to design new features with that mind. I'm overjoyed to have you on board. I'm pretty sure Fredrick agrees with the goals (probably not every implementation detail). I'll send to i18n sig and see if I can get buy-in from Andy Robinson et. al. Then it's just Guido. > Some things you may be missing though is that Python already > has support for a few features you mention, e.g. codecs.open() > provide more or less what you have in mind with fopen() and > the compiler can already unify Unicode and string literals using > the -U command line option. The problem with unifying string literals without unifying string *types* is that many functions probably check for and type("") not type(u""). > What you don't talk about in the PEP is that Python's stdlib isn't > even Unicode aware yet, and whatever unification steps we take, > this project will have to preceed it. I'm not convinced that is true. We should be able to figure it out quickly though. > The problem with making the > stdlib Unicode aware is that of deciding which parts deal with > text data or binary data -- the code sometimes makes assumptions > about the nature of the data and at other times it simply doesn't > care. Can you give an example? If the new string type is 100% backwards compatible in every way with the old string type then the only code that should break is silly code that did stuff like: try: something = chr( somethingelse ) except ValueError: print "Unicode is evil!" Note that I expect types.StringType == types(chr(10000)) etc. > In this light I think you ought to focus Python 3k with your > PEP. This will also enable better merging techniques due to the > lifting of the type/class difference. Python3K is a beautiful dream but we have problems we need to solve today. We could start moving to a Unicode future in baby steps right now. Your "open" function could be moved into builtins as "fopen". Python's "binary" open function could be deprecated under its current name and perhaps renamed. The sooner we start the sooner we finish. You and /F laid some beautiful groundwork. Now we just need to keep up the momentum. I think we can do this without a big backwards compatibility earthquake. VB and TCL figured out how to do it... Paul Prescod From thomas at xs4all.net Tue Feb 6 15:57:12 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Tue, 6 Feb 2001 15:57:12 +0100 Subject: [Python-Dev] Type/class differences (Re: Sets: elt in dict, lst.include) In-Reply-To: <20010205110422.A5893@glacier.fnational.com>; from nas@arctrix.com on Mon, Feb 05, 2001 at 11:04:22AM -0800 References: <200102012245.LAA03402@s454.cosc.canterbury.ac.nz> <200102050447.XAA28915@cj20424-a.reston1.va.home.com> <20010205070222.A5287@glacier.fnational.com> <200102051837.NAA00833@cj20424-a.reston1.va.home.com> <20010205110422.A5893@glacier.fnational.com> Message-ID: <20010206155712.E9551@xs4all.nl> On Mon, Feb 05, 2001 at 11:04:22AM -0800, Neil Schemenauer wrote: > On Mon, Feb 05, 2001 at 01:37:39PM -0500, Guido van Rossum wrote: > > Now, can you do things like this: > [example cut] > No, it would have to be written like this: > >>> from types import * > >>> class MyInt(IntType): # add a method > def add1(self): return self.value+1 Why ? 
Couldn't IntType do with an __add__[*] method that does this ugly magic for you ? Same for __sub__, __int__ and so on. *] Yes, yes, I know, it's a type, not a class, but you know what I mean :) -- Thomas Wouters 
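Sketched out, what Thomas is asking for might look like this if the type/class split were lifted -- purely hypothetical, since IntType cannot be subclassed in Python 2.0:

    class MyInt(type(0)):               # i.e. subclass IntType directly
        def add1(self):
            return self + 1             # no self.value indirection needed

    # MyInt(41).add1()  ->  42, once/if types become subclassable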
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From mal at lemburg.com Tue Feb 6 16:09:46 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Tue, 06 Feb 2001 16:09:46 +0100 Subject: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> <3A7FD69C.1708339C@lemburg.com> <3A800DBC.2BE8ECEF@ActiveState.com> Message-ID: <3A8013BA.2FF93E8B@lemburg.com> Paul Prescod wrote: > > "M.-A. Lemburg" wrote: > > > > [pre-PEP] > > > > You have a lot of good points in there (also some inaccuracies) and > > I agree that Python should move to using Unicode for text data > > and arrays for binary data. > > That's my primary goal. If we can all agree that is the goal then we can > start to design new features with that mind. I'm overjoyed to have you > on board. I'm pretty sure Fredrick agrees with the goals (probably not > every implementation detail). I'll send to i18n sig and see if I can get > buy-in from Andy Robinson et. al. Then it's just Guido. Oh, I think that everybody agrees on moving to Unicode as basic text storage container. The question is how to get there ;-) Today we are facing a problem in that strings are also used as containers for binary data and no distinction is made between the two. We also have to watch out for external interfaces which still use 8-bit character data, so there's a lot ahead. > > Some things you may be missing though is that Python already > > has support for a few features you mention, e.g. codecs.open() > > provide more or less what you have in mind with fopen() and > > the compiler can already unify Unicode and string literals using > > the -U command line option. > > The problem with unifying string literals without unifying string > *types* is that many functions probably check for and type("") not > type(u""). Well, with -U on, Python will compile "" into u"", so you can already test Unicode compatibility today... last I tried, Python didn't even start up :-( > > What you don't talk about in the PEP is that Python's stdlib isn't > > even Unicode aware yet, and whatever unification steps we take, > > this project will have to preceed it. > > I'm not convinced that is true. We should be able to figure it out > quickly though. We can use that knowledge to base future design upon. The problem with many stdlib modules is that they don't make a difference between text and binary data (and often can't, e.g. take sockets), so we'll have to figure out a way to differentiate between the two. We'll also need an easy-to-use binary data type -- as you mention in the PEP, we could take the old string implementation as basis and then perhaps turn u"" into "" and use b"" to mean what "" does now (string object). > > The problem with making the > > stdlib Unicode aware is that of deciding which parts deal with > > text data or binary data -- the code sometimes makes assumptions > > about the nature of the data and at other times it simply doesn't > > care. > > Can you give an example? If the new string type is 100% backwards > compatible in every way with the old string type then the only code that > should break is silly code that did stuff like: > > try: > something = chr( somethingelse ) > except ValueError: > print "Unicode is evil!" > > Note that I expect types.StringType == types(chr(10000)) etc. Sure, but there are interfaces which don't differentiate between text and binary data, e.g. many IO-operations don't care about what exactly they are writing or reading. 
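As an aside, the codecs.open() interface mentioned above already works in 2.0; a minimal sketch (the file name is made up):

    import codecs

    f = codecs.open("notes.txt", "w", "utf-8")
    f.write(u"gr\u00fc\u00dfe")         # Unicode in, UTF-8 encoded bytes on disk
    f.close()

    f = codecs.open("notes.txt", "r", "utf-8")
    print repr(f.read())                # u'gr\xfc\xdfe' -- Unicode back out
    f.close()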
We'd probably define a new set of text data APIs (meaning methods) to make this difference clear and visible, e.g. .writetext() and .readtext(). > > In this light I think you ought to focus Python 3k with your > > PEP. This will also enable better merging techniques due to the > > lifting of the type/class difference. > > Python3K is a beautiful dream but we have problems we need to solve > today. We could start moving to a Unicode future in baby steps right > now. Your "open" function could be moved into builtins as "fopen". > Python's "binary" open function could be deprecated under its current > name and perhaps renamed. Hmm, I'd prefer to keep things separate for a while and then switch over to new APIs once we get used to them. > The sooner we start the sooner we finish. You and /F laid some beautiful > groundwork. Now we just need to keep up the momentum. I think we can do > this without a big backwards compatibility earthquake. VB and TCL > figured out how to do it... ... and we should probably try to learn from them. They have put a considerable amount of work into getting the low-level interfacing issues straight. It would be nice if we could avoid adding more conversion magic... -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From Barrett at stsci.edu Tue Feb 6 16:33:34 2001 From: Barrett at stsci.edu (Paul Barrett) Date: Tue, 6 Feb 2001 10:33:34 -0500 (EST) Subject: [Python-Dev] PEPS, version control, release intervals In-Reply-To: 
                              References: <20010205170106.D990@thrak.cnri.reston.va.us> 
                              Message-ID: <14976.5900.472169.467422@nem-srvr.stsci.edu> Tim Peters writes: > > About people not moving to 2.0, the single specific reason I hear most often > hinges on presumed lack of GPL compatibility. But then people worried about > that *have* a specific reason stopping them. For everyone else, I know > sysadmins who still refuse to move up from Perl 4. > > BTW, we recorded thousands of downloads of 2.0 betas at BeOpen.com, and > indeed more than 10,000 of the Windows installer alone. Then their download > stats broke. SF's have been broken for a long time. So while we have no > idea how many people are downloading now, the idea that people stayed away > from 2.0 in droves is wrong. And 2.0-specific examples are common on c.l.py > now from lots of people too. I agree. I think people are moving to 2.0, but not at the rate of keeping-up with the current release cycle. By the time 2/3 of them have installed 2.0, 2.1 will be released. So what's the point of installing 2.0, when a few weeks later, you have to install 2.1? The situation at our institution is a good indicator of this: 2.0 becomes the default this week. -- Dr. Paul Barrett Space Telescope Science Institute Phone: 410-338-4475 ESS/Science Software Group FAX: 410-338-4767 Baltimore, MD 21218 From paulp at ActiveState.com Tue Feb 6 16:54:49 2001 From: paulp at ActiveState.com (Paul Prescod) Date: Tue, 06 Feb 2001 07:54:49 -0800 Subject: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> <3A7FD69C.1708339C@lemburg.com> <3A800DBC.2BE8ECEF@ActiveState.com> <3A8013BA.2FF93E8B@lemburg.com> Message-ID: <3A801E49.F8DF70E2@ActiveState.com> "M.-A. Lemburg" wrote: > > ... > > Oh, I think that everybody agrees on moving to Unicode as > basic text storage container. The last time we went around there was an anti-Unicode faction who argued that adding Unicode support was fine but making it the default would inconvenience Japanese users. > ... > Well, with -U on, Python will compile "" into u"", so you can > already test Unicode compatibility today... last I tried, Python > didn't even start up :-( I'm going to say again that I don't see that as a test of Unicode-compatibility. It is a test of compatibility with our existing Unicode object. If we simply allowed string objects to support higher character numbers I *cannot see* how that could break existing code. > ... > We can use that knowledge to base future design upon. The problem > with many stdlib modules is that they don't make a difference > between text and binary data (and often can't, e.g. take sockets), > so we'll have to figure out a way to differentiate between the > two. We'll also need an easy-to-use binary data type -- as you > mention in the PEP, we could take the old string implementation > as basis and then perhaps turn u"" into "" and use b"" to mean > what "" does now (string object). I agree that we need all of this but I strongly disagree that there is any dependency relationship between improving the Unicode-awareness of I/O routines (sockets and files) and allowing string objects to support higher character numbers. I claim that allowing higher character numbers in strings will not break socket objects. It might simply be the case that for a while socket objects never create these higher charcters. Similarly, we could improve socket objects so that they have different readtext/readbinary and writetext/writebinary without unifying the string objects. 
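Just to show how little machinery that would take, here is a purely hypothetical sketch (the class and method names are invented, nothing like this exists today) that wraps an existing file or socket.makefile() object instead of touching the string types:

    class TextAwareStream:
        """Bolt text methods onto any object with read()/write()."""

        def __init__(self, stream, encoding="ascii"):
            self.stream = stream
            self.encoding = encoding

        # binary side: 8-bit strings pass straight through
        def readbinary(self, size=-1):
            return self.stream.read(size)

        def writebinary(self, data):
            self.stream.write(data)

        # text side: always accept and return Unicode objects
        def readtext(self, size=-1):
            return unicode(self.stream.read(size), self.encoding)

        def writetext(self, text):
            self.stream.write(text.encode(self.encoding))

A real version would have to buffer partial multi-byte sequences on the text side, but the shape of the interface is the point.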
There are lots of small changes we can make without breaking anything. One I would like to see right now is a unification of chr() and unichr(). We are just making life harder for ourselves by walking further and further down one path when "everyone agrees" that we are eventually going to end up on another path. > ... It would be nice if we could avoid > adding more conversion magic... We already have more "magic" in our conversions than we need. I don't think I'm proposing any new conversions. Paul Prescod From ping at lfw.org Tue Feb 6 17:59:04 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Tue, 6 Feb 2001 08:59:04 -0800 (PST) Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: 
                              Message-ID: 
                              On Tue, 6 Feb 2001, Greg Wilson wrote: > I'm basing my claim on the kind > of errors students in my course make. Even after being shown half-a-dozen > examples of Python for loops, many of them write: > > for i in someSequence: > print someSequence[i] > > in their first exercise. Amazing (to me). Thank you for this data point; it's news to me. I don't know what that means we should do, though. We can't break the way existing loops work. What would make for-loops easier to present, given this experience? -- ?!ng "There's no point in being grown up if you can't be childish sometimes." -- Dr. Who From gvwilson at ca.baltimore.com Tue Feb 6 18:28:59 2001 From: gvwilson at ca.baltimore.com (Greg Wilson) Date: Tue, 6 Feb 2001 12:28:59 -0500 Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: 
                              Message-ID: <001101c09062$4af68ac0$770a0a0a@nevex.com> > On Tue, 6 Feb 2001, Greg Wilson wrote: > > Even after being shown half-a-dozen > > examples of Python for loops, many of them write: > > > > for i in someSequence: > > print someSequence[i] > > > > in their first exercise. > Ka-Ping Yee: > Amazing (to me). Thank you for this data point; it's news to me. Greg Wilson: To be fair, these are all people with some previous programming experience --- I suspect (no proof) that Fortran/C/Java have trained them to think that iteration is over index space, rather than value space. It'd be interesting to check the intuitions of students who'd been raised on the C++ STL's iterators, but I don't think that'll ever be possible --- C++ seems to be dropping out of the undergrad curriculum in favor of Java. By the way, I do *not* think this is a knock-down argument against your proposal --- it's no more of a wart than needing the trailing comma in singleton tuples like "(3,)". However: 1. Special cases make teaching harder (he said, repeating the obvious yet again). 2. I expect that if it was added, the "traditional" for-loop syntax would eventually fall into disfavor, since people who want to write really general functions over collections would have to use the new syntax. Thanks, Greg p.s. in case no-one has said it, or I've missed it, thanks very much for putting the PEP together so quickly, and for bringing so many of the issues into focus. From fredrik at effbot.org Tue Feb 6 18:41:55 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Tue, 6 Feb 2001 18:41:55 +0100 Subject: [Python-Dev] Fw: list.index(..) -> TypeError bug or feature? Message-ID: <01c601c09065$260bad50$e46940d5@hagrid> (from comp.lang.python) can this be fixed? should this be fixed? (please?)  ----- Original Message ----- From: "Pearu Peterson" 
                              Newsgroups: comp.lang.python Sent: Tuesday, February 06, 2001 2:42 PM Subject: list.index(..) -> TypeError bug or feature? > > In Python 2.1a2 I get TypeError exception from list index() method even if > the list contains given object: > > >>> from gmpy import mpz > >>> a = [mpz(1),[]] > >>> a.index([]) > Traceback (most recent call last): > File "
                              ", line 1, in ? > TypeError: coercion to gmpy.mpz type failed > > while in Python 2.0b2 it works: > > >>> a = [mpz(1),[]] > >>> a.index([]) > 1 > > Is this Python 2.1a2 bug or gmpy bug? Or my bug and Python 2.1 feature? > > Thanks, > Pearu From mal at lemburg.com Tue Feb 6 19:01:58 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Tue, 06 Feb 2001 19:01:58 +0100 Subject: [Python-Dev] Fw: list.index(..) -> TypeError bug or feature? References: <01c601c09065$260bad50$e46940d5@hagrid> Message-ID: <3A803C16.7121C9B8@lemburg.com> Fredrik Lundh wrote: > > (from comp.lang.python) > > can this be fixed? should this be fixed? (please?) Depends on whether gmpy (what is this, BTW) uses the old coercion mechanism correctly or not which is hard to say from here ;) Also, was gmpy recompiled for 2.1a2 and which part raised the exception (Python or gmpy) ? In any case, I'd say that .index() should not raise TypeErrors in case coercion fails. >  > > ----- Original Message ----- > From: "Pearu Peterson" 
                              > Newsgroups: comp.lang.python > Sent: Tuesday, February 06, 2001 2:42 PM > Subject: list.index(..) -> TypeError bug or feature? > > > > > In Python 2.1a2 I get TypeError exception from list index() method even if > > the list contains given object: > > > > >>> from gmpy import mpz > > >>> a = [mpz(1),[]] > > >>> a.index([]) > > Traceback (most recent call last): > > File "
                              ", line 1, in ? > > TypeError: coercion to gmpy.mpz type failed > > > > while in Python 2.0b2 it works: > > > > >>> a = [mpz(1),[]] > > >>> a.index([]) > > 1 > > > > Is this Python 2.1a2 bug or gmpy bug? Or my bug and Python 2.1 feature? > > > > Thanks, > > Pearu > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From nas at arctrix.com Tue Feb 6 19:06:09 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Tue, 6 Feb 2001 10:06:09 -0800 Subject: [Python-Dev] Type/class differences (Re: Sets: elt in dict, lst.include) In-Reply-To: <20010206155712.E9551@xs4all.nl>; from thomas@xs4all.net on Tue, Feb 06, 2001 at 03:57:12PM +0100 References: <200102012245.LAA03402@s454.cosc.canterbury.ac.nz> <200102050447.XAA28915@cj20424-a.reston1.va.home.com> <20010205070222.A5287@glacier.fnational.com> <200102051837.NAA00833@cj20424-a.reston1.va.home.com> <20010205110422.A5893@glacier.fnational.com> <20010206155712.E9551@xs4all.nl> Message-ID: <20010206100609.B7790@glacier.fnational.com> On Tue, Feb 06, 2001 at 03:57:12PM +0100, Thomas Wouters wrote: > Why ? Couldn't IntType do with an __add__[*] method that does this ugly magic > for you ? Same for __sub__, __int__ and so on. You're right. I'm pretty sure my modified interpreter would handle "return self+1" just fine (I can't test it right now). If you wanted to override the __add__ method you would have to write "return IntType.__add__(self, 1)". Neil From pearu at cens.ioc.ee Tue Feb 6 19:52:38 2001 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 6 Feb 2001 20:52:38 +0200 (EET) Subject: [Python-Dev] Fw: list.index(..) -> TypeError bug or feature? In-Reply-To: <3A803C16.7121C9B8@lemburg.com> Message-ID: 
                              On Tue, 6 Feb 2001, M.-A. Lemburg wrote: > Fredrik Lundh wrote: > > > > (from comp.lang.python) > > > > can this be fixed? should this be fixed? (please?) > > Depends on whether gmpy (what is this, BTW) uses the old coercion > mechanism correctly or not which is hard to say from here ;) About gmpy, see http://gmpy.sourceforge.net/ > Also, was gmpy recompiled for 2.1a2 and which part raised the > exception (Python or gmpy) ? gmpy was recompiled for 2.1a2, though the same gmpy worked fine with 2.0b2. The exception was raised in gmpy part. > In any case, I'd say that .index() should not raise TypeErrors > in case coercion fails. I fixed this in gmpy source --- there the Pymp*_coerce functions raised an exception instead of returning `1' when coerce failed. So, this was gmpy bug, Python 2.1a2 just revealed it. Regards, Pearu From esr at snark.thyrsus.com Tue Feb 6 20:06:00 2001 From: esr at snark.thyrsus.com (Eric S. Raymond) Date: Tue, 6 Feb 2001 14:06:00 -0500 Subject: [Python-Dev] fp vs. fd Message-ID: <200102061906.f16J60x11156@snark.thyrsus.com> There are a number of places in the Python library that require a numeric file descriptor, rather than a file object. This complicates code slightly and (IMO) breaches the wrapper around the file-object abstraction (which Guido says is only supposed to depend on stdio-level stuff). Are there design reasons for this, or is it historical accident? If the latter, I'll go through and fix these to accept either an fd or an fp. And fix the docs, too. -- 
                              Eric S. Raymond Non-cooperation with evil is as much a duty as cooperation with good. -- Mohandas Gandhi From ping at lfw.org Tue Feb 6 20:01:03 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Tue, 6 Feb 2001 11:01:03 -0800 (PST) Subject: [Python-Dev] fp vs. fd In-Reply-To: <200102061906.f16J60x11156@snark.thyrsus.com> Message-ID: 
                              On Tue, 6 Feb 2001, Eric S. Raymond wrote: > There are a number of places in the Python library that require a > numeric file descriptor, rather than a file object. I'm curious... where? -- ?!ng From ping at lfw.org Tue Feb 6 20:00:02 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Tue, 6 Feb 2001 11:00:02 -0800 (PST) Subject: [Python-Dev] Coercion and comparisons In-Reply-To: <01c601c09065$260bad50$e46940d5@hagrid> Message-ID: 
                              On Tue, 6 Feb 2001, Fredrik Lundh wrote: > > can this be fixed? should this be fixed? (please?) I'm not sure. The gmpy example: > > >>> a = [mpz(1),[]] > > >>> a.index([]) > > Traceback (most recent call last): > > File "
                              ", line 1, in ? > > TypeError: coercion to gmpy.mpz type failed seems to be just one case of coercion failure. I no longer have Python 2.0 in a state on my machine where i can compile gmpy to test with it, but you can perform the same exercise with the mpz module in 2.1a2: >>> import mpz >>> [mpz.mpz(1), []].index([]) Traceback (most recent call last): File "
                              ", line 1, in ? TypeError: number coercion (to mpzobject) failed The following test shows that the issue is present for Python classes too: >>> class Foo: ... def __coerce__(self, other): ... raise TypeError, 'coercion failed' ... >>> f = Foo() >>> s = [3, f, 5] >>> s.index(3) 0 >>> s.index(5) Traceback (most recent call last): File "
                              ", line 1, in ? File "
                              ", line 3, in __coerce__ TypeError: coercion failed I get the above behaviour in 1.5.2, 2.0, and 2.1a2. So now we have to ask whether index() should hide these errors. It seems to me that conventional Python philosophy would argue to let the errors flaunt themselves as early as possible, but i agree with you that the failure to find [] in [mpz(1), []] is pretty jarring. ?? Hmm, i think perhaps the right answer is to not coerce before ==, even if we automatically coerce before the other comparison operators. But, this is only good as a future possibility. It can't resolve the issue for existing extension modules because their old-style comparison functions appear to expect two arguments of the same type: (in mpzmodule.c) static int mpz_compare(mpzobject *a, mpzobject *b) { int cmpres; /* guido sez it's better to return -1, 0 or 1 */ return (cmpres = mpz_cmp( &a->mpz, &b->mpz )) == 0 ? 0 : cmpres > 0 ? 1 : -1; } /* mpz_compare() */ ...so the error occurs before tp_compare has a chance to say "okay, it's not equal". We have to ask the authors of extension modules to implement == separately from the other comparisons. Note, by the way, that this re-raises the matter of the three kinds of equality that i remember mentioning back when we were discussing rich comparisons. I'll restate them here for you to think about. The three kinds of equality (in order by strength) are: 1. Identity. Python: 'x is y' E: 'x == y' Python: 'x is not y' E: 'x != y' Meaning: "x and y are the same object. Substituting x for y in any computation makes no difference to the result." 2. Value. Python: 'x == y' E: 'x.equals(y)' Python: 'x != y' E: '!x.equals(y)' Meaning: "x and y represent the same value. Substituting x for y in any operation that doesn't mutate x or y yields results that are ==." 3. Magnitude. Python: missing E: 'x <=> y' Python: missing E: 'x <> y' Meaning: "x and y have the same size. Another way to say this is that both x <= y and x >= y are true." Same identity implies same value; same value implies same magnitude. Category Python operators E operators identity is, is not ==, != value ==, !=, <> x.equals(y), !x.equals(y) magnitude <, <=, >, >= <, <=, >, >=, <>, <=> Each type of equality has a specific and useful meaning. Most languages, including Python, acknowledge the first two. But you can see how the coercion problem raised above is a consequence of the fact that the third category is incomplete. I like Python's spelling better than E's, though it's a small wart that there is no easy way to say or implement 'same magnitude'. (You can get around it by saying 'x <= y <= x', i suppose, but there's no real interface on the C side.) -- ?!ng "There's no point in being grown up if you can't be childish sometimes." -- Dr. Who From esr at thyrsus.com Tue Feb 6 20:14:46 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Tue, 6 Feb 2001 14:14:46 -0500 Subject: [Python-Dev] fp vs. fd In-Reply-To: 
                              ; from ping@lfw.org on Tue, Feb 06, 2001 at 11:01:03AM -0800 References: <200102061906.f16J60x11156@snark.thyrsus.com> 
                              Message-ID: <20010206141446.A11212@thyrsus.com> Ka-Ping Yee 
                              : > On Tue, 6 Feb 2001, Eric S. Raymond wrote: > > There are a number of places in the Python library that require a > > numeric file descriptor, rather than a file object. > > I'm curious... where? See the fctl() module. I thought this was also true of select() and poll(), but I see the docs on this are different than the last time I looked and conclude that either docs or code or both have changed. -- 
                              Eric S. Raymond No one is bound to obey an unconstitutional law and no courts are bound to enforce it. -- 16 Am. Jur. Sec. 177 late 2d, Sec 256 From fredrik at effbot.org Tue Feb 6 20:24:46 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Tue, 6 Feb 2001 20:24:46 +0100 Subject: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> <3A7FD69C.1708339C@lemburg.com> <3A800DBC.2BE8ECEF@ActiveState.com> Message-ID: <023001c09072$77da2370$e46940d5@hagrid> Paul Prescod wrote: > I'm pretty sure Fredrick agrees with the goals (probably not every > implementation detail). haven't had time to read the pep-PEP yet, but I'm pretty sure I do. more later (when I've read it). Cheers /F From ping at lfw.org Tue Feb 6 20:24:25 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Tue, 6 Feb 2001 11:24:25 -0800 (PST) Subject: [Python-Dev] Coercion and comparisons In-Reply-To: 
                              Message-ID: 
                              On Tue, 6 Feb 2001, Ka-Ping Yee wrote: > Category Python operators E operators > > identity is, is not ==, != > value ==, !=, <> x.equals(y), !x.equals(y) > magnitude <, <=, >, >= <, <=, >, >=, <>, <=> > > Each type of equality has a specific and useful meaning. Most > languages, including Python, acknowledge the first two. But you > can see how the coercion problem raised above is a consequence > of the fact that the third category is incomplete. I didn't state that last sentence very well, and the table's a bit inaccurate. Rather, it would be better to say that '==' and '!=' end up having to do double duty (sometimes for value equality, sometimes for magnitude equality) -- when really '==' doesn't belong with ordering operators like '<'. It's quite a separate concept. -- ?!ng "There's no point in being grown up if you can't be childish sometimes." -- Dr. Who From thomas at xs4all.net Tue Feb 6 20:52:53 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Tue, 6 Feb 2001 20:52:53 +0100 Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: 
                              ; from ping@lfw.org on Tue, Feb 06, 2001 at 08:59:04AM -0800 References: 
                              Message-ID: <20010206205253.F9551@xs4all.nl> On Tue, Feb 06, 2001 at 08:59:04AM -0800, Ka-Ping Yee wrote: > What would make for-loops easier to present, given this experience? A simpler version of for x in range(len(sequence)): obviously :) (a.k.a. 'indexing for') One that gets taught *before* 'if x in sequence', preferably. Syntax that stands out against 'x in sequence', but makes 'x in sequence' seem very logical if encountered after the first syntax. Something like for x over sequence: or for x in 0 .. sequence: (as in) for x in 1 .. 10: or for each number x in sequence: or something or other. My gut feeling says there is a sensible and clear syntax out there, but I haven't figured it out yet :) -- Thomas Wouters 
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From gvwilson at ca.baltimore.com Tue Feb 6 21:18:34 2001 From: gvwilson at ca.baltimore.com (Greg Wilson) Date: Tue, 6 Feb 2001 15:18:34 -0500 Subject: [Python-Dev] re: for in dict / range literals In-Reply-To: <20010206205253.F9551@xs4all.nl> Message-ID: <000001c09079$fb86c550$770a0a0a@nevex.com> > > Ka-Ping Yee asked: > > What would make for-loops easier to present, given this experience? > Thomas Wouters replied: > A simpler version of > > for x in range(len(sequence)): > > obviously :) (a.k.a. 'indexing for') One that gets taught *before* 'if x in > sequence', preferably. Syntax that stands out against 'x in sequence', but > makes 'x in sequence' seem very logical if encountered after the first > syntax. Something like > > for x over sequence: > for x in 0 .. sequence: > for each number x in sequence: Greg Wilson observes: Maybe we're lucky that range literals didn't make it into the language after all (and I say this as someone who asked for them). If we were using range literals to iterate over sequences by index: for x in [0:len(seq)]: it'd be very hard to unify index-based iteration over all collection types ('cuz there's no way to write a "range literal" for the keys in a dict). I don't like "for x over sequence" --- trying to teach students that "in" means "the elements of the sequence", but "over" means "the indices of the sequence" will be hard. Something like "for x indexing sequence" would work (very hard to mistake its meaning), but what would you do for (index,value) iteration? But hey, at least we're better off than Ruby, where ".." and "..." (double or triple ellipsis) mean "up to but not including", and "up to and including" respectively. Or maybe it's the other way around :-). Greg From akuchlin at cnri.reston.va.us Tue Feb 6 21:31:29 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Tue, 6 Feb 2001 15:31:29 -0500 Subject: [Python-Dev] fp vs. fd In-Reply-To: <20010206141446.A11212@thyrsus.com>; from esr@thyrsus.com on Tue, Feb 06, 2001 at 02:14:46PM -0500 References: <200102061906.f16J60x11156@snark.thyrsus.com> 
                              <20010206141446.A11212@thyrsus.com> Message-ID: <20010206153129.B1154@thrak.cnri.reston.va.us> On Tue, Feb 06, 2001 at 02:14:46PM -0500, Eric S. Raymond wrote: >See the fctl() module. I thought this was also true of select() and >poll(), but I see the docs on this are different than the last time I >looked and conclude that either docs or code or both have changed. I think poll() and select() are happy with either an integer or an object that has a .fileno() method returning an integer, thanks to the PyObject_AsFileDescriptor() function in the C API that I added a while ago. Probably the fcntl module should also be changed to use PyObject_AsFileDescriptor() instead of requiring only an int. File a bug on SourceForge so this doesn't get forgotten before 2.1final; this is a minor tidying that's worth doing. --amk From skip at mojam.com Tue Feb 6 21:39:15 2001 From: skip at mojam.com (Skip Montanaro) Date: Tue, 6 Feb 2001 14:39:15 -0600 (CST) Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: <20010206205253.F9551@xs4all.nl> References: 
                              <20010206205253.F9551@xs4all.nl> Message-ID: <14976.24819.658169.82488@beluga.mojam.com> Thomas> for x in 0 .. sequence: You meant for x in 0 .. len(sequence): right? Skip From martin at loewis.home.cs.tu-berlin.de Tue Feb 6 22:00:59 2001 From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis) Date: Tue, 6 Feb 2001 22:00:59 +0100 Subject: [I18n-sig] Re: [Python-Dev] Pre-PEP: Python Character Model In-Reply-To: <3A801E49.F8DF70E2@ActiveState.com> (message from Paul Prescod on Tue, 06 Feb 2001 07:54:49 -0800) References: <3A7F9084.509510B8@ActiveState.com> <3A7FD69C.1708339C@lemburg.com> <3A800DBC.2BE8ECEF@ActiveState.com> <3A8013BA.2FF93E8B@lemburg.com> <3A801E49.F8DF70E2@ActiveState.com> Message-ID: <200102062100.f16L0xm01175@mira.informatik.hu-berlin.de> > If we simply allowed string objects to support higher character > numbers I *cannot see* how that could break existing code. To take a specific example: What would you change about imp and py_compile.py? What is the type of imp.get_magic()? If character string, what about this fragment? import imp MAGIC = imp.get_magic() def wr_long(f, x): """Internal; write a 32-bit int to a file in little-endian order.""" f.write(chr( x & 0xff)) f.write(chr((x >> 8) & 0xff)) f.write(chr((x >> 16) & 0xff)) f.write(chr((x >> 24) & 0xff)) ... fc = open(cfile, 'wb') fc.write('\0\0\0\0') wr_long(fc, timestamp) fc.write(MAGIC) Would that continue to write the same file that the current version writes? > We are just making life harder for ourselves by walking further and > further down one path when "everyone agrees" that we are eventually > going to end up on another path. I think a problem of discussing on a theoretical level is that the impact of changes is not clear. You seem to claim that you want changes that have zero impact on existing programs. Can you provide a patch implementing these changes, so that others can experiment and find out whether their application would break? Regards, Martin From thomas at xs4all.net Tue Feb 6 22:28:10 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Tue, 6 Feb 2001 22:28:10 +0100 Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: <14976.24819.658169.82488@beluga.mojam.com>; from skip@mojam.com on Tue, Feb 06, 2001 at 02:39:15PM -0600 References: 
                              <20010206205253.F9551@xs4all.nl> <14976.24819.658169.82488@beluga.mojam.com> Message-ID: <20010206222810.N9474@xs4all.nl> On Tue, Feb 06, 2001 at 02:39:15PM -0600, Skip Montanaro wrote: > Thomas> for x in 0 .. sequence: > You meant > for x in 0 .. len(sequence): > right? Yes and no. Yes, I know '0 .. sequence' can't really work. But that doesn't mean I don't think the one without len() might be pref'rble over the other one :) They were all just examples, anyway. All this talk about syntax and what is best makes me feel like Fredrik: old and grumpy 
                              . Time-for-my-medication-;)-ly y'rs, -- Thomas Wouters 
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From martin at loewis.home.cs.tu-berlin.de Tue Feb 6 22:50:39 2001 From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis) Date: Tue, 6 Feb 2001 22:50:39 +0100 Subject: [Python-Dev] PEPS, version control, release intervals Message-ID: <200102062150.f16LodG01662@mira.informatik.hu-berlin.de> > A more critical issue might be why people haven't adopted 2.0 yet; > there seems little reason is there to continue using 1.5.2, yet I > still see questions on the XML-SIG, for example, from people who > haven't upgraded. Is it that Zope doesn't support it? Or that Red > Hat and Debian don't include it? Availability of Linux binaries is certainly an issue. On xml-sig, one Linux distributor (I forgot whether SuSE or Redhat) mentioned that they won't include 2.0 in their current major release series (7.x for both). Furthermore, the available 2.0 binaries won't work for either Redhat 7.0 nor SuSE 7.0; I think collecting binaries as we did for earlier releases is an important activity that was forgotten during 2.0. In addition, many packages are still not available for 2.0. Zope is only one of them; gtk, Qt, etc packages are still struggling with Unicode support. omniORBpy has #include 
                              in their sources, ILU does not compile on 2.0 (due to wrong tests involving the PY_MAJOR/MINOR roll-over), Fnorb falls into the select.bind parameter change pitfall. This list probably could be continued - I'm sure many of the maintainers of these packages would appreciate a helping hand from some Python Guru. Regards, Martin From akuchlin at cnri.reston.va.us Wed Feb 7 00:07:23 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Tue, 6 Feb 2001 18:07:23 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules socketmodule.c,1.135,1.136 In-Reply-To: 
                              ; from akuchling@users.sourceforge.net on Tue, Feb 06, 2001 at 02:58:07PM -0800 References: 
                              Message-ID: <20010206180723.B1269@thrak.cnri.reston.va.us> On Tue, Feb 06, 2001 at 02:58:07PM -0800, A.M. Kuchling wrote: >! if (!PyArg_ParseTuple(args, "s|i:write", &data, &len)) >! if (!PyArg_ParseTuple(args, "s#|i:write", &data, &len)) Hm... actually, this patch isn't correct after all. The |i meant you could specify an optional integer to write out only a partial chunk of the string; why not just slice it? Since the SSL code isn't documented, I'm tempted to just rip out the |i. --amk From thomas at xs4all.net Wed Feb 7 00:09:55 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Wed, 7 Feb 2001 00:09:55 +0100 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules socketmodule.c,1.135,1.136 In-Reply-To: 
                              ; from akuchling@users.sourceforge.net on Tue, Feb 06, 2001 at 02:58:07PM -0800 References: 
                              Message-ID: <20010207000955.G9551@xs4all.nl> On Tue, Feb 06, 2001 at 02:58:07PM -0800, A.M. Kuchling wrote: > Update of /cvsroot/python/python/dist/src/Modules > In directory usw-pr-cvs1:/tmp/cvs-serv21837 > Modified Files: > socketmodule.c > Log Message: > Patch #103636: Allow writing strings containing null bytes to an SSL socket > Index: socketmodule.c > =================================================================== > RCS file: /cvsroot/python/python/dist/src/Modules/socketmodule.c,v > retrieving revision 1.135 > retrieving revision 1.136 > diff -C2 -r1.135 -r1.136 > *** socketmodule.c 2001/02/02 19:55:17 1.135 > --- socketmodule.c 2001/02/06 22:58:05 1.136 > *************** > *** 2219,2223 **** > size_t len = 0; > > ! if (!PyArg_ParseTuple(args, "s|i:write", &data, &len)) > return NULL; > > --- 2219,2223 ---- > size_t len = 0; > > ! if (!PyArg_ParseTuple(args, "s#|i:write", &data, &len)) > return NULL; This doesn't seem right. The new function needs another 'length' argument (an int), and the smallest of the two should be used. -- Thomas Wouters 
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From paulp at ActiveState.com Wed Feb 7 00:21:38 2001 From: paulp at ActiveState.com (Paul Prescod) Date: Tue, 06 Feb 2001 15:21:38 -0800 Subject: [I18n-sig] Re: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> <3A7FD69C.1708339C@lemburg.com> <3A800DBC.2BE8ECEF@ActiveState.com> <3A8013BA.2FF93E8B@lemburg.com> <3A801E49.F8DF70E2@ActiveState.com> <200102062100.f16L0xm01175@mira.informatik.hu-berlin.de> Message-ID: <3A808702.5FF36669@ActiveState.com> Let me say one more thing. Unicode and string types are *already widely interoperable*. You run into problems: a) when you try to convert a character greater than 128. In my opinion this is just a poor design decision that can be easily reversed b) some code does an explicit check for types.StringType which of course is not compatible with types.UnicodeType. This can only be fixed by merging the features of types.StringType and types.UnicodeType so that they can be the same object. This is not as trivial as the other fix in terms of lines of code that must change but conceptually it doesn't seem complicated at all. I think a lot of Unicode interoperability problems would just go away if "a" was fixed... Paul Prescod From martin at loewis.home.cs.tu-berlin.de Wed Feb 7 01:00:11 2001 From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis) Date: Wed, 7 Feb 2001 01:00:11 +0100 Subject: [I18n-sig] Re: [Python-Dev] Pre-PEP: Python Character Model In-Reply-To: <3A808702.5FF36669@ActiveState.com> (message from Paul Prescod on Tue, 06 Feb 2001 15:21:38 -0800) References: <3A7F9084.509510B8@ActiveState.com> <3A7FD69C.1708339C@lemburg.com> <3A800DBC.2BE8ECEF@ActiveState.com> <3A8013BA.2FF93E8B@lemburg.com> <3A801E49.F8DF70E2@ActiveState.com> <200102062100.f16L0xm01175@mira.informatik.hu-berlin.de> <3A808702.5FF36669@ActiveState.com> Message-ID: <200102070000.f1700BV02437@mira.informatik.hu-berlin.de> > a) when you try to convert a character greater than 128. In my opinion > this is just a poor design decision that can be easily reversed Technically, you can easily convert expand it to 256; not that easily beyond. Then, people who put KOI8-R into their Python source code will complain why the strings come out incorrectly, even though they set their language to Russion, and even though it worked that way in earlier Python versions. Or, if they then tag their sources as KOI8-R, writing strings to a "plain" file will fail, as they have characters > 256 in the string. > I think a lot of Unicode interoperability problems would just go > away if "a" was fixed... No, that would be just open a new can of worms. Again, provide a specific patch, and I can tell you specific problems. Regards, Martin From trentm at ActiveState.com Wed Feb 7 02:32:34 2001 From: trentm at ActiveState.com (Trent Mick) Date: Tue, 6 Feb 2001 17:32:34 -0800 Subject: [Python-Dev] Quick Unix work needed In-Reply-To: <3A7AA340.B3AFF106@lemburg.com>; from mal@lemburg.com on Fri, Feb 02, 2001 at 01:08:32PM +0100 References: 
                              <3A7AA340.B3AFF106@lemburg.com> Message-ID: <20010206173234.X25935@ActiveState.com> On Fri, Feb 02, 2001 at 01:08:32PM +0100, M . -A . Lemburg wrote: > Tim Peters wrote: > > > > Trent Mick's C API testing framework has been checked in, along with > > everything needed to get it working on Windows: > > > > http://sourceforge.net/patch/?func=detailpatch&patch_id=101162& > > group_id=5470 > > > > It still needs someone to add it to the Unixish builds. > > Done. Thanks, Marc-Andre! > > > You'll know that it worked if the new std test test_capi.py succeeds. > > The test passes just fine... nothing much there which could fail ;-) Granted there aren't any really useful tests in there yet but that test_config test would have helped me when I started the Win64 port to point out that config.h had to be changed to update SIZEOF_VOID_P. Or something like that. I have some other tests in my source tree that I should be able to add sometime. We can now test some of the marshalling API (which GregS and Tim and I discussed a lot a few months back but did not completely clean up yet). Trent -- Trent Mick TrentM at ActiveState.com From paulp at ActiveState.com Wed Feb 7 03:54:08 2001 From: paulp at ActiveState.com (Paul Prescod) Date: Tue, 06 Feb 2001 18:54:08 -0800 Subject: [Python-Dev] unichr Message-ID: <3A80B8D0.381BD92C@ActiveState.com> Does anyone have an example of real code that would break if unichr and chr were merged? chr would return a regular string if possible and a Unicode string otherwise. When the two string types are merged, there would be no need to deprecate unichr as redundant. Paul Prescod From fredrik at pythonware.com Wed Feb 7 11:00:03 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Wed, 7 Feb 2001 11:00:03 +0100 Subject: [I18n-sig] Re: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> <3A7FD69C.1708339C@lemburg.com> <3A800DBC.2BE8ECEF@ActiveState.com> <3A8013BA.2FF93E8B@lemburg.com> <3A801E49.F8DF70E2@ActiveState.com> <200102062100.f16L0xm01175@mira.informatik.hu-berlin.de> Message-ID: <00cf01c090ec$c4eb7220$0900a8c0@SPIFF> martin wrote: > To take a specific example: What would you change about imp and > py_compile.py? What is the type of imp.get_magic()? If character > string, what about this fragment? > > import imp > MAGIC = imp.get_magic() > > def wr_long(f, x): > """Internal; write a 32-bit int to a file in little-endian order.""" > f.write(chr( x & 0xff)) > f.write(chr((x >> 8) & 0xff)) > f.write(chr((x >> 16) & 0xff)) > f.write(chr((x >> 24) & 0xff)) > ... > fc = open(cfile, 'wb') > fc.write('\0\0\0\0') > wr_long(fc, timestamp) > fc.write(MAGIC) > > Would that continue to write the same file that the current version > writes? yes (file opened in binary mode, no encoding, no code points above 255) Cheers /F From nhodgson at bigpond.net.au Wed Feb 7 12:44:36 2001 From: nhodgson at bigpond.net.au (Neil Hodgson) Date: Wed, 7 Feb 2001 22:44:36 +1100 Subject: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> Message-ID: <084e01c090fb$58aa9820$8119fea9@neil> [Paul Prescod discusses Unicode enhancements to Python] Another approach being pursued, mostly in Japan, is Multilingualization (M17N), http://www.m17n.org/ This is supported by the appropriate government department (MITI) and is being worked on in some open source projects, most notably Ruby. For some messages from Yukihiro Matsumoto search deja for M17N in comp.lang.ruby. 
Matz: "We don't believe there can be any single characer-encoding that encompasses all the world's languages. We want to handle multiple encodings at the same time (if you want to)." The approach taken in the next version of Ruby is for all string and regex objects to have an encoding attribute and for there to be infrastructure to handle operations that combine encodings. One of the things that is needed in a project that tries to fulfill the needs of large character set users is to have some of those users involved in the process. When I first saw proposals to use Unicode in products at Reuters back in 1994, it looked to me (and the proposal originators) as if it could do everything anyone ever needed. It was only after strenuous and persistant argument from the Japanese and Hong Kong offices that it became apparent that Unicode just wasn't enough. A partial solution then was to include language IDs encoded in the Private Use Area. This was still being discussed when I left but while it went some way to satisfying needs, there was still some unhappiness. If Python could cooperate with Ruby here, then not only could code be shared but Python would gain access to developers with large character set /needs/ and experience. Neil From fredrik at pythonware.com Wed Feb 7 12:58:42 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Wed, 7 Feb 2001 12:58:42 +0100 Subject: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> <084e01c090fb$58aa9820$8119fea9@neil> Message-ID: <01a401c090fd$5165b700$0900a8c0@SPIFF> Neil Hodgson wrote: > Matz: "We don't believe there can be any single characer-encoding that > encompasses all the world's languages. We want to handle multiple encodings > at the same time (if you want to)." neither does the unicode designers, of course: the point is that unicode only deals with glyphs, not languages. most existing japanese encodings also include language info, and if you don't understand the difference, it's easy to think that unicode sucks... I'd say we need support for *languages*, not more internal encodings. Cheers /F From mal at lemburg.com Wed Feb 7 13:23:50 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Wed, 07 Feb 2001 13:23:50 +0100 Subject: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> <084e01c090fb$58aa9820$8119fea9@neil> <01a401c090fd$5165b700$0900a8c0@SPIFF> Message-ID: <3A813E56.1EE782DD@lemburg.com> Fredrik Lundh wrote: > > Neil Hodgson wrote: > > Matz: "We don't believe there can be any single characer-encoding that > > encompasses all the world's languages. We want to handle multiple encodings > > at the same time (if you want to)." > > neither does the unicode designers, of course: the point > is that unicode only deals with glyphs, not languages. > > most existing japanese encodings also include language info, > and if you don't understand the difference, it's easy to think > that unicode sucks... > > I'd say we need support for *languages*, not more internal > encodings. >>> print "Hello World!".encode('ascii','German') Hallo Welt! Nice thought ;-) Seriously, do you think that these issues are solvable at the programming language level ? I think that the information needed to fully support language specific notations is much too complicated to go into the Python core. This should be left to applications and add-on packages to figure out. 
-- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mal at lemburg.com Wed Feb 7 14:06:40 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Wed, 07 Feb 2001 14:06:40 +0100 Subject: [Python-Dev] PEPS, version control, release intervals References: <200102062150.f16LodG01662@mira.informatik.hu-berlin.de> Message-ID: <3A814860.69640E7C@lemburg.com> "Martin v. Loewis" wrote: > > > A more critical issue might be why people haven't adopted 2.0 yet; > > there seems little reason is there to continue using 1.5.2, yet I > > still see questions on the XML-SIG, for example, from people who > > haven't upgraded. Is it that Zope doesn't support it? Or that Red > > Hat and Debian don't include it? > > Availability of Linux binaries is certainly an issue. On xml-sig, one > Linux distributor (I forgot whether SuSE or Redhat) mentioned that > they won't include 2.0 in their current major release series (7.x for > both). > > Furthermore, the available 2.0 binaries won't work for either Redhat > 7.0 nor SuSE 7.0; I think collecting binaries as we did for earlier > releases is an important activity that was forgotten during 2.0. > > In addition, many packages are still not available for 2.0. Zope is > only one of them; gtk, Qt, etc packages are still struggling with > Unicode support. omniORBpy has #include 
                              in their > sources, ILU does not compile on 2.0 (due to wrong tests involving the > PY_MAJOR/MINOR roll-over), Fnorb falls into the select.bind parameter > change pitfall. This list probably could be continued - I'm sure many > of the maintainers of these packages would appreciate a helping hand > from some Python Guru. Does this mean that doing CORBA et al. with Python 2.0 is currently not possible ? I will have a need for this starting this summer (along with SOAP and XML), so I'd be willing to help out. Who should I contact ? -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mal at lemburg.com Wed Feb 7 16:32:29 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Wed, 07 Feb 2001 16:32:29 +0100 Subject: [Python-Dev] New benchmark results (2.1a2 vs. 2.0) Message-ID: <3A816A8D.38990044@lemburg.com> I reran the benchmark I posted a couple of days ago against the current CVS tree. Here are the results (this time I double checked that both version were compiled using the same compiler settings) on my AMD K6 (I gave back the AMK K6 to Andrew :-). This time I ran the benchmark with Python in -O mode which should give better performance characteristics: PYBENCH 0.8 Benchmark: tmp/pybench-2.1a2-O.pyb (rounds=10, warp=20) Tests: per run per oper. diff * ------------------------------------------------------------------------ BuiltinFunctionCalls: 1080.60 ms 8.48 us +7.91% BuiltinMethodLookup: 1185.60 ms 2.26 us +47.86% ConcatStrings: 1157.75 ms 7.72 us +10.03% ConcatUnicode: 1398.80 ms 9.33 us +8.76% CreateInstances: 1694.30 ms 40.34 us +12.08% CreateStringsWithConcat: 1393.90 ms 6.97 us +9.75% CreateUnicodeWithConcat: 1487.90 ms 7.44 us +7.81% DictCreation: 1794.45 ms 11.96 us +4.22% DictWithFloatKeys: 2102.75 ms 3.50 us +18.03% DictWithIntegerKeys: 1107.80 ms 1.85 us +13.33% DictWithStringKeys: 892.80 ms 1.49 us -2.39% ForLoops: 1145.95 ms 114.59 us -0.00% IfThenElse: 1229.60 ms 1.82 us +15.67% ListSlicing: 551.75 ms 157.64 us +2.23% NestedForLoops: 649.65 ms 1.86 us -0.60% NormalClassAttribute: 1253.35 ms 2.09 us +29.57% NormalInstanceAttribute: 1394.25 ms 2.32 us +51.52% PythonFunctionCalls: 942.45 ms 5.71 us -10.22% PythonMethodCalls: 975.30 ms 13.00 us +14.33% Recursion: 770.35 ms 61.63 us -0.42% SecondImport: 855.50 ms 34.22 us -1.37% SecondPackageImport: 869.40 ms 34.78 us -2.56% SecondSubmoduleImport: 1075.40 ms 43.02 us -3.93% SimpleComplexArithmetic: 1632.95 ms 7.42 us +7.04% SimpleDictManipulation: 1018.15 ms 3.39 us +11.44% SimpleFloatArithmetic: 782.25 ms 1.42 us +0.49% SimpleIntFloatArithmetic: 770.70 ms 1.17 us +0.93% SimpleIntegerArithmetic: 769.85 ms 1.17 us +0.82% SimpleListManipulation: 1097.35 ms 4.06 us +13.16% SimpleLongArithmetic: 1274.80 ms 7.73 us +8.27% SmallLists: 1982.30 ms 7.77 us +5.20% SmallTuples: 1259.90 ms 5.25 us +3.87% SpecialClassAttribute: 1265.35 ms 2.11 us +33.74% SpecialInstanceAttribute: 1694.35 ms 2.82 us +51.38% StringMappings: 1483.15 ms 11.77 us +8.04% StringPredicates: 1205.05 ms 4.30 us -4.89% StringSlicing: 1158.00 ms 6.62 us +12.65% TryExcept: 1128.70 ms 0.75 us -1.22% TryRaiseExcept: 1199.50 ms 79.97 us +6.45% TupleSlicing: 971.40 ms 9.25 us +10.99% UnicodeMappings: 1111.15 ms 61.73 us -2.04% UnicodePredicates: 1307.20 ms 5.81 us -7.54% UnicodeProperties: 1228.05 ms 6.14 us +8.81% UnicodeSlicing: 1032.95 ms 5.90 us -7.52% 
------------------------------------------------------------------------ Average round time: 59476.00 ms +6.18% *) measured against: tmp/pybench-2.0-O.pyb (rounds=10, warp=20) The version 0.8 pybench archive can be downloaded from: http://www.lemburg.com/python/pybench-0.8.zip It includes two new test for special dictionary keys. What's interesting here is that attribute lookups seem to have suffered (I consider figures above ~10% to be significant) while Python function calls got faster. The new dictionary key tests nicely show the effect of the string optimization compared to the standard lookup scheme which applies lots of error checking. OTOH, it is surprising that attribute lookup got a slowdown since these normally are string lookups in dictionaries... -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From ping at lfw.org Wed Feb 7 17:12:33 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Wed, 7 Feb 2001 08:12:33 -0800 (PST) Subject: [Python-Dev] unichr In-Reply-To: <3A80B8D0.381BD92C@ActiveState.com> Message-ID: 
                              On Tue, 6 Feb 2001, Paul Prescod wrote: > Does anyone have an example of real code that would break if unichr and > chr were merged? chr would return a regular string if possible and a > Unicode string otherwise. When the two string types are merged, there > would be no need to deprecate unichr as redundant. At the moment, since the default encoding is ASCII, something like u"abc" + chr(200) would cause an exception because 200 is outside of the ASCII range. So if unichr and chr were merged right now as you suggest, u"abc" + unichr(200) would break: unichr(200) would have to return '\xc8' (not u'\xc8') for compatibility with chr(200), yet the concatenation would fail. You can see that any argument from 128 to 255 would cause this problem, since it would be outside the definitely-8-bit range and also outside the definitely-Unicode range. -- ?!ng From guido at digicool.com Wed Feb 7 08:39:11 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 07 Feb 2001 02:39:11 -0500 Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta In-Reply-To: Your message of "Tue, 06 Feb 2001 10:48:15 +0200." <20010206084815.E63E5A840@darjeeling.zadka.site.co.il> References: <20010205170340.A3101@thyrsus.com>, <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010206084815.E63E5A840@darjeeling.zadka.site.co.il> Message-ID: <200102070739.CAA07014@cj20424-a.reston1.va.home.com> > That's how woody works now, and the binaries are called python and python2. The binaries should be called python1.5 and python2.0, and python should be a symlink to whatever is the default one. This is how the standard "make install" works, and it makes it possible for scripts to require a specific version by specifying e.g. #! /usr/bin/env python1.5 at the top. --Guido van Rossum (home page: http://www.python.org/~guido/) From moshez at zadka.site.co.il Wed Feb 7 20:54:42 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Wed, 7 Feb 2001 21:54:42 +0200 (IST) Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta In-Reply-To: <200102070739.CAA07014@cj20424-a.reston1.va.home.com> References: <200102070739.CAA07014@cj20424-a.reston1.va.home.com>, <20010205170340.A3101@thyrsus.com>, <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010206084815.E63E5A840@darjeeling.zadka.site.co.il> Message-ID: <20010207195442.290E2A840@darjeeling.zadka.site.co.il> On Wed, 07 Feb 2001 02:39:11 -0500, Guido van Rossum 
                              wrote: > The binaries should be called python1.5 and python2.0, and python > should be a symlink to whatever is the default one. No they shouldn't. Joey Hess wrote to debian-python about the problems such a scheme caused when Perl5.005 and Perl 5.6 tried to coexist. -- For public key: finger moshez at debian.org | gpg --import 
                              Debian - All the power, without the silly hat. From shaleh at valinux.com Wed Feb 7 21:03:57 2001 From: shaleh at valinux.com (Sean 'Shaleh' Perry) Date: Wed, 07 Feb 2001 12:03:57 -0800 (PST) Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta In-Reply-To: <20010207195442.290E2A840@darjeeling.zadka.site.co.il> Message-ID: 
                              On 07-Feb-2001 Moshe Zadka wrote: > On Wed, 07 Feb 2001 02:39:11 -0500, Guido van Rossum 
                              > wrote: >> The binaries should be called python1.5 and python2.0, and python >> should be a symlink to whatever is the default one. > > No they shouldn't. Joey Hess wrote to debian-python about the problems > such a scheme caused when Perl5.005 and Perl 5.6 tried to coexist. Guido, the problem lies in we have no default. The user may install only 2.x or 1.5. Scripts that handle the symlink can fail and then the user is left without a python. In the case where only one is installed, this is easy. however in a packaged system where any number of pythons could exist, problems arise. Now, the problem with perl was a bad one because the thing in charge of the symlink was itself a perl script. From bsass at freenet.edmonton.ab.ca Wed Feb 7 21:10:38 2001 From: bsass at freenet.edmonton.ab.ca (Bruce Sass) Date: Wed, 7 Feb 2001 13:10:38 -0700 (MST) Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta In-Reply-To: <20010207195442.290E2A840@darjeeling.zadka.site.co.il> Message-ID: 
                              On Wed, 7 Feb 2001, Moshe Zadka wrote: > On Wed, 07 Feb 2001 02:39:11 -0500, Guido van Rossum 
                              wrote: > > The binaries should be called python1.5 and python2.0, and python > > should be a symlink to whatever is the default one. > > No they shouldn't. Joey Hess wrote to debian-python about the problems > such a scheme caused when Perl5.005 and Perl 5.6 tried to coexist. Maybe that needs to be explained again, in real simple terms. My understanding is that it was a problem with the programs not properly identifying which version of Perl they need, if any. - Bruce From guido at digicool.com Wed Feb 7 09:36:56 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 07 Feb 2001 03:36:56 -0500 Subject: [Python-Dev] fp vs. fd In-Reply-To: Your message of "Tue, 06 Feb 2001 14:06:00 EST." <200102061906.f16J60x11156@snark.thyrsus.com> References: <200102061906.f16J60x11156@snark.thyrsus.com> Message-ID: <200102070836.DAA08865@cj20424-a.reston1.va.home.com> > There are a number of places in the Python library that require a > numeric file descriptor, rather than a file object. This complicates > code slightly and (IMO) breaches the wrapper around the file-object > abstraction (which Guido says is only supposed to depend on > stdio-level stuff). > > Are there design reasons for this, or is it historical accident? > > If the latter, I'll go through and fix these to accept either an fd > or an fp. And fix the docs, too. I don't see why this violates abstraction. Take e.g. select. Sometimes you have opened a low-level filedescriptor, e.g. with os.open() or os.pipe(). So it clearly must take an integer fd. Sometimes you have an object at hand that has a fileno() method, e.g. a socket. It would be a waste of time to have to maintain a mapping from integer fd to object in the app, so it's useful to take an object with a fileno() method. There's no problem with knowing that on some (most) platforms, standard files have an underlying implementation using integer fds, and using this in some apps. That's not to say that Python should offer standar APIs that *require* having such an implementation. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Wed Feb 7 09:41:47 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 07 Feb 2001 03:41:47 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules socketmodule.c,1.135,1.136 In-Reply-To: Your message of "Tue, 06 Feb 2001 18:07:23 EST." <20010206180723.B1269@thrak.cnri.reston.va.us> References: 
                              <20010206180723.B1269@thrak.cnri.reston.va.us> Message-ID: <200102070841.DAA08929@cj20424-a.reston1.va.home.com> > On Tue, Feb 06, 2001 at 02:58:07PM -0800, A.M. Kuchling wrote: > >! if (!PyArg_ParseTuple(args, "s|i:write", &data, &len)) > >! if (!PyArg_ParseTuple(args, "s#|i:write", &data, &len)) > > Hm... actually, this patch isn't correct after all. The |i meant you > could specify an optional integer to write out only a partial chunk of > the string; why not just slice it? Since the SSL code isn't > documented, I'm tempted to just rip out the |i. Yes, rip it out. The old API was poorly designed, and let you do bad things (e.g. pass a length much larger than len(s)). --Guido van Rossum (home page: http://www.python.org/~guido/) From paulp at ActiveState.com Wed Feb 7 21:49:15 2001 From: paulp at ActiveState.com (Paul Prescod) Date: Wed, 07 Feb 2001 12:49:15 -0800 Subject: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> <084e01c090fb$58aa9820$8119fea9@neil> Message-ID: <3A81B4CB.DDA4E304@ActiveState.com> Neil Hodgson wrote: > > ... > > Matz: "We don't believe there can be any single characer-encoding that > encompasses all the world's languages. We want to handle multiple encodings > at the same time (if you want to)." > > The approach taken in the next version of Ruby is for all string and > regex objects to have an encoding attribute and for there to be > infrastructure to handle operations that combine encodings. I think Python should support as many encodings as people invent. Conceptually it doesn't cost me anything, but I'll leave the implementation to you. :) But an encoding is only a way of *representing a character in memory or on disk*. Asking for Python to support multiple encodings in memory is like asking for it to support both two's complement and one's complement long integers. Multiple encodings can be only interesting as a performance issue because the encoding of memory is *transparent* to the *Python programmer*. We could support a thousand encodings internally but a Python programmer should never know or care which one they are dealing with. Which leads me to ask "what's the point"? Would the small performance gains be worth it? > One of the things that is needed in a project that tries to fulfill the > needs of large character set users is to have some of those users involved > in the process. When I first saw proposals to use Unicode in products at > Reuters back in 1994, it looked to me (and the proposal originators) as if > it could do everything anyone ever needed. It was only after strenuous and > persistant argument from the Japanese and Hong Kong offices that it became > apparent that Unicode just wasn't enough. A partial solution then was to > include language IDs encoded in the Private Use Area. This was still being > discussed when I left but while it went some way to satisfying needs, there > was still some unhappiness. I think that Unicode has changed quite a bit since 1994. Nevertheless, language IDs is a fine solution. Unicode is not about distinguishing between languages -- only characters. There is no better "non-Unicode" solution that I've ever heard of. > If Python could cooperate with Ruby here, then not only could code be > shared but Python would gain access to developers with large character set > /needs/ and experience. I don't see how we could meaningfully cooperate on such a core language issue. 
We could of course share codecs but that has nothing to do with Python's internal representation. Paul Prescod From akuchlin at cnri.reston.va.us Wed Feb 7 22:00:02 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Wed, 7 Feb 2001 16:00:02 -0500 Subject: [Python-Dev] Pre-PEP: Python Character Model In-Reply-To: <3A81B4CB.DDA4E304@ActiveState.com>; from paulp@ActiveState.com on Wed, Feb 07, 2001 at 12:49:15PM -0800 References: <3A7F9084.509510B8@ActiveState.com> <084e01c090fb$58aa9820$8119fea9@neil> <3A81B4CB.DDA4E304@ActiveState.com> Message-ID: <20010207160002.A2123@thrak.cnri.reston.va.us> On Wed, Feb 07, 2001 at 12:49:15PM -0800, Paul Prescod quoted: >> The approach taken in the next version of Ruby is for all string and >> regex objects to have an encoding attribute and for there to be >> infrastructure to handle operations that combine encodings. Any idea if this next version of Ruby is available in its current state, or if it's vaporware? It might be worth looking at what exactly it implements, but I wonder if this is just Matz's idea and he hasn't yet tried implementing it. >We could support a thousand encodings internally but a Python programmer >should never know or care which one they are dealing with. Which leads >me to ask "what's the point"? Would the small performance gains be worth >it? I'd worry that implementing a regex engine for multiple encodings would be impossible or, if possible, it would be quite slow because you'd need to abstract every single character retrieval into a function call that decodes a single character for a given encoding. Massive surgery was required to make Perl handle UTF-8, for example, and I don't know that Perl's engine is actually fully operational with UTF-8 yet. --amk From nhodgson at bigpond.net.au Wed Feb 7 22:37:18 2001 From: nhodgson at bigpond.net.au (Neil Hodgson) Date: Thu, 8 Feb 2001 08:37:18 +1100 Subject: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> <084e01c090fb$58aa9820$8119fea9@neil> <3A81B4CB.DDA4E304@ActiveState.com> <20010207160002.A2123@thrak.cnri.reston.va.us> Message-ID: <03cd01c0914e$30aa7d10$8119fea9@neil> Andrew Kuchling: > Any idea if this next version of Ruby is available in its current > state, or if it's vaporware? It might be worth looking at what > exactly it implements, but I wonder if this is just Matz's idea and he > hasn't yet tried implementing it. AFAIK, 1.7 is still vaporware although the impression that I got was this was being implemented by Matz when he mentioned it in mid December. Some code may be available from CVS but I haven't been following that closely. > I'd worry that implementing a regex engine for multiple encodings > would be impossible or, if possible, it would be quite slow because > you'd need to abstract every single character retrieval into a > function call that decodes a single character for a given encoding. 
                              
                              I'd guess at some sort of type promotion system with caching to avoid extra conversions. Say you want to search a Shift-JIS string for a KOI8 string (unlikely but they do share many characters). The infrastructure checks the character sets representable in the encodings and chooses a super-type that can include all possibilities in the expression, then promotes both arguments by reencoding and performs the operation. The super-type would likely be Unicode based although given Matz' desire for larger-than-Unicode character sets, it may be something else.
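A minimal sketch of the promotion scheme described above, using 2.0-era Python. The function names (promote, find_in) and the encoding names are only illustrative, and a Shift-JIS codec was a separate add-on package at the time, so this is an assumption-laden sketch rather than anything in Ruby or Python:

    import types

    def promote(s, encoding):
        # Promote a byte string to the common super-type (Unicode here);
        # Unicode objects pass through unchanged.
        if type(s) is types.UnicodeType:
            return s
        return unicode(s, encoding)

    def find_in(haystack, haystack_enc, needle, needle_enc):
        # Both operands are re-encoded into the super-type before the
        # operation, so e.g. a KOI8 needle can be searched for in a
        # Shift-JIS haystack.
        return promote(haystack, haystack_enc).find(promote(needle, needle_enc))

A real implementation would cache the promoted form with the string, as suggested above, instead of re-decoding on every operation.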
                               Neil From andy at reportlab.com Thu Feb 8 00:06:12 2001 From: andy at reportlab.com (Andy Robinson) Date: Wed, 7 Feb 2001 23:06:12 -0000 Subject: [I18n-sig] Re: [Python-Dev] Pre-PEP: Python Character Model In-Reply-To: <3A801E49.F8DF70E2@ActiveState.com> Message-ID: 
                              
                              > The last time we went around there was an anti-Unicode faction who > argued that adding Unicode support was fine but making it > the default would inconvenience Japanese users. Whoops, I nearly missed the biggest debate of the year! I guess the faction was Brian and I, and our concerns were misunderstood. We can lay this to rest forever now as the current implementation and forward direction incorporate everything I originally hoped for: (1) Frequently you need to work with byte arrays, but need a rich bunch of string-like routines - search and replace, regex etc. This applies both to non-natural-language data and also to the special case of corrupt native encodings that need repair. We loosely defined the 'string interface' in UserString, so that other people could define string-like types if they wished and so that users can expect to find certain methods and operations in both Unicode and Byte Array types. I'd be really happy one day to explicitly type x= ByteArray('some raw data') as long as I had my old friends split, join, find etc. (2) Japanese projects often need small extensions to codecs to deal with user-defined characters. Java and VB give you some canned codecs but no way to extend them. All the Python asian codec drafts involve 'open' code you can hack and use simple dictionaries for mapping tables; so it will be really easy to roll your own "Shift-JIS-plus" with 20 extra characters mapping to a private use area. This will be a huge win over other languages. (3) The Unicode conversion was based on a more general notion of 'stream conversion filters' which work with bytes. This leaves the door open to writing, for example, a direct Shift-JIS-to-EUC filter which adds nothing in the case of clean data but is much more robust in the case of user-defined characters or which can handle cleanup of misencoded data. We could also write image manipulation or crypto codecs. Some of us hope to provide general machinery for fast handling of byte-stream-filters which could be useful in image processing and crypto as well as encodings. This might need an extended or different lookup function (after all, neither end of the filter need be Unicode) but could be cleanly layered on top of the codec mechanism we have built in. (4) I agree 100% on being explicit whenever you do I/O or conversion and on generally using Unicode characters where possible. Defaults are evil. But we needed a compatibility route to get there. Guido has said that long term there will be Unicode strings and Byte Arrays. That's the time to require arguments to open(). > Similarly, we could improve socket objects so that they > have different > readtext/readbinary and writetext/writebinary without unifying the > string objects. There are lots of small changes we can make without > breaking anything. One I would like to see right now is a > unification of > chr() and unichr(). Here's a thought. How about BinaryFile/BinarySocket/ByteArray which do not need an encoding, and File/Socket/String which require explicit encodings on opeening. We keep broad parity between their methods. That seems more straightforward to me than having text/binary methods, and also provides a cleaner upgrade path for existing code. 
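A toy sketch of the kind of "open" codec extension described in point (2) above. It is single-byte only for brevity (real Shift-JIS handling is multibyte), the byte values and names (EXTRA, decode_with_extras) are made up, and Latin-1 stands in for the base codec:

    # Hypothetical user-defined characters: extra byte values -> private use area.
    EXTRA = {0xF0: 0xE000, 0xF1: 0xE001}

    def decode_with_extras(data, base_encoding, extra_map):
        result = []
        for ch in data:
            code = ord(ch)
            if extra_map.has_key(code):
                result.append(unichr(extra_map[code]))     # user-defined character
            else:
                result.append(unicode(ch, base_encoding))  # defer to the stock codec
        return u"".join(result)

    text = decode_with_extras("abc\xf0", "latin-1", EXTRA)  # -> u'abc\ue000'

Because the mapping is just a dictionary, adding twenty private-use characters to a vendor codec is a one-line change rather than a rebuild of a canned codec.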
- Andy From skip at mojam.com Thu Feb 8 00:07:16 2001 From: skip at mojam.com (Skip Montanaro) Date: Wed, 7 Feb 2001 17:07:16 -0600 (CST) Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta In-Reply-To: <20010207195442.290E2A840@darjeeling.zadka.site.co.il> References: <200102070739.CAA07014@cj20424-a.reston1.va.home.com> <20010205170340.A3101@thyrsus.com> <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010206084815.E63E5A840@darjeeling.zadka.site.co.il> <20010207195442.290E2A840@darjeeling.zadka.site.co.il> Message-ID: <14977.54564.430670.260975@beluga.mojam.com> Moshe> No they shouldn't. Joey Hess wrote to debian-python about the Moshe> problems such a scheme caused when Perl5.005 and Perl 5.6 tried Moshe> to coexist. Can you summarize or post that message here? I've never had a problem with the scheme that Python currently uses aside from occasionally having the redirect the python symlink after an install. Skip From martin at loewis.home.cs.tu-berlin.de Thu Feb 8 01:06:41 2001 From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis) Date: Thu, 8 Feb 2001 01:06:41 +0100 Subject: [Python-Dev] PEPS, version control, release intervals In-Reply-To: <3A814860.69640E7C@lemburg.com> (mal@lemburg.com) References: <200102062150.f16LodG01662@mira.informatik.hu-berlin.de> <3A814860.69640E7C@lemburg.com> Message-ID: <200102080006.f1806fj01504@mira.informatik.hu-berlin.de> > Does this mean that doing CORBA et al. with Python 2.0 is > currently not possible ? It is possible; people have posted patches to Fnorb (which only add tuples in the right places). Also, the omniORB CVS cooperates with Python 2.0. There just is nothing that's officially released. > I will have a need for this starting this summer (along with SOAP > and XML), so I'd be willing to help out. Who should I contact ? Depends on what you want to take as a starting point. For Fnorb, it would be DSTC, although it appears to be "officially unmaintained" for the moment. For omniORB, contact Duncan Grisby - he's usually quite responsive. For ILU, it would be Bill Janssen; I'm sure he'll accept patches. If you need something in a commercial environment (i.e. where purchasing licenses is not an issue), feel free to contact me in private :-) In general, the DO SIG (do-sig at python.org) is a good place to discuss both CORBA and SOAP. Regards, Martin From sdm7g at virginia.edu Thu Feb 8 05:31:50 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Wed, 7 Feb 2001 23:31:50 -0500 (EST) Subject: [Python-Dev] more 2.1a2 macosx build problems Message-ID: 
                              
                              Is anyone else tracking builds on macosx ? A bug I reported [#131170] on the 2.1a2 release has been growing more heads... Initial problem: make install fails as it tries to run ranlib on a shared library: ranlib: file: /usr/local/lib/python2.1/config/libpython2.1.dylib is not an archive commented out that line in the makefile: @if test -d $(LDLIBRARY); then :; else \ $(INSTALL_DATA) $(LDLIBRARY) $(LIBPL)/$(LDLIBRARY) ; \ # $(RANLIB) $(LIBPL)/$(LDLIBRARY) ; \ make and install seem to work, however, if you run python from somewhere other than the build directory, you get a fatal error: dyld: python2.1 can't open library: libpython2.1.dylib (No such file or directory, errno = 2) looking at executable with 'otool -L' shows that while system frameworks have their complete pathnames, libpython2.1.dylib has no path, so it's expected to be in the current directory. Added "-install_name $(LIBPL)/$(LDLIBRARY)" to the libtool command to tell it that it will be installed somewhere other than the current build directory. 'make' fails on setup when python can't find os module. Investigating that, it looks like sys.path is all confused. Looking at Modules/getpath.c, it looks like the WITH_NEXT_FRAMEWORK conditional code is getting the path from the shared library and not the executable. -- Steve Majewski From tim_one at email.msn.com Thu Feb 8 06:24:41 2001 From: tim_one at email.msn.com (Tim Peters) Date: Thu, 8 Feb 2001 00:24:41 -0500 Subject: [Python-Dev] fp vs. fd In-Reply-To: 
                              
                              Message-ID: 
                              
                              [Eric S. Raymond] > There are a number of places in the Python library that require a > numeric file descriptor, rather than a file object. [Ka-Ping Yee] > I'm curious... where? mmap.mmap(fileno, ...) for me most recently, where, usually, it's simply annoying. fresh-on-my-mind-ly y'rs - tim From uche.ogbuji at fourthought.com Thu Feb 8 08:21:55 2001 From: uche.ogbuji at fourthought.com (Uche Ogbuji) Date: Thu, 08 Feb 2001 00:21:55 -0700 Subject: [Python-Dev] PEPS, version control, release intervals In-Reply-To: Message from "Martin v. Loewis" 
                              
                              of "Tue, 06 Feb 2001 22:50:39 +0100." <200102062150.f16LodG01662@mira.informatik.hu-berlin.de> Message-ID: <200102080721.AAA26782@localhost.localdomain> > Availability of Linux binaries is certainly an issue. On xml-sig, one > Linux distributor (I forgot whether SuSE or Redhat) mentioned that > they won't include 2.0 in their current major release series (7.x for > both). 'Twas Red Hat. However, others claim to have spotted Python 2.0 in Rawhide and supposedly both versions might be included until 8.0. > In addition, many packages are still not available for 2.0. Zope is > only one of them; gtk, Qt, etc packages are still struggling with > Unicode support. omniORBpy has #include 
                              
                              in their > sources, I hadn't noticed this. OmniORBpy compiles and runs just fine for me using Python 2.0 and 2.1a2, except that it throws BAD_PARAM when passed Unicode objects in place of strings. -- Uche Ogbuji Principal Consultant uche.ogbuji at fourthought.com +1 303 583 9900 x 101 Fourthought, Inc. http://Fourthought.com 4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA Software-engineering, knowledge-management, XML, CORBA, Linux, Python From uche.ogbuji at fourthought.com Thu Feb 8 08:26:25 2001 From: uche.ogbuji at fourthought.com (Uche Ogbuji) Date: Thu, 08 Feb 2001 00:26:25 -0700 Subject: [Python-Dev] PEPS, version control, release intervals In-Reply-To: Message from "M.-A. Lemburg" 
                              
                              of "Wed, 07 Feb 2001 14:06:40 +0100." <3A814860.69640E7C@lemburg.com> Message-ID: <200102080726.AAA27240@localhost.localdomain> > Does this mean that doing CORBA et al. with Python 2.0 is > currently not possible ? > > I will have a need for this starting this summer (along with SOAP > and XML), so I'd be willing to help out. Who should I contact ? No. You can use OmniORBpy as long as you're careful not to mix your strings with your unicode objects. I don't know the tale of SOAP. soaplib seems stuck at 0.8. Not that I blame anyone: the experience of hacking a subset of SOAP into 4Suite Server left me in a bad mood for days. Someone was tanked when they came up with that. XML is rather an odd man out in your list. Do you mean custom XML over HTTP or somesuch? -- Uche Ogbuji Principal Consultant uche.ogbuji at fourthought.com +1 303 583 9900 x 101 Fourthought, Inc. http://Fourthought.com 4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA Software-engineering, knowledge-management, XML, CORBA, Linux, Python From mal at lemburg.com Thu Feb 8 12:35:22 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Thu, 08 Feb 2001 12:35:22 +0100 Subject: [Python-Dev] PEPS, version control, release intervals References: <200102080726.AAA27240@localhost.localdomain> Message-ID: <3A82847A.14496A01@lemburg.com> Uche Ogbuji wrote: > > > Does this mean that doing CORBA et al. with Python 2.0 is > > currently not possible ? > > > > I will have a need for this starting this summer (along with SOAP > > and XML), so I'd be willing to help out. Who should I contact ? > > No. You can use OmniORBpy as long as you're careful not to mix your strings > with your unicode objects. Good news :-) Thanks. > I don't know the tale of SOAP. soaplib seems stuck at 0.8. Not that I blame > anyone: the experience of hacking a subset of SOAP into 4Suite Server left me > in a bad mood for days. Someone was tanked when they came up with that. > > XML is rather an odd man out in your list. Do you mean custom XML over HTTP > or somesuch? Well, for one SOAP is XML-based and I am planning to add full XML support to our application server this summer (still waiting for the dust to settle :-). The reason for trying to support SOAP is that some very important legacy system vendors (e.g. SAP) are moving into this direction. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mal at lemburg.com Thu Feb 8 13:53:57 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Thu, 08 Feb 2001 13:53:57 +0100 Subject: [Python-Dev] PEPS, version control, release intervals References: <200102062150.f16LodG01662@mira.informatik.hu-berlin.de> <3A814860.69640E7C@lemburg.com> <200102080006.f1806fj01504@mira.informatik.hu-berlin.de> Message-ID: <3A8296E5.C7853746@lemburg.com> "Martin v. Loewis" wrote: > > > Does this mean that doing CORBA et al. with Python 2.0 is > > currently not possible ? > > It is possible; people have posted patches to Fnorb (which only add > tuples in the right places). Also, the omniORB CVS cooperates with > Python 2.0. There just is nothing that's officially released. Looks like this is another issue with the current pace at which Python releases appear. I am starting to get these problems too with my mx tools: people download the wrong version and then find that the tools don't work with their installed version of Python (on Windows that is). 
Luckily, distutils makes this easier to handle, but many of the tools out there still don't use it. > > I will have a need for this starting this summer (along with SOAP > > and XML), so I'd be willing to help out. Who should I contact ? > > Depends on what you want to take as a starting point. For Fnorb, it > would be DSTC, although it appears to be "officially unmaintained" for > the moment. For omniORB, contact Duncan Grisby - he's usually quite > responsive. For ILU, it would be Bill Janssen; I'm sure he'll accept > patches. If you need something in a commercial environment (i.e. where > purchasing licenses is not an issue), feel free to contact me in > private :-) Depends on the licensing costs, but yes, this is for a commercial product ;-) > In general, the DO SIG (do-sig at python.org) is a good place to discuss > both CORBA and SOAP. Thank you for the details. I'll sign up to that SIG as well (that should get me to 300 emails a day :-/). -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From Barrett at stsci.edu Thu Feb 8 23:43:37 2001 From: Barrett at stsci.edu (Paul Barrett) Date: Thu, 8 Feb 2001 17:43:37 -0500 (EST) Subject: [Python-Dev] PEP 209: Multi-dimensional Arrays Message-ID: <14979.7675.800077.147879@nem-srvr.stsci.edu> The first draft of PEP 209: Multi-dimensional Arrays is ready for comment. It's primary emphasis is aimed at array operations, but its design is intended to provide a general framework for working with multi-dimensional arrays. This PEP covers a lot of ground and so does not go into much detail at this stage. The hope is that we can fill them in as time goes on. It also presents several Open Issues that need to be discussed. Cheers, Paul P.S. - Sorry Paul (Dubois). We couldn't wait any longer. -- Dr. Paul Barrett Space Telescope Science Institute Phone: 410-338-4475 ESS/Science Software Group FAX: 410-338-4767 Baltimore, MD 21218 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ PEP: 209 Title: Multi-dimensional Arrays Version: Author: barrett at stsci.edu (Paul Barrett), oliphant at ee.byu.edu (Travis Oliphant) Python-Version: 2.2 Status: Draft Type: Standards Track Created: 03-Jan-2001 Post-History: Abstract This PEP proposes a redesign and re-implementation of the multi- dimensional array module, Numeric, to make it easier to add new features and functionality to the module. Aspects of Numeric 2 that will receive special attention are efficient access to arrays exceeding a gigabyte in size and composed of inhomogeneous data structures or records. The proposed design uses four Python classes: ArrayType, UFunc, Array, and ArrayView; and a low-level C-extension module, _ufunc, to handle the array operations efficiently. In addition, each array type has its own C-extension module which defines the coercion rules, operations, and methods for that type. This design enables new types, features, and functionality to be added in a modular fashion. The new version will introduce some incompatibilities with the current Numeric. Motivation Multi-dimensional arrays are commonly used to store and manipulate data in science, engineering, and computing. Python currently has an extension module, named Numeric (henceforth called Numeric 1), which provides a satisfactory set of functionality for users manipulating homogeneous arrays of data of moderate size (of order 10 MB). 
For access to larger arrays (of order 100 MB or more) of possibly inhomogeneous data, the implementation of Numeric 1 is inefficient and cumbersome. In the future, requests by the Numerical Python community for additional functionality is also likely as PEPs 211: Adding New Linear Operators to Python, and 225: Elementwise/Objectwise Operators illustrate. Proposal This proposal recommends a re-design and re-implementation of Numeric 1, henceforth called Numeric 2, which will enable new types, features, and functionality to be added in an easy and modular manner. The initial design of Numeric 2 should focus on providing a generic framework for manipulating arrays of various types and should enable a straightforward mechanism for adding new array types and UFuncs. Functional methods that are more specific to various disciplines can then be layered on top of this core. This new module will still be called Numeric and most of the behavior found in Numeric 1 will be preserved. The proposed design uses four Python classes: ArrayType, UFunc, Array, and ArrayView; and a low-level C-extension module to handle the array operations efficiently. In addition, each array type has its own C-extension module which defines the coercion rules, operations, and methods for that type. At a later date, when core functionality is stable, some Python classes can be converted to C-extension types. Some planned features are: 1. Improved memory usage This feature is particularly important when handling large arrays and can produce significant improvements in performance as well as memory usage. We have identified several areas where memory usage can be improved: a. Use a local coercion model Instead of using Python's global coercion model which creates temporary arrays, Numeric 2, like Numeric 1, will implement a local coercion model as described in PEP 208 which defers the responsibility of coercion to the operator. By using internal buffers, a coercion operation can be done for each array (including output arrays), if necessary, at the time of the operation. Benchmarks [1] have shown that performance is at most degraded only slightly and is improved in cases where the internal buffers are less than the L2 cache size and the processor is under load. To avoid array coercion altogether, C functions having arguments of mixed type are allowed in Numeric 2. b. Avoid creation of temporary arrays In complex array expressions (i.e. having more than one operation), each operation will create a temporary array which will be used and then deleted by the succeeding operation. A better approach would be to identify these temporary arrays and reuse their data buffers when possible, namely when the array shape and type are the same as the temporary array being created. This can be done by checking the temparory array's reference count. If it is 1, then it will be deleted once the operation is done and is a candidate for reuse. c. Optional use of memory-mapped files Numeric users sometimes need to access data from very large files or to handle data that is greater than the available memory. Memory-mapped arrays provide a mechanism to do this by storing the data on disk while making it appear to be in memory. Memory- mapped arrays should improve access to all files by eliminating one of two copy steps during a file access. Numeric should be able to access in-memory and memory-mapped arrays transparently. d. Record access In some fields of science, data is stored in files as binary records. 
For example in astronomy, photon data is stored as a 1 dimensional list of photons in order of arrival time. These records or C-like structures contain information about the detected photon, such as its arrival time, its position on the detector, and its energy. Each field may be of a different type, such as char, int, or float. Such arrays introduce new issues that must be dealt with, in particular byte alignment or byte swapping may need to be performed for the numeric values to be properly accessed (though byte swapping is also an issue for memory mapped data). Numeric 2 is designed to automatically handle alignment and representational issues when data is accessed or operated on. There are two approaches to implementing records; as either a derived array class or a special array type, depending on your point-of- view. We defer this discussion to the Open Issues section. 2. Additional array types Numeric 1 has 11 defined types: char, ubyte, sbyte, short, int, long, float, double, cfloat, cdouble, and object. There are no ushort, uint, or ulong types, nor are there more complex types such as a bit type which is of use to some fields of science and possibly for implementing masked-arrays. The design of Numeric 1 makes the addition of these and other types a difficult and error-prone process. To enable the easy addition (and deletion) of new array types such as a bit type described below, a re-design of Numeric is necessary. a. Bit type The result of a rich comparison between arrays is an array of boolean values. The result can be stored in an array of type char, but this is an unnecessary waste of memory. A better implementation would use a bit or boolean type, compressing the array size by a factor of eight. This is currently being implemented for Numeric 1 (by Travis Oliphant) and should be included in Numeric 2. 3. Enhanced array indexing syntax The extended slicing syntax was added to Python to provide greater flexibility when manipulating Numeric arrays by allowing step-sizes greater than 1. This syntax works well as a shorthand for a list of regularly spaced indices. For those situations where a list of irregularly spaced indices are needed, an enhanced array indexing syntax would allow 1-D arrays to be arguments. 4. Rich comparisons The implementation of PEP 207: Rich Comparisons in Python 2.1 provides additional flexibility when manipulating arrays. We intend to implement this feature in Numeric 2. 5. Array broadcasting rules When an operation between a scalar and an array is done, the implied behavior is to create a new array having the same shape as the array operand containing the scalar value. This is called array broadcasting. It also works with arrays of lesser rank, such as vectors. This implicit behavior is implemented in Numeric 1 and will also be implemented in Numeric 2. Design and Implementation The design of Numeric 2 has four primary classes: 1. ArrayType: This is a simple class that describes the fundamental properties of an array-type, e.g. its name, its size in bytes, its coercion relations with respect to other types, etc., e.g. > Int32 = ArrayType('Int32', 4, 'doc-string') Its relation to the other types is defined when the C-extension module for that type is imported. The corresponding Python code is: > Int32.astype[Real64] = Real64 This says that the Real64 array-type has higher priority than the Int32 array-type. The following attributes and methods are proposed for the core implementation. Additional attributes can be added on an individual basis, e.g. 
.bitsize or .bitstrides for the bit type. Attributes: .name: e.g. "Int32", "Float64", etc. .typecode: e.g. 'i', 'f', etc. (for backward compatibility) .size (in bytes): e.g. 4, 8, etc. .array_rules (mapping): rules between array types .pyobj_rules (mapping): rules between array and python types .doc: documentation string Methods: __init__(): initialization __del__(): destruction __repr__(): representation C-API: This still needs to be fleshed-out. 2. UFunc: This class is the heart of Numeric 2. Its design is similar to that of ArrayType in that the UFunc creates a singleton callable object whose attributes are name, total and input number of arguments, a document string, and an empty CFunc dictionary; e.g. > add = UFunc('add', 3, 2, 'doc-string') When defined the add instance has no C functions associated with it and therefore can do no work. The CFunc dictionary is populated or registerd later when the C-extension module for an array-type is imported. The arguments of the regiser method are: function name, function descriptor, and the CUFunc object. The corresponding Python code is > add.register('add', (Int32, Int32, Int32), cfunc-add) In the initialization function of an array type module, e.g. Int32, there are two C API functions: one to initialize the coercion rules and the other to register the CFunc objects. When an operation is applied to some arrays, the __call__ method is invoked. It gets the type of each array (if the output array is not given, it is created from the coercion rules) and checks the CFunc dictionary for a key that matches the argument types. If it exists the operation is performed immediately, otherwise the coercion rules are used to search for a related operation and set of conversion functions. The __call__ method then invokes a compute method written in C to iterate over slices of each array, namely: > _ufunc.compute(slice, data, func, swap, conv) The 'func' argument is a CFuncObject, while the 'swap' and 'conv' arguments are lists of CFuncObjects for those arrays needing pre- or post-processing, otherwise None is used. The data argument is a list of buffer objects, and the slice argument gives the number of iterations for each dimension along with the buffer offset and step size for each array and each dimension. We have predefined several UFuncs for use by the __call__ method: cast, swap, getobj, and setobj. The cast and swap functions do coercion and byte-swapping, resp. and the getobj and setobj functions do coercion between Numeric arrays and Python sequences. The following attributes and methods are proposed for the core implementation. Attributes: .name: e.g. "add", "subtract", etc. .nargs: number of total arguments .iargs: number of input arguments .cfuncs (mapping): the set C functions .doc: documentation string Methods: __init__(): initialization __del__(): destruction __repr__(): representation __call__(): look-up and dispatch method initrule(): initialize coercion rule uninitrule(): uninitialize coercion rule register(): register a CUFunc unregister(): unregister a CUFunc C-API: This still needs to be fleshed-out. 3. Array: This class contains information about the array, such as shape, type, endian-ness of the data, etc.. Its operators, '+', '-', etc. just invoke the corresponding UFunc function, e.g. > def __add__(self, other): > return ufunc.add(self, other) The following attributes, methods, and functions are proposed for the core implementation. 
Attributes: .shape: shape of the array .format: type of the array .real (only complex): real part of a complex array .imag (only complex): imaginary part of a complex array Methods: __init__(): initialization __del__(): destruction __repr__(): representation __str__(): pretty representation __cmp__(): rich comparison __len__(): __getitem__(): __setitem__(): __getslice__(): __setslice__(): numeric methods: copy(): copy of array aslist(): create list from array asstring(): create string from array Functions: fromlist(): create array from sequence fromstring(): create array from string array(): create array with shape and value concat(): concatenate two arrays resize(): resize array C-API: This still needs to be fleshed-out. 4. ArrayView This class is similar to the Array class except that the reshape and flat methods will raise exceptions, since non-contiguous arrays cannot be reshaped or flattened using just pointer and step-size information. C-API: This still needs to be fleshed-out. 5. C-extension modules: Numeric 2 will have several C-extension modules. a. _ufunc: The primary module of this set is _ufuncmodule.c. The intention of this module is to do the bare minimum, i.e. iterate over arrays using a specified C function. The interface of these functions is the same as Numeric 1, i.e. int (*CFunc)(char *data, int *steps, int repeat, void *func); and their functionality is expected to be the same, i.e. they iterate over the inner-most dimension. The following attributes and methods are proposed for the core implementation. Attributes: Methods: compute(): C-API: This still needs to be fleshed-out. b. _int32, _real64, etc.: There will also be C-extension modules for each array type, e.g. _int32module.c, _real64module.c, etc. As mentioned previously, when these modules are imported by the UFunc module, they will automatically register their functions and coercion rules. New or improved versions of these modules can be easily implemented and used without affecting the rest of Numeric 2. Open Issues 1. Does slicing syntax default to copy or view behavior? The default behavior of Python is to return a copy of a sub-list or tuple when slicing syntax is used, whereas Numeric 1 returns a view into the array. The choice made for Numeric 1 is apparently for reasons of performance: the developers wish to avoid the penalty of allocating and copying the data buffer during each array operation and feel that the need for a deepcopy of an array is rare. Yet, some have argued that Numeric's slice notation should also have copy behavior to be consistent with Python lists. In this case the performance penalty associated with copy behavior can be minimized by implementing copy-on-write. This scheme has both arrays sharing one data buffer (as in view behavior) until either array is assigned new data, at which point a copy of the data buffer is made. View behavior would then be implemented by an ArrayView class, whose behavior would be similar to Numeric 1 arrays, i.e. .shape is not settable for non-contiguous arrays. The use of an ArrayView class also makes explicit what type of data the array contains. 2. Does item syntax default to copy or view behavior? A similar question arises with the item syntax. For example, if a = [[0,1,2], [3,4,5]] and b = a[0], then changing b[0] also changes a[0][0], because a[0] is a reference or view of the first row of a. Therefore, if c is a 2-d array, it would appear that c[i] should return a 1-d array which is a view into, instead of a copy of, c for consistency.
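For readers who have not used Numeric 1, a small sketch of the difference discussed in issues 1 and 2, assuming Numeric 1 is installed:

    import Numeric

    a = Numeric.array([0, 1, 2, 3])
    b = a[1:3]      # Numeric 1 slice: a *view* sharing a's data buffer
    b[0] = 99       # a is now [0, 99, 2, 3]

    l = [0, 1, 2, 3]
    m = l[1:3]      # Python list slice: a *copy*
    m[0] = 99       # l is unchanged

Copy semantics plus copy-on-write would make the array case behave like the list case without paying for the copy until one side is actually modified.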
Yet, c[i] can be considered just a shorthand for c[i,:], which would imply copy behavior assuming slicing syntax returns a copy. Should Numeric 2 behave the same way as lists and return a view, or should it return a copy? 3. How is scalar coercion implemented? Python has fewer numeric types than Numeric, which can cause coercion problems. For example, when multiplying a Python scalar of type float and a Numeric array of type float, the Numeric array is converted to a double, since the Python float type is actually a double. This is often not the desired behavior, since the Numeric array will be doubled in size, which is likely to be annoying, particularly for very large arrays. We prefer that the array type trumps the Python type for the same type class, namely integer, float, and complex. Therefore an operation between a Python integer and an Int16 (short) array will return an Int16 array, whereas an operation between a Python float and an Int16 array would return a Float64 (double) array. Operations between two arrays use normal coercion rules. 4. How is integer division handled? In a future version of Python, the behavior of integer division will change. The operands will be converted to floats, so the result will be a float. If we implement the proposed scalar coercion rules where arrays have precedence over Python scalars, then dividing an array by an integer will return an integer array and will not be consistent with a future version of Python, which would return an array of type double. Scientific programmers are familiar with the distinction between integer and floating-point division, so should Numeric 2 continue with this behavior? 5. How should records be implemented? There are two approaches to implementing records, depending on your point-of-view. The first is to divide arrays into separate classes depending on the behavior of their types. For example, numeric arrays are one class, strings a second, and records a third, because the range and type of operations of each class differ. As such, a record array is not a new type, but a mechanism for a more flexible form of array. To easily access and manipulate such complex data, the class is comprised of numeric arrays having different byte offsets into the data buffer. For example, one might have a table consisting of an array of Int16, Real32 values. Two numeric arrays, one with an offset of 0 bytes and a stride of 6 bytes to be interpreted as Int16, and one with an offset of 2 bytes and a stride of 6 bytes to be interpreted as Real32, would represent the record array. Both numeric arrays would refer to the same data buffer, but have different offset and stride attributes, and a different numeric type. The second approach is to consider a record as one of many array types, albeit with fewer, and possibly different, array operations than for numeric arrays. This approach considers an array type to be a mapping of a fixed-length string. The mapping can either be simple, like integer and floating-point numbers, or complex, like a complex number, a byte string, and a C-structure. The record type effectively merges the struct and Numeric modules into a multi-dimensional struct array. This approach implies certain changes to the array interface. For example, the 'typecode' keyword argument should probably be changed to the more descriptive 'format' keyword. a. How are record semantics defined and implemented?
Whichever implementation approach is taken for records, the syntax and semantics of how they are to be accessed and manipulated must be decided, if one wishes to have access to sub-fields of records. In this case, the record type can essentially be considered an inhomogeneous list, like a tuple returned by the unpack method of the struct module; and a 1-d array of records may be interpreted as a 2-d array with the second dimension being the index into the list of fields. This enhanced array semantics makes access to an array of one or more of the fields easy and straightforward. It also allows a user to do array operations on a field in a natural and intuitive way. If we assume that records are implemented as an array type, then the last dimension defaults to 0 and can therefore be neglected for arrays comprised of simple types, like numeric. 6. How are masked-arrays implemented? Masked-arrays in Numeric 1 are implemented as a separate array class. With the ability to add new array types to Numeric 2, it is possible that masked-arrays in Numeric 2 could be implemented as a new array type instead of an array class. 7. How are numerical errors handled (IEEE floating-point errors in particular)? It is not clear to the proposers (Paul Barrett and Travis Oliphant) what is the best or preferred way of handling errors. Most of the C functions that do the operation iterate over the inner-most (last) dimension of the array. This dimension could contain a thousand or more items having one or more errors of differing type, such as divide-by-zero, underflow, and overflow. Additionally, keeping track of these errors may come at the expense of performance. Therefore, we suggest several options: a. Print a message of the most severe error, leaving it to the user to locate the errors. b. Print a message of all errors that occurred and the number of occurrences, leaving it to the user to locate the errors. c. Print a message of all errors that occurred and a list of where they occurred. d. Or use a hybrid approach, printing only the most severe error, yet keeping track of what and where the errors occurred. This would allow the user to locate the errors while keeping the error message brief. 8. What features are needed to ease the integration of FORTRAN libraries and code? It would be a good idea at this stage to consider how to ease the integration of FORTRAN libraries and user code in Numeric 2. Implementation Steps 1. Implement basic UFunc capability a. Minimal Array class: Necessary class attributes and methods, e.g. .shape, .data, .type, etc. b. Minimal ArrayType class: Int32, Real64, Complex64, Char, Object c. Minimal UFunc class: UFunc instantiation, CFunction registration, UFunc call for 1-D arrays including the rules for doing alignment, byte-swapping, and coercion. d. Minimal C-extension module: _UFunc, which does the innermost array loop in C. This step implements whatever is needed to do: 'c = add(a, b)' where a, b, and c are 1-D arrays. It teaches us how to add new UFuncs, to coerce the arrays, to pass the necessary information to a C iterator method and to do the actual computation. 2. Continue enhancing the UFunc iterator and Array class a. Implement some access methods for the Array class: print, repr, getitem, setitem, etc. b. Implement multidimensional arrays c. Implement some of the basic Array methods using UFuncs: +, -, *, /, etc. d. Enable UFuncs to use Python sequences. 3. Complete the standard UFunc and Array class behavior a. Implement getslice and setslice behavior b.
Work on Array broadcasting rules c. Implement Record type 4. Add additional functionality a. Add more UFuncs b. Implement buffer or mmap access Incompatibilities The following is a list of incompatibilities in behavior between Numeric 1 and Numeric 2. 1. Scalar coercion rules Numeric 1 has a single set of coercion rules for array and Python numeric types. This can cause unexpected and annoying problems during the calculation of an array expression. Numeric 2 intends to overcome these problems by having two sets of coercion rules: one for arrays and Python numeric types, and another just for arrays. 2. No savespace attribute The savespace attribute in Numeric 1 makes arrays with this attribute set take precedence over those that do not have it set. Numeric 2 will not have such an attribute and therefore normal array coercion rules will be in effect. 3. Slicing syntax returns a copy The slicing syntax in Numeric 1 returns a view into the original array. The slicing behavior for Numeric 2 will be a copy. You should use the ArrayView class to get a view into an array. 4. Boolean comparisons return a boolean array A comparison between arrays in Numeric 1 results in a Boolean scalar, because of current limitations in Python. The advent of Rich Comparisons in Python 2.1 will allow an array of Booleans to be returned. 5. Type characters are deprecated Numeric 2 will have an ArrayType class composed of Type instances, for example Int8, Int16, Int32, and Int for signed integers. The typecode scheme in Numeric 1 will be available for backward compatibility, but will be deprecated. Appendices A. Implicit sub-arrays iteration A computer animation is composed of a number of 2-D images or frames of identical shape. By stacking these images into a single block of memory, a 3-D array is created. Yet the operations to be performed are not meant for the entire 3-D array, but on the set of 2-D sub-arrays. In most array languages, each frame has to be extracted, operated on, and then reinserted into the output array using a for-like loop. The J language allows the programmer to perform such operations implicitly by having a rank for the frame and array. By default these ranks will be the same during the creation of the array. It was the intention of the Numeric 1 developers to implement this feature, since it is based on the language J. The Numeric 1 code has the required variables for implementing this behavior, but the behavior was never implemented. We intend to implement implicit sub-array iteration in Numeric 2, if the array broadcasting rules found in Numeric 1 do not fully support this behavior. Copyright This document is placed in the public domain. Related PEPs PEP 207: Rich Comparisons by Guido van Rossum and David Ascher PEP 208: Reworking the Coercion Model by Neil Schemenauer and Marc-Andre' Lemburg PEP 211: Adding New Linear Algebra Operators to Python by Greg Wilson PEP 225: Elementwise/Objectwise Operators by Huaiyu Zhu PEP 228: Reworking Python's Numeric Model by Moshe Zadka References [1] P. Greenfield, 2000. Private communication. From fdrake at acm.org Fri Feb 9 04:51:34 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Thu, 8 Feb 2001 22:51:34 -0500 (EST) Subject: [Python-Dev] PEP announcements, and summaries In-Reply-To: <20010205141139.K733@thrak.cnri.reston.va.us> References:
                              
                              <3A7EF1A0.EDA4AD24@lemburg.com> <20010205141139.K733@thrak.cnri.reston.va.us> Message-ID: <14979.26950.415841.24705@cj42289-a.reston1.va.home.com> Andrew Kuchling writes: > * Work on the Batteries Included proposals & required infrastructure I'd certainly like to see some machinery that allows us to incorporate arbitrary distutils-based packages in Python source and binary distributions and have them built, tested, and installed alongside the interpreter core. I think this would be the right approach to deal with many components, including the XML and curses components. -Fred -- Fred L. Drake, Jr. 
                              
                              PythonLabs at Digital Creations From moshez at zadka.site.co.il Fri Feb 9 11:35:33 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Fri, 9 Feb 2001 12:35:33 +0200 (IST) Subject: [Python-Dev] PEP announcements, and summaries In-Reply-To: <14979.26950.415841.24705@cj42289-a.reston1.va.home.com> References: <14979.26950.415841.24705@cj42289-a.reston1.va.home.com>, 
                              
                              <3A7EF1A0.EDA4AD24@lemburg.com> <20010205141139.K733@thrak.cnri.reston.va.us> Message-ID: <20010209103533.E7EA3A840@darjeeling.zadka.site.co.il> On Thu, 8 Feb 2001, "Fred L. Drake, Jr." 
                              
                              wrote: > I'd certainly like to see some machinery that allows us to > incorporate arbitrary distutils-based packages in Python source and > binary distributions and have them built, tested, and installed > alongside the interpreter core. > I think this would be the right approach to deal with many > components, including the XML and curses components. You can take a look at PEP-0206. I'd appreciate any feedback! (And of course, come to the DevDay session) -- For public key: finger moshez at debian.org | gpg --import 
                              
                              Debian - All the power, without the silly hat. From mal at lemburg.com Fri Feb 9 14:59:54 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 09 Feb 2001 14:59:54 +0100 Subject: [Python-Dev] Removing the implicit str() call from printing API Message-ID: <3A83F7DA.A94AB88E@lemburg.com> There was some discussion about this subject before, but nothing much happened, so here we go again... Printing in Python is a rather complicated task. It involves many different APIs, flags, etc. Deep down in the printing machinery there is a hidden call to str() which converts the to be printed object into a string object. This is fine for non-string objects like numbers, but causes trouble when it comes to printing Unicode objects due to the auto-conversions this causes. There is a patch on SF which tries to remedy this, but it introduces a special attribute to maintain backward compatibility: http://sourceforge.net/patch/?func=detailpatch&patch_id=103685&group_id=5470 I don't really like the idea to add such an attribute to the file object. Instead, I think that we should simply pass along Unicode objects as-is to the file object's .write() method and have the method take care of the conversion. This will break some code, since not all file-like objects expect non-strings as input to the .write() method, but I think this small code breakage is worth it as it allows us to redirect printing to streams which convert Unicode input into a specific output encoding. Thoughts ? -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From Barrett at stsci.edu Fri Feb 9 16:45:50 2001 From: Barrett at stsci.edu (Paul Barrett) Date: Fri, 9 Feb 2001 10:45:50 -0500 (EST) Subject: [Python-Dev] A Numerical Python BoF at Python 9 Message-ID: <14980.2832.659186.913578@nem-srvr.stsci.edu> I've been encouraged to set-up a BoF at Python 9 to discuss Numerical Python issues, specifically the design and implemenation of Numeric 2. I'd like to get a head count of those interested in attending such a BoF. So far there are 3 of us at STScI who are interested. -- Dr. Paul Barrett Space Telescope Science Institute Phone: 410-338-4475 ESS/Science Software Group FAX: 410-338-4767 Baltimore, MD 21218 From tiemann at redhat.com Fri Feb 9 16:53:53 2001 From: tiemann at redhat.com (Michael Tiemann) Date: Fri, 09 Feb 2001 10:53:53 -0500 Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta References: 
                              
                              Message-ID: <3A841291.CAAAA3AD@redhat.com> Based on the responses I have seen, it appears that this is not the kind of issue we want to address in a .1 release. I talked with Matt Wilson, the most active Python developer here, and he's all for moving to 2.x for our next .0 product, but for compatibility reasons it sounds like the option of swapping 1.5 for 2.0 as python, or the requirement that both 1.5 and 2.x need to be on the core OS CD (which is always short of space) is problematic. OTOH, if somebody can make a really definitive statement that I've misinterpreted the responses, and that 2.x _as_ python should just work, and if it doesn't, it's a bug that needs to shake out, I can address that with our OS team. M Sean 'Shaleh' Perry wrote: > > On 07-Feb-2001 Moshe Zadka wrote: > > On Wed, 07 Feb 2001 02:39:11 -0500, Guido van Rossum 
                              
                              > > wrote: > >> The binaries should be called python1.5 and python2.0, and python > >> should be a symlink to whatever is the default one. > > > > No they shouldn't. Joey Hess wrote to debian-python about the problems > > such a scheme caused when Perl5.005 and Perl 5.6 tried to coexist. > > Guido, the problem lies in we have no default. The user may install only 2.x > or 1.5. Scripts that handle the symlink can fail and then the user is left > without a python. In the case where only one is installed, this is easy. > however in a packaged system where any number of pythons could exist, problems > arise. > > Now, the problem with perl was a bad one because the thing in charge of the > symlink was itself a perl script. From nas at python.ca Fri Feb 9 17:21:36 2001 From: nas at python.ca (Neil Schemenauer) Date: Fri, 9 Feb 2001 08:21:36 -0800 Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta In-Reply-To: <3A841291.CAAAA3AD@redhat.com>; from tiemann@redhat.com on Fri, Feb 09, 2001 at 10:53:53AM -0500 References: 
                              
                              <3A841291.CAAAA3AD@redhat.com> Message-ID: <20010209082136.A15525@glacier.fnational.com> On Fri, Feb 09, 2001 at 10:53:53AM -0500, Michael Tiemann wrote: > OTOH, if somebody can make a really definitive statement that I've > misinterpreted the responses, and that 2.x _as_ python should just work, > and if it doesn't, it's a bug that needs to shake out, I can address that > with our OS team. I'm not sure what you mean by "should just work". Source compatibility between 1.5.2 and 2.0 is very high. The 2.0 NEWS file should list all the changes (single argument append and socket addresses are the big ones). The two versions are _not_ binary compatible. Python bytecode and extension modules have to be recompiled. I don't know if this is a problem for the Red Hat 7.1 release. Neil From esr at thyrsus.com Fri Feb 9 17:30:17 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Fri, 9 Feb 2001 11:30:17 -0500 Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta In-Reply-To: <20010209082136.A15525@glacier.fnational.com>; from nas@python.ca on Fri, Feb 09, 2001 at 08:21:36AM -0800 References: 
                              
                              <3A841291.CAAAA3AD@redhat.com> <20010209082136.A15525@glacier.fnational.com> Message-ID: <20010209113017.A13505@thyrsus.com> Neil Schemenauer 
                              
                              : > I'm not sure what you mean by "should just work". Source > compatibility between 1.5.2 and 2.0 is very high. The 2.0 NEWS > file should list all the changes (single argument append and > socket addresses are the big ones). And that change only affected a misfeature that was never documented and has been deprecated for some time. -- 
                              Eric S. Raymond No kingdom can be secured otherwise than by arming the people. The possession of arms is the distinction between a freeman and a slave. -- "Political Disquisitions", a British republican tract of 1774-1775 From fredrik at pythonware.com Fri Feb 9 17:37:16 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Fri, 9 Feb 2001 17:37:16 +0100 Subject: [Python-Dev] PEPS, version control, release intervals References: <200102080726.AAA27240@localhost.localdomain> Message-ID: <0aab01c092b6$917e4a90$e46940d5@hagrid> Uche Ogbuji wrote: > I don't know the tale of SOAP. soaplib seems stuck at 0.8. it's stuck on 0.9.5, which is stuck in a perforce repository, waiting for more interoperability testing. real soon now... Cheers /F From mal at lemburg.com Fri Feb 9 18:05:15 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 09 Feb 2001 18:05:15 +0100 Subject: [Python-Dev] Making the __import__ hook available early... Message-ID: <3A84234B.A7417A93@lemburg.com> There has been some discussion on the import-sig about using the __import__ hook for practically all imports, even early in the startup phase. This allows import hooks to completely take over the import mechanism even for the Python standard lib. Thomas Heller has provided a patch which I am currently checking. Basically all C level imports using PyImport_ImportModule() are then redirected to PyImport_Import() which uses the __import__ hook if available. My testing has so far not produced any strange effects. If anyone objects to this change, please speak up. Else, I'll check it in later today. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From thomas.heller at ion-tof.com Fri Feb 9 18:20:55 2001 From: thomas.heller at ion-tof.com (Thomas Heller) Date: Fri, 9 Feb 2001 18:20:55 +0100 Subject: [Python-Dev] Making the __import__ hook available early... References: <3A84234B.A7417A93@lemburg.com> Message-ID: <024a01c092bc$a903f650$e000a8c0@thomasnotebook> > There has been some discussion on the import-sig about using > the __import__ hook for practically all imports, even early > in the startup phase. This allows import hooks to completely take > over the import mechanism even for the Python standard lib. > > Thomas Heller has provided a patch which I am currently checking. > Basically all C level imports using PyImport_ImportModule() > are then redirected to PyImport_Import() which uses the __import__ > hook if available. > > My testing has so far not produced any strange effects. If anyone > objects to this change, please speak up. Else, I'll check it in later > today. One remaining difference I noted between running 'rt.bat -d' with the CVS version and the patched version is that the former reported [56931 refs] and the latter [56923 refs]. Thomas From mal at lemburg.com Fri Feb 9 18:35:56 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 09 Feb 2001 18:35:56 +0100 Subject: [Python-Dev] Making the __import__ hook available early... References: <3A84234B.A7417A93@lemburg.com> <024a01c092bc$a903f650$e000a8c0@thomasnotebook> Message-ID: <3A842A7C.46263743@lemburg.com> Thomas Heller wrote: > > > There has been some discussion on the import-sig about using > > the __import__ hook for practically all imports, even early > > in the startup phase. 
This allows import hooks to completely take > > over the import mechanism even for the Python standard lib. > > > > Thomas Heller has provided a patch which I am currently checking. > > Basically all C level imports using PyImport_ImportModule() > > are then redirected to PyImport_Import() which uses the __import__ > > hook if available. > > > > My testing has so far not produced any strange effects. If anyone > > objects to this change, please speak up. Else, I'll check it in later > > today. > > One remaining difference I noted between running 'rt.bat -d' with > the CVS version and the patched version is that the former > reported [56931 refs] and the latter [56923 refs]. This is probably due to the interning of strings; nothing to worry about, I guess. The patch implements the same refcounting as before the patch, so it is clearly not the cause of the different figures. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From jeremy at alum.mit.edu Fri Feb 9 18:45:04 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 9 Feb 2001 12:45:04 -0500 (EST) Subject: [Python-Dev] PEP status and python-dev summaries Message-ID: <14980.11424.134036.495048@w221.z064000254.bwi-md.dsl.cnc.net> I just scanned the responses on comp.lang.python to Andrew's announcement that he would stopping write the python-dev summaries. The respondents indicated that they found it hard to keep track of what was going on with python development, particularly PEPs. We're still learning how to use the PEP process. It's been better for 2.1 than for 2.0, but still has some problems. It sounds like the key problem has been involving the community outside python-dev. I would suggest a couple of changes, with the burden mostly falling on Barry and me: - Regular announcements of PEP creation and PEP status changes should be posted to comp.lang.python and c.l.p.a. - The release status PEPs should be kept up to date and regularly posted to the same groups. - We should have highly visible pointers from python.org to PEPs and other python development information. I'm sure we do this as part of the Zopification plans that Guido mentioned. - We should not approve PEPs that aren't announced on comp.lang.python with enough time for people to comment. Jeremy From skip at mojam.com Fri Feb 9 19:08:05 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 9 Feb 2001 12:08:05 -0600 (CST) Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta In-Reply-To: <20010209113017.A13505@thyrsus.com> References: 
                              <3A841291.CAAAA3AD@redhat.com> <20010209082136.A15525@glacier.fnational.com> <20010209113017.A13505@thyrsus.com> Message-ID: <14980.12805.682859.719700@beluga.mojam.com> Eric> Neil Schemenauer 
                              : >> I'm not sure what you mean by "should just work". Source >> compatibility between 1.5.2 and 2.0 is very high. The 2.0 NEWS file >> should list all the changes (single argument append and socket >> addresses are the big ones). Eric> And that change only affected a misfeature that was never Eric> documented and has been deprecated for some time. Perhaps, but it had worked "forever". In fact, I seems to recall that example code in the Python distribution used the two-argument connect call for sockets. Skip From akuchlin at mems-exchange.org Fri Feb 9 20:35:26 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Fri, 09 Feb 2001 14:35:26 -0500 Subject: [Python-Dev] dl module Message-ID: 
                              The dl module isn't automatically compiled by setup.py, and at least one patch on SourceForge adds it. Question: should it be compiled as a standard module? Using it can, according to the comments, cause core dumps if you're not careful. Question: does anyone actually use the dl module? If not, maybe it could be dropped. --amk From mal at lemburg.com Fri Feb 9 20:46:01 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 09 Feb 2001 20:46:01 +0100 Subject: [Python-Dev] PEP announcements, and summaries References: 
                              <3A7EF1A0.EDA4AD24@lemburg.com> <20010205141139.K733@thrak.cnri.reston.va.us> <14979.26950.415841.24705@cj42289-a.reston1.va.home.com> Message-ID: <3A8448F9.DCACBBBB@lemburg.com> "Fred L. Drake, Jr." wrote: > > Andrew Kuchling writes: > > * Work on the Batteries Included proposals & required infrastructure > > I'd certainly like to see some machinery that allows us to > incorporate arbitrary distutils-based packages in Python source and > binary distributions and have them built, tested, and installed > alongside the interpreter core. > I think this would be the right approach to deal with many > components, including the XML and curses components. Good idea... but then I've made the experience that different tools need different distutils command interfaces, e.g. my mx tools will use customized commands which provide extra functionality (e.g. some auto-configuration code) which is not present in the standard distutils distro. As a result we will have a common interface point (setup.py), but not necessarily the same commands and/or options. Still, this situation is already *much* better than having different install mechanisms altogether. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mal at lemburg.com Fri Feb 9 20:54:17 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 09 Feb 2001 20:54:17 +0100 Subject: [Python-Dev] dl module References: 
                              Message-ID: <3A844AE9.AE2DD04@lemburg.com> Andrew Kuchling wrote: > > The dl module isn't automatically compiled by setup.py, and at least > one patch on SourceForge adds it. > > Question: should it be compiled as a standard module? Using it can, > according to the comments, cause core dumps if you're not careful. > > Question: does anyone actually use the dl module? If not, maybe it > could be dropped. For Windows there's a similar package (calldll I think it is called). Perhaps someone should take over maintenance for it and then make it available via Parnassus ?! The same could be done for e.g. soundex and other deprecated modules. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From guido at digicool.com Fri Feb 9 20:58:58 2001 From: guido at digicool.com (Guido van Rossum) Date: Fri, 09 Feb 2001 14:58:58 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/plat-irix5 cddb.py,1.10,1.11 In-Reply-To: Your message of "Fri, 09 Feb 2001 14:39:36 EST." <20010209143936.B3340@thrak.cnri.reston.va.us> References: 
                              <20010209143936.B3340@thrak.cnri.reston.va.us> Message-ID: <200102091958.OAA23039@cj20424-a.reston1.va.home.com> > On Fri, Feb 09, 2001 at 08:44:51AM -0800, Eric S. Raymond wrote: > >String method conversion. Andrew replied: > Regarding the large number of string method conversion check-ins: I > presume this is something else you discussed at LWE with Guido. Was > there anything else discussed that python-dev should know about, or > can help with? This was Eric's own initiative -- I was just as surprised as you, given the outcome of the last discussion on python-dev specifically about this. However, I don't mind that it's done, as long as there's no code breakage. Clearly, Eric went a bit fast for some modules (checking in syntax errors :-). --Guido van Rossum (home page: http://www.python.org/~guido/) From esr at thyrsus.com Fri Feb 9 21:03:29 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Fri, 9 Feb 2001 15:03:29 -0500 Subject: [Python-Dev] Curious comment in some old libraries Message-ID: <20010209150329.A15086@thyrsus.com> Pursuant to a conversation Guido and I had in New York, I have gone through and converted the Python library code to use string methods wherever possible, removing a whole boatload of "import string" statements in the process. (This is one of those times when it's a really, *really* good thing that most modules have an attached self-test. I supplied a couple of these where they were lacking, and improved several of the existing test jigs.) One side-effect of the change is that nothing in the library uses splitfields or joinfields anymore. But in the process I found a curious contradiction: stringold.py:# NB: split(s) is NOT the same as splitfields(s, ' ')! stringold.py: (split and splitfields are synonymous) stringold.py:splitfields = split string.py:# NB: split(s) is NOT the same as splitfields(s, ' ')! string.py: (split and splitfields are synonymous) string.py:splitfields = split It certainly looks to me as though the "NB" comment is out of date. Is there some subtle and wicked reason it has not been removed? -- 
                              Eric S. Raymond This would be the best of all possible worlds, if there were no religion in it. -- John Adams, in a letter to Thomas Jefferson. From tim.one at home.com Fri Feb 9 21:04:15 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 9 Feb 2001 15:04:15 -0500 Subject: [Python-Dev] RE: global, was Re: None assigment In-Reply-To: <961fg0$etd$1@nnrp1.deja.com> Message-ID: 
                              [Jeremy Hylton] > As Tim will explain in a post that hasn't made it to DejaNews yet, > earlier versions of Python did not define Neither does 2.1: changing the implementation didn't change the Ref Man, and the Ref Man still declines to define the semantics or promise that the behavior today will persist tomorrow. > the behavior of assignment Or any other reference. > before a global statement. > ... > It's unclear what we should happen in this case. It could be an error, > since it's dodgy and the behavior will change with 2.1. "Undefined behavior" is unPythonic and should be wiped out whenever possible. That these things were dodgy was known from the start, but when the language was just getting off the ground there were far more important things to do than generate errors for every conceivable abuse of the language. Now that the language is still getting off the ground 
                              , that's still true. But changes in the meantime have made it much easier to identify some of these cases; like: > The recent round of compiler changes uses separate passes to determine a > name's scope and to generate code for loads and stores. The behavior of "global x" after a reference to x has never been defined, but it's never been reasonably easy to identify and complain about it. Now that name classification is done by design instead of by an afterthought "optimization pass", it should be much easier to gripe. +1 on making this an error now. And if 2.1 is relaxed to again allow "import *" at function scope in some cases, either that should at least raise a warning, or the Ref Man should be changed to say that's a defined use of the language. ambiguity-sucks-ly y'rs - tim From akuchlin at cnri.reston.va.us Fri Feb 9 21:04:54 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Fri, 9 Feb 2001 15:04:54 -0500 Subject: [Python-Dev] Curious comment in some old libraries In-Reply-To: <20010209150329.A15086@thyrsus.com>; from esr@thyrsus.com on Fri, Feb 09, 2001 at 03:03:29PM -0500 References: <20010209150329.A15086@thyrsus.com> Message-ID: <20010209150454.E3340@thrak.cnri.reston.va.us> On Fri, Feb 09, 2001 at 03:03:29PM -0500, Eric S. Raymond wrote: >Pursuant to a conversation Guido and I had in New York, I have gone through >string.py:# NB: split(s) is NOT the same as splitfields(s, ' ')! > >It certainly looks to me as though the "NB" comment is out of date. >Is there some subtle and wicked reason it has not been removed? Actually I think it's correct: >>> import string >>> string.split('a b c') ['a', 'b', 'c'] >>> string.split('a b c', ' ') ['a', '', 'b', 'c'] With no separator, it splits on runs of whitespace. With an explicit separator, it splits on *exactly* that separator. --amk From fdrake at acm.org Fri Feb 9 21:03:13 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Fri, 9 Feb 2001 15:03:13 -0500 (EST) Subject: [Python-Dev] Curious comment in some old libraries In-Reply-To: <20010209150329.A15086@thyrsus.com> References: <20010209150329.A15086@thyrsus.com> Message-ID: <14980.19713.280194.344112@cj42289-a.reston1.va.home.com> Eric S. Raymond writes: > string.py:# NB: split(s) is NOT the same as splitfields(s, ' ')! > string.py: (split and splitfields are synonymous) > string.py:splitfields = split > > It certainly looks to me as though the "NB" comment is out of date. > Is there some subtle and wicked reason it has not been removed? The comment is correct. splitfields(s) is synonymous with split(s), and splitfields(s, ' ') is synonymous with split(s, ' '). If the second arg is omitted, any stretch of whitespace is used as the separator, but if ' ' is supplied, exactly one space is used to split fields. split(s, None) is synonymous with split(s), splitfields(s), and splitfields(s, None). -Fred -- Fred L. Drake, Jr. 
                              PythonLabs at Digital Creations From guido at digicool.com Fri Feb 9 21:08:11 2001 From: guido at digicool.com (Guido van Rossum) Date: Fri, 09 Feb 2001 15:08:11 -0500 Subject: [Python-Dev] Curious comment in some old libraries In-Reply-To: Your message of "Fri, 09 Feb 2001 15:03:29 EST." <20010209150329.A15086@thyrsus.com> References: <20010209150329.A15086@thyrsus.com> Message-ID: <200102092008.PAA23192@cj20424-a.reston1.va.home.com> > Pursuant to a conversation Guido and I had in New York, I have gone > through and converted the Python library code to use string methods > wherever possible, removing a whole boatload of "import string" > statements in the process. (But note that I didn't ask you to go ahead and do it. Last time when I started doing this I got quite a few comments from python-dev readers who thought it was a bad idea, so I backed off. It's up to you to convince them now. :-) > (This is one of those times when it's a really, *really* good thing that > most modules have an attached self-test. I supplied a couple of these > where they were lacking, and improved several of the existing test jigs.) Excellent! > One side-effect of the change is that nothing in the library uses splitfields > or joinfields anymore. But in the process I found a curious contradiction: > > stringold.py:# NB: split(s) is NOT the same as splitfields(s, ' ')! > stringold.py: (split and splitfields are synonymous) > stringold.py:splitfields = split > string.py:# NB: split(s) is NOT the same as splitfields(s, ' ')! > string.py: (split and splitfields are synonymous) > string.py:splitfields = split > > It certainly looks to me as though the "NB" comment is out of date. > Is there some subtle and wicked reason it has not been removed? Well, split and splitfields really *are* synonymous, but split(s, ' ') is *not* the same as split(s). The latter is the same as split(s, None) which splits on stretches of arbitrary whitespace and ignores leading and trailing whitespace. So the NB is still true... --Guido van Rossum (home page: http://www.python.org/~guido/) From barry at digicool.com Fri Feb 9 21:15:47 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 9 Feb 2001 15:15:47 -0500 Subject: [Python-Dev] Curious comment in some old libraries References: <20010209150329.A15086@thyrsus.com> Message-ID: <14980.20467.174809.644067@anthem.wooz.org> >>>>> "ESR" == Eric S Raymond 
                              writes: ESR> It certainly looks to me as though the "NB" comment is out of ESR> date. Is there some subtle and wicked reason it has not been ESR> removed? Look at stropmodule.c. split and splitfields have been identical at least since 08-Aug-1996. :) -------------------- snip snip -------------------- revision 2.23 date: 1996/08/08 19:16:15; author: guido; state: Exp; lines: +93 -17 Added lstrip() and rstrip(). Extended split() (and hence splitfields(), which is the same function) to support an optional third parameter giving the maximum number of delimiters to parse. -------------------- snip snip -------------------- -Barry From tim.one at home.com Fri Feb 9 21:19:25 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 9 Feb 2001 15:19:25 -0500 Subject: [Python-Dev] Curious comment in some old libraries In-Reply-To: <20010209150329.A15086@thyrsus.com> Message-ID: 
                              [Eric S. Raymond] > ... > But in the process I found a curious contradiction: > > stringold.py:# NB: split(s) is NOT the same as splitfields(s, ' ')! > stringold.py: (split and splitfields are synonymous) > stringold.py:splitfields = split > string.py:# NB: split(s) is NOT the same as splitfields(s, ' ')! > string.py: (split and splitfields are synonymous) > string.py:splitfields = split > > It certainly looks to me as though the "NB" comment is out of date. > Is there some subtle and wicked reason it has not been removed? It's 100% accurate, but 99% misleading. Plain 100% accurate would be: # NB: split(s) is NOT the same as split(s, ' '). # And, by the way, since split is the same as splitfields, # it follows that # split(s) is NOT the same as splitfields(s, ' '). # either. Even better is to get rid of the NB comments, so I just did that. Thanks for pointing it out! From esr at thyrsus.com Fri Feb 9 21:23:35 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Fri, 9 Feb 2001 15:23:35 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/plat-irix5 cddb.py,1.10,1.11 In-Reply-To: <200102091958.OAA23039@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Fri, Feb 09, 2001 at 02:58:58PM -0500 References: 
                              <20010209143936.B3340@thrak.cnri.reston.va.us> <200102091958.OAA23039@cj20424-a.reston1.va.home.com> Message-ID: <20010209152335.C15205@thyrsus.com> Guido van Rossum 
                              : > Clearly, Eric went a bit fast for some modules > (checking in syntax errors :-). It was the oddest thing. The conversion was so mechanical that I found my attention wandering -- the result (as I noted in a couple of checkin comments) was that I occasionally hit ^C^C and triggered the commit a step too early. Sometimes Emacs makes things too easy! There were a couple of platform-specific modules I couldn't test completely, stuff like the two cddb.py versions. Other than that I'm pretty sure I didn't break anything. Where the test jigs looked lacking I beefed them up some. The only string imports left are the ones that have to be there because the code is using a string module constant like string.whitespace or one of the two odd functions that don't exist as methods, zfill and maketrans. Are there any plans to introduce boolean-valued string methods corresponding to the ctype.h functions? That would make it possible to remove most of the remaining imports. This was like old times. pulling an all-nighter to clean up a language library. I did a *lot* of work like this on Emacs back in the early 1990s. Count your blessings; the Python libraries are in far better shape. -- 
                              Eric S. Raymond Certainly one of the chief guarantees of freedom under any government, no matter how popular and respected, is the right of the citizens to keep and bear arms. [...] the right of the citizens to bear arms is just one guarantee against arbitrary government and one more safeguard against a tyranny which now appears remote in America, but which historically has proved to be always possible. -- Hubert H. Humphrey, 1960 From guido at digicool.com Fri Feb 9 21:27:16 2001 From: guido at digicool.com (Guido van Rossum) Date: Fri, 09 Feb 2001 15:27:16 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/plat-irix5 cddb.py,1.10,1.11 In-Reply-To: Your message of "Fri, 09 Feb 2001 15:23:35 EST." <20010209152335.C15205@thyrsus.com> References: 
                              <20010209143936.B3340@thrak.cnri.reston.va.us> <200102091958.OAA23039@cj20424-a.reston1.va.home.com> <20010209152335.C15205@thyrsus.com> Message-ID: <200102092027.PAA23403@cj20424-a.reston1.va.home.com> > The only string imports left are the ones that have to be there because > the code is using a string module constant like string.whitespace or > one of the two odd functions that don't exist as methods, zfill and > maketrans. Are there any plans to introduce boolean-valued string > methods corresponding to the ctype.h functions? That would make > it possible to remove most of the remaining imports. Yes, these already exist, e.g. s.islower(), s.isspace(). Note that they are locale dependent. > This was like old times. pulling an all-nighter to clean up a language > library. I did a *lot* of work like this on Emacs back in the early > 1990s. Count your blessings; the Python libraries are in far better > shape. Thanks! --Guido van Rossum (home page: http://www.python.org/~guido/) From fredrik at effbot.org Fri Feb 9 21:45:50 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Fri, 9 Feb 2001 21:45:50 +0100 Subject: [Python-Dev] Curious comment in some old libraries References: <20010209150329.A15086@thyrsus.com> <200102092008.PAA23192@cj20424-a.reston1.va.home.com> Message-ID: <00e401c092d9$4aaa30c0$e46940d5@hagrid> guido wrote: > (But note that I didn't ask you to go ahead and do it. Last time when > I started doing this I got quite a few comments from python-dev > readers who thought it was a bad idea, so I backed off. It's up to > you to convince them now. :-) footnote: SRE is designed to work (and is being used) under 1.5.2. since I'd rather not maintain two separate versions, I hope it's okay to back out of some of eric's changes... Cheers /F From guido at digicool.com Fri Feb 9 21:46:45 2001 From: guido at digicool.com (Guido van Rossum) Date: Fri, 09 Feb 2001 15:46:45 -0500 Subject: [Python-Dev] Curious comment in some old libraries In-Reply-To: Your message of "Fri, 09 Feb 2001 21:45:50 +0100." <00e401c092d9$4aaa30c0$e46940d5@hagrid> References: <20010209150329.A15086@thyrsus.com> <200102092008.PAA23192@cj20424-a.reston1.va.home.com> <00e401c092d9$4aaa30c0$e46940d5@hagrid> Message-ID: <200102092046.PAA23571@cj20424-a.reston1.va.home.com> > footnote: SRE is designed to work (and is being used) > under 1.5.2. since I'd rather not maintain two separate > versions, I hope it's okay to back out of some of eric's > changes... Fine. Please add a comment to the "import string" statement to explain this! --Guido van Rossum (home page: http://www.python.org/~guido/) From thomas.heller at ion-tof.com Fri Feb 9 21:48:52 2001 From: thomas.heller at ion-tof.com (Thomas Heller) Date: Fri, 9 Feb 2001 21:48:52 +0100 Subject: [Python-Dev] Curious comment in some old libraries References: <20010209150329.A15086@thyrsus.com> <200102092008.PAA23192@cj20424-a.reston1.va.home.com> <00e401c092d9$4aaa30c0$e46940d5@hagrid> Message-ID: <04b601c092d9$b5f2ca40$e000a8c0@thomasnotebook> > > footnote: SRE is designed to work (and is being used) > under 1.5.2. since I'd rather not maintain two separate > versions, I hope it's okay to back out of some of eric's > changes... The same is documented for distutils... Thomas From esr at thyrsus.com Fri Feb 9 22:17:18 2001 From: esr at thyrsus.com (Eric S. 
Raymond) Date: Fri, 9 Feb 2001 16:17:18 -0500 Subject: [Python-Dev] Curious comment in some old libraries In-Reply-To: <00e401c092d9$4aaa30c0$e46940d5@hagrid>; from fredrik@effbot.org on Fri, Feb 09, 2001 at 09:45:50PM +0100 References: <20010209150329.A15086@thyrsus.com> <200102092008.PAA23192@cj20424-a.reston1.va.home.com> <00e401c092d9$4aaa30c0$e46940d5@hagrid> Message-ID: <20010209161718.F15205@thyrsus.com> Fredrik Lundh 
                              : > footnote: SRE is designed to work (and is being used) > under 1.5.2. since I'd rather not maintain two separate > versions, I hope it's okay to back out of some of eric's > changes... Not a problem for me. -- 
                              Eric S. Raymond It will be of little avail to the people, that the laws are made by men of their own choice, if the laws be so voluminous that they cannot be read, or so incoherent that they cannot be understood; if they be repealed or revised before they are promulgated, or undergo such incessant changes that no man, who knows what the law is to-day, can guess what it will be to-morrow. Law is defined to be a rule of action; but how can that be a rule, which is little known, and less fixed? -- James Madison, Federalist Papers 62 From tim.one at home.com Fri Feb 9 23:07:43 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 9 Feb 2001 17:07:43 -0500 Subject: [Python-Dev] Making the __import__ hook available early... In-Reply-To: <3A84234B.A7417A93@lemburg.com> Message-ID: 
                              [M.-A. Lemburg] > There has been some discussion on the import-sig about using > the __import__ hook for practically all imports, even early > in the startup phase. This allows import hooks to completely take > over the import mechanism even for the Python standard lib. > > Thomas Heller has provided a patch which I am currently checking. > Basically all C level imports using PyImport_ImportModule() > are then redirected to PyImport_Import() which uses the __import__ > hook if available. > > My testing has so far not produced any strange effects. If anyone > objects to this change, please speak up. Else, I'll check it in > later today. I don't understand the change, from the above. Neither exactly what it does nor why it's being done. So, impossible to say. Was the patch posted to SourceForge? Does it have a bad effect on startup time? Is there any *conceivable* way in which it could change semantics? Or, if not, what's the point? From skip at mojam.com Fri Feb 9 23:21:30 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 9 Feb 2001 16:21:30 -0600 (CST) Subject: [Python-Dev] dl module In-Reply-To: <3A844AE9.AE2DD04@lemburg.com> References: 
                              <3A844AE9.AE2DD04@lemburg.com> Message-ID: <14980.28010.224576.400800@beluga.mojam.com> MAL> The same could be done for e.g. soundex ... http://musi-cal.mojam.com/~skip/python/soundex.py S From mal at lemburg.com Fri Feb 9 23:32:14 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 09 Feb 2001 23:32:14 +0100 Subject: [Python-Dev] Making the __import__ hook available early... References: 
                              Message-ID: <3A846FEE.5BF5615A@lemburg.com> Tim Peters wrote: > > [M.-A. Lemburg] > > There has been some discussion on the import-sig about using > > the __import__ hook for practically all imports, even early > > in the startup phase. This allows import hooks to completely take > > over the import mechanism even for the Python standard lib. > > > > Thomas Heller has provided a patch which I am currently checking. > > Basically all C level imports using PyImport_ImportModule() > > are then redirected to PyImport_Import() which uses the __import__ > > hook if available. > > > > My testing has so far not produced any strange effects. If anyone > > objects to this change, please speak up. Else, I'll check it in > > later today. > > I don't understand the change, from the above. Neither exactly what it does > nor why it's being done. So, impossible to say. Was the patch posted to > SourceForge? Does it have a bad effect on startup time? Is there any > *conceivable* way in which it could change semantics? Or, if not, what's > the point? I've already checked it in, but for completeness ;-) ... The problem was that tools like Thomas Heller's pyexe, Gordon's installer and other similar tools which try to pack Python byte code into a single archive need to provide an import hook which then redirects imports to the archive. This was already well possible for third-party code, but some of the standard modules in the Python lib used PyImport_ImportModule() directly to import modules and this prevented the inclusion of the referenced modules in the archive. When no import hook is in place, the patch does not have any effect -- semantics are the same as before. Import performance for those few cases where PyImport_ImportModule() was used will be a tad slower, but probably negligable due to the overhead caused by the file IO. With the hook in place, the patch now properly redirects these low-level imports to the __import__ hook. Semantics will then be those which the __import__ hook defines. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From esr at thyrsus.com Fri Feb 9 23:51:52 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Fri, 9 Feb 2001 17:51:52 -0500 Subject: [Python-Dev] Propaganda of the deed and other topics In-Reply-To: <200102092008.PAA23192@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Fri, Feb 09, 2001 at 03:08:11PM -0500 References: <20010209150329.A15086@thyrsus.com> <200102092008.PAA23192@cj20424-a.reston1.va.home.com> Message-ID: <20010209175152.H15205@thyrsus.com> Guido van Rossum 
                              : > (But note that I didn't ask you to go ahead and do it. Last time when > I started doing this I got quite a few comments from python-dev > readers who thought it was a bad idea, so I backed off. It's up to > you to convince them now. :-) I'd forgotten that discussion. But, as a general comment... Propaganda of the deed, Guido. Sometimes this crew is too reflexively conservative for my taste. I have a repertoire of different responses when my desire to make progress collides with such conservatism; one of them, when I don't see substantive objections and believe I can deal with the political fallout more easily than living with the technical problem, is to just freakin' go ahead and *do* it. This makes some people nervous. That's OK with me -- I'd rather be seen as a bit of a loose cannon than just another lump of inertia. (If nothing else, I find the primate-territoriality reactions I get from the people I occasionally piss off entertaining to watch.) I pick my shots carefully, however, and as a result people usually conclude after the fact that this week's cowboy maneuver was a good thing even if they were a touch irritated with me at the time. In the particular case of the string-method cleanup, I did get the impression in New York that you wanted to attack this problem but for some reason felt you could not. I am strongly predisposed to be 
                              helpful
                               in such situations, and let the chips fall where they may. So try not to be surprised if I do more stuff like this -- in fact, if you really don't want me to go cowboy on you occasionally you probably shouldn't talk about your wish-list in my presence. On the other hand, feel very free to reverse me and slap me down if I pull something that oversteps the bounds of prudence or politeness. Firstly, I'm not thin-skinned that way; nobody with my working style can afford to be. Secondly, as the BDFL you have both the right and the responsibility to rein me in; if I weren't cool with that I wouldn't be here. > > (This is one of those times when it's a really, *really* good thing that > > most modules have an attached self-test. I supplied a couple of these > > where they were lacking, and improved several of the existing test jigs.) > > Excellent! One of the possible futures I see for myself in this group, if both of the library PEPs you and I have contemplated go through and become policy, is as Keeper Of The Libraries analogously to the way that Fred Drake is Keeper Of The Documentation. I would enjoy this role; if I grow into it, you can expect to see me do a lot more active maintainence of this kind. There's another level to this that I should try to explain...among the known hazards of being an international celebrity and famously successful project lead is that one can start to believe one is too good to do ordinary work. In order to prevent myself from become bogotified in this way, I try to have at least project going at all times in which I am a core contributor but *not* the top banana. And I deliberately look for a stable to muck out occasionally, as I did last night and as I would do on a larger scale if I were the library keeper. Python looks like being my `follower' project for the foreseeable future. Take that as a compliment, Guido, because it is meant as one both professionally and personally. This crew may be (probably is) the most tasteful, talented and mature development group I have ever had the privilege to work with. I still rue the fact that I couldn't get you guys to come work for VA... -- 
                              Eric S. Raymond Alcohol still kills more people every year than all `illegal' drugs put together, and Prohibition only made it worse. Oppose the War On Some Drugs! From tim.one at home.com Sat Feb 10 00:13:02 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 9 Feb 2001 18:13:02 -0500 Subject: [Python-Dev] Making the __import__ hook available early... In-Reply-To: <3A846FEE.5BF5615A@lemburg.com> Message-ID: 
                              [MAL] > I've already checked it in, but for completeness ;-) ... Thanks for the explanation. Sounds like a good idea to me too! From jeremy at alum.mit.edu Sat Feb 10 00:42:14 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 9 Feb 2001 18:42:14 -0500 (EST) Subject: [Python-Dev] Re: [Bug #131480] __test__() should auto-exec at compile time In-Reply-To: 
                              References: 
                              Message-ID: <14980.32854.34108.539649@w221.z064000254.bwi-md.dsl.cnc.net> I just closed the bug report quoted below with the following response: I don't agree that unit tests should run automatically. Nor do I think adding magic to the language to support unit tests is necessary when it is trivial to add some external mechanism. I guess this topic could be opened up for discussion if someone else disagrees with me. Regardless, though, it's too late for 2.1. Jeremy >>>>> ">" == noreply 
                              writes: >> Bug #131480, was updated on 2001-Feb-07 18:44 Here is a current >> snapshot of the bug. >> Details: We can make unit testing as simple as writing the test >> code! Everyone agrees that unit tests are worth while. Python >> does a great job removing tedium from the job of the programmer. >> Unit test should run automatically. Here's a method everyone can >> agree to: >> Have the compiler check each module for a funtion with the >> special name '__test__' that takes no arguments. If it finds it >> it calls it. >> The problem of unit testing divides esiliy into two pieces: How >> to create the code and how to execute the code. There are many >> options in creating the code but I have never seen any nice >> solutions to run the code automatically "if __name__ == >> '__main__':" >> doesn't count since you have to do somthing special to call the >> code i.e. >> run it as a script. There are of course ways to run the test >> code automatically but the ways I have figured out run it on >> every import (way too often especially for long tests). I >> imagine there is a way to check to see if the module is loaded >> from a .pyc file and execute test code accouringly but this seems >> a bit kludgy. Here are the benifits of compile time >> auto-execution: >> - Compatible with every testing framework. >> - Called always and only when it needs to be executed. >> - So simple even micro projects 'scripts' can take advantage >> Disadvantages: >> - Another special name, '__test__' >> - If there are more please tell me! >> I looked around the source-code and think I see the location >> where we can do this. It's would be a piece of cake and the >> advantages far outway the disadvantages. If I get some support >> I'd love to incorporate the fix. >> Justin Shaw thomas.j.shaw at aero.org From jeremy at alum.mit.edu Sat Feb 10 01:28:12 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 9 Feb 2001 19:28:12 -0500 (EST) Subject: [Python-Dev] Python 2.1 release schedule Message-ID: <14980.35612.516421.741505@w221.z064000254.bwi-md.dsl.cnc.net> I updated the Python 2.1 release schedule (PEP 226): http://python.sourceforge.net/peps/pep-0226.html The schedule now has some realistic future release dates. The plan is to move to beta 1 before the Python conference, probably issue a second beta in mid- to late-March, and aim for a final release sometime in April. The six-week period between first beta and final release is about as long as the beta period for 2.0, which had many more significant changes. I have also added a section on open issues as we had in the 2.0 release schedule. If you are responsible for any major changes or fixes before the first beta, please add them to that section or send me mail about them. Remember that we are in feature freeze; only bug fixes between now and beta 1. Jeremy From tim.one at home.com Sat Feb 10 01:18:54 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 9 Feb 2001 19:18:54 -0500 Subject: [Python-Dev] Re: [Bug #131480] __test__() should auto-exec at compile time In-Reply-To: <14980.32854.34108.539649@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: 
                              [Jeremy Hylton] > I just closed the bug report quoted below with the following response: > > I don't agree that unit tests should run automatically. Nor do I > think adding magic to the language to support unit tests is > necessary when it is trivial to add some external mechanism. > > I guess this topic could be opened up for discussion if someone else > disagrees with me. Regardless, though, it's too late for 2.1. Justin had earlier brought this up on Python-Help. I'll attach a nice PDF doc he sent with more detail than the bug report. I had asked him to consider a PEP and have a public debate first; don't know why he filed a bug report instead; I recall I got more email about this, but it's so far down the stack now I'm not sure I'll ever find it again 
                              . FWIW, I don't believe we should make this magical either, and there are practical problems that were overlooked; e.g., when Lib/ is on a read-only filesystem, Python *always* recompiles the libraries upon import. Not insurmountable, but again points out the need for open debate first. Justin, take it up on comp.lang.python. -------------- next part -------------- A non-text attachment was scrubbed... Name: IntegratedUnitTesting.pdf Type: application/pdf Size: 98223 bytes Desc: not available URL: 
                              From fdrake at acm.org Sat Feb 10 04:09:58 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Fri, 9 Feb 2001 22:09:58 -0500 (EST) Subject: [Python-Dev] dl module In-Reply-To: <14980.28010.224576.400800@beluga.mojam.com> References: 
                              <3A844AE9.AE2DD04@lemburg.com> <14980.28010.224576.400800@beluga.mojam.com> Message-ID: <14980.45318.877412.703109@cj42289-a.reston1.va.home.com> Skip Montanaro writes: > MAL> The same could be done for e.g. soundex ... > > http://musi-cal.mojam.com/~skip/python/soundex.py Given that Skip has published this module and that the C version can always be retrieved from CVS if anyone really wants it, and that soundex has been listed in the "Obsolete Modules" section in the documentation for quite some time, this is probably a good time to remove it from the source distribution. -Fred -- Fred L. Drake, Jr. 
                              PythonLabs at Digital Creations From fdrake at acm.org Sat Feb 10 04:21:20 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Fri, 9 Feb 2001 22:21:20 -0500 (EST) Subject: [Python-Dev] Propaganda of the deed and other topics In-Reply-To: <20010209175152.H15205@thyrsus.com> References: <20010209150329.A15086@thyrsus.com> <200102092008.PAA23192@cj20424-a.reston1.va.home.com> <20010209175152.H15205@thyrsus.com> Message-ID: <14980.46000.429567.347664@cj42289-a.reston1.va.home.com> Eric S. Raymond writes: > of them, when I don't see substantive objections and believe I can > deal with the political fallout more easily than living with the > technical problem, is to just freakin' go ahead and *do* it. I think this was the right thing to do in this case. A slap on the back for you! > One of the possible futures I see for myself in this group, if both of > the library PEPs you and I have contemplated go through and become > policy, is as Keeper Of The Libraries analogously to the way that Fred You haven't developed the right attitude, then: my self-granted title for this aspect of my efforts is "Documentation Tsar" -- and I don't mind exercising editorial control with my attitude firmly in place! ;-) > Python looks like being my `follower' project for the foreseeable > future. Take that as a compliment, Guido, because it is meant as one > both professionally and personally. This crew may be (probably is) > the most tasteful, talented and mature development group I have ever Thank you! That's a real compliment for all of us. > had the privilege to work with. I still rue the fact that I couldn't > get you guys to come work for VA... You & others from VA came mighty close! -Fred -- Fred L. Drake, Jr. 
                              PythonLabs at Digital Creations From mal at lemburg.com Sat Feb 10 13:43:39 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Sat, 10 Feb 2001 13:43:39 +0100 Subject: [Python-Dev] Removing the implicit str() call from printing API References: <3A83F7DA.A94AB88E@lemburg.com> Message-ID: <3A85377B.BC6EAB9B@lemburg.com> So far, noone has commented on this idea. I would like to go ahead and check in patch which passes through Unicode objects to the file-object's .write() method while leaving the standard str() call for all other objects in place. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ "M.-A. Lemburg" wrote: > > There was some discussion about this subject before, but nothing > much happened, so here we go again... > > Printing in Python is a rather complicated task. It involves many > different APIs, flags, etc. Deep down in the printing machinery > there is a hidden call to str() which converts the to be printed > object into a string object. > > This is fine for non-string objects like numbers, but causes trouble > when it comes to printing Unicode objects due to the auto-conversions > this causes. > > There is a patch on SF which tries to remedy this, but it introduces > a special attribute to maintain backward compatibility: > > http://sourceforge.net/patch/?func=detailpatch&patch_id=103685&group_id=5470 > > I don't really like the idea to add such an attribute to the > file object. Instead, I think that we should simply pass along > Unicode objects as-is to the file object's .write() method and > have the method take care of the conversion. > > This will break some code, since not all file-like objects expect > non-strings as input to the .write() method, but I think this small > code breakage is worth it as it allows us to redirect printing > to streams which convert Unicode input into a specific output > encoding. > > Thoughts ? > > -- > Marc-Andre Lemburg > ______________________________________________________________________ > Company: http://www.egenix.com/ > Consulting: http://www.lemburg.com/ > Python Pages: http://www.lemburg.com/python/ > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev From fredrik at effbot.org Sat Feb 10 14:01:13 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Sat, 10 Feb 2001 14:01:13 +0100 Subject: [Python-Dev] Removing the implicit str() call from printing API References: <3A83F7DA.A94AB88E@lemburg.com> <3A85377B.BC6EAB9B@lemburg.com> Message-ID: <010f01c09361$8ff82910$e46940d5@hagrid> mal wrote: > I would like to go ahead and check in patch which passes through > Unicode objects to the file-object's .write() method while leaving > the standard str() call for all other objects in place. +0 for Python 2.1 +1 for Python 2.2 Cheers /F From guido at digicool.com Sat Feb 10 15:03:03 2001 From: guido at digicool.com (Guido van Rossum) Date: Sat, 10 Feb 2001 09:03:03 -0500 Subject: [Python-Dev] Removing the implicit str() call from printing API In-Reply-To: Your message of "Sat, 10 Feb 2001 14:01:13 +0100." 
<010f01c09361$8ff82910$e46940d5@hagrid> References: <3A83F7DA.A94AB88E@lemburg.com> <3A85377B.BC6EAB9B@lemburg.com> <010f01c09361$8ff82910$e46940d5@hagrid> Message-ID: <200102101403.JAA27043@cj20424-a.reston1.va.home.com> > mal wrote: > > > I would like to go ahead and check in patch which passes through > > Unicode objects to the file-object's .write() method while leaving > > the standard str() call for all other objects in place. > > +0 for Python 2.1 > +1 for Python 2.2 I have not had the time to review any of the arguments for this, and I would be very disappointed if this happened without my involvement. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Sat Feb 10 15:15:19 2001 From: guido at digicool.com (Guido van Rossum) Date: Sat, 10 Feb 2001 09:15:19 -0500 Subject: [Python-Dev] dl module In-Reply-To: Your message of "Fri, 09 Feb 2001 22:09:58 EST." <14980.45318.877412.703109@cj42289-a.reston1.va.home.com> References: 
                              <3A844AE9.AE2DD04@lemburg.com> <14980.28010.224576.400800@beluga.mojam.com> <14980.45318.877412.703109@cj42289-a.reston1.va.home.com> Message-ID: <200102101415.JAA27165@cj20424-a.reston1.va.home.com> > Skip Montanaro writes: > > MAL> The same could be done for e.g. soundex ... > > > > http://musi-cal.mojam.com/~skip/python/soundex.py > > Given that Skip has published this module and that the C version can > always be retrieved from CVS if anyone really wants it, and that > soundex has been listed in the "Obsolete Modules" section in the > documentation for quite some time, this is probably a good time to > remove it from the source distribution. Yes, go ahead. --Guido van Rossum (home page: http://www.python.org/~guido/) From mal at lemburg.com Sat Feb 10 15:22:30 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Sat, 10 Feb 2001 15:22:30 +0100 Subject: [Python-Dev] Removing the implicit str() call from printing API References: <3A83F7DA.A94AB88E@lemburg.com> <3A85377B.BC6EAB9B@lemburg.com> <010f01c09361$8ff82910$e46940d5@hagrid> <200102101403.JAA27043@cj20424-a.reston1.va.home.com> Message-ID: <3A854EA6.B8A8F7E2@lemburg.com> Guido van Rossum wrote: > > > mal wrote: > > > > > I would like to go ahead and check in patch which passes through > > > Unicode objects to the file-object's .write() method while leaving > > > the standard str() call for all other objects in place. > > > > +0 for Python 2.1 > > +1 for Python 2.2 > > I have not had the time to review any of the arguments for this, and I > would be very disappointed if this happened without my involvement. Ok, I'll postpone this for 2.2 then... don't want to disappoint our BDFL ;-) Perhaps we should rethink the whole complicated printing machinery in Python for 2.2 and come up with a more generic solution to the problem of letting to-be-printed objects pass through to the stream objects ?! Note that this is needed in order to be able to redirect sys.stdout to a codec which then converts Unicode to some external encoding. Currently this is not possible due to the implicit str() call in PyObject_Print(). -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From guido at digicool.com Sat Feb 10 15:32:36 2001 From: guido at digicool.com (Guido van Rossum) Date: Sat, 10 Feb 2001 09:32:36 -0500 Subject: [Python-Dev] Re: __test__() should auto-exec at compile time In-Reply-To: Your message of "Fri, 09 Feb 2001 19:18:54 EST." 
                              References: 
                              Message-ID: <200102101432.JAA27274@cj20424-a.reston1.va.home.com> Running tests automatically whenever the source code is compiled is a bad idea. Python advertises itself as an interpreted language where compilation is invisible to the user. Tests often have side effects or take up serious amounts of resources, which would make them far from invisible. (For example, the socket test forks off a process and binds a socket to a port. While this port is not likely to be used by another server, it's not impossible, and one common effect (for me :-) is to find that two test runs interfere with each other. The socket test also takes about 10 seconds to run.) There are lots of situations where compilation occurs during the normal course of events, even for standard modules, and certainly for 3rd party library modules (for which the .pyc files aren't always created at installation time). So, running __test__ at every compilation is a no-no for me. That said, there are sane alternatives: e.g. distutils could run the tests automatically whenever it is asked to either build or install. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Sat Feb 10 15:39:47 2001 From: guido at digicool.com (Guido van Rossum) Date: Sat, 10 Feb 2001 09:39:47 -0500 Subject: [Python-Dev] Python 2.1 release schedule In-Reply-To: Your message of "Fri, 09 Feb 2001 19:28:12 EST." <14980.35612.516421.741505@w221.z064000254.bwi-md.dsl.cnc.net> References: <14980.35612.516421.741505@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <200102101439.JAA27319@cj20424-a.reston1.va.home.com> > I updated the Python 2.1 release schedule (PEP 226): > http://python.sourceforge.net/peps/pep-0226.html Thanks, Jeremy! > The schedule now has some realistic future release dates. The plan is > to move to beta 1 before the Python conference, probably issue a > second beta in mid- to late-March, and aim for a final release > sometime in April. The six-week period between first beta and final > release is about as long as the beta period for 2.0, which had many > more significant changes. Feels good to me. > I have also added a section on open issues as we had in the 2.0 > release schedule. If you are responsible for any major changes or > fixes before the first beta, please add them to that section or send > me mail about them. Remember that we are in feature freeze; only bug > fixes between now and beta 1. Here are a few issues that I wrote down recently. I'm a bit out of touch so some of these may already have been resolved... - New schema for .pyc magic number? (Eric, Tim) - Call to C function without keyword args should pass NULL, not {}. (Jeremy) - Reduce the errors for "from ... import *" to only those cases where it's a real problem for nested functions. (Jeremy) - Long ago, someone asked that 10**-15 should return a float rather than raise a ValueError. I think this is an OK change, and unlikely to break code :-) There may be a few other special cases like this, and of course ints and longs should act the same way. (Tim?) --Guido van Rossum (home page: http://www.python.org/~guido/) From esr at thyrsus.com Sat Feb 10 16:43:42 2001 From: esr at thyrsus.com (Eric S. 
Raymond) Date: Sat, 10 Feb 2001 10:43:42 -0500 Subject: [Python-Dev] Python 2.1 release schedule In-Reply-To: <200102101439.JAA27319@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Sat, Feb 10, 2001 at 09:39:47AM -0500 References: <14980.35612.516421.741505@w221.z064000254.bwi-md.dsl.cnc.net> <200102101439.JAA27319@cj20424-a.reston1.va.home.com> Message-ID: <20010210104342.A20657@thyrsus.com> Guido van Rossum 
                              : > - New schema for .pyc magic number? (Eric, Tim) It looked to me like Tim had a good scheme, but he never specified the latter (integrity-check) part of the header). -- 
                              Eric S. Raymond Everything that is really great and inspiring is created by the individual who can labor in freedom. -- Albert Einstein, in H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1988. From jeremy at alum.mit.edu Sat Feb 10 05:57:51 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 9 Feb 2001 23:57:51 -0500 (EST) Subject: [Python-Dev] Python 2.1 release schedule In-Reply-To: <200102101439.JAA27319@cj20424-a.reston1.va.home.com> References: <14980.35612.516421.741505@w221.z064000254.bwi-md.dsl.cnc.net> <200102101439.JAA27319@cj20424-a.reston1.va.home.com> Message-ID: <14980.51791.171007.616771@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "GvR" == Guido van Rossum 
                              writes: >> I have also added a section on open issues as we had in the 2.0 >> release schedule. If you are responsible for any major changes >> or fixes before the first beta, please add them to that section >> or send me mail about them. Remember that we are in feature >> freeze; only bug fixes between now and beta 1. GvR> Here are a few issues that I wrote down recently. I'm a bit GvR> out of touch so some of these may already have been resolved... [...] GvR> - Call to C function without keyword args should pass NULL, not GvR> {}. (Jeremy) GvR> - Reduce the errors for "from ... import *" to only those cases GvR> where it's a real problem for nested functions. (Jeremy) [...] These two are done and checked into CVS. Jeremy From guido at digicool.com Sat Feb 10 20:49:34 2001 From: guido at digicool.com (Guido van Rossum) Date: Sat, 10 Feb 2001 14:49:34 -0500 Subject: [Python-Dev] Removing the implicit str() call from printing API In-Reply-To: Your message of "Sat, 10 Feb 2001 15:22:30 +0100." <3A854EA6.B8A8F7E2@lemburg.com> References: <3A83F7DA.A94AB88E@lemburg.com> <3A85377B.BC6EAB9B@lemburg.com> <010f01c09361$8ff82910$e46940d5@hagrid> <200102101403.JAA27043@cj20424-a.reston1.va.home.com> <3A854EA6.B8A8F7E2@lemburg.com> Message-ID: <200102101949.OAA28167@cj20424-a.reston1.va.home.com> > Ok, I'll postpone this for 2.2 then... don't want to disappoint > our BDFL ;-) The alternative would be for you to summarize why the proposed change can't possibly break code, this late in the 2.1 release game. :-) > Perhaps we should rethink the whole complicated printing machinery > in Python for 2.2 and come up with a more generic solution to the > problem of letting to-be-printed objects pass through to the > stream objects ?! Yes, please! I'd love it if you could write up a PEP that analyzes the issues and proposes a solution. (Without an analysis of the issues, there's not much point in proposing a solution, IMO.) > Note that this is needed in order to be able to redirect sys.stdout > to a codec which then converts Unicode to some external encoding. > Currently this is not possible due to the implicit str() call in > PyObject_Print(). Excellent. I agree that it's a shame that Unicode I/O is so hard at the moment. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Sat Feb 10 20:54:17 2001 From: guido at digicool.com (Guido van Rossum) Date: Sat, 10 Feb 2001 14:54:17 -0500 Subject: [Python-Dev] Propaganda of the deed and other topics In-Reply-To: Your message of "Fri, 09 Feb 2001 17:51:52 EST." <20010209175152.H15205@thyrsus.com> References: <20010209150329.A15086@thyrsus.com> <200102092008.PAA23192@cj20424-a.reston1.va.home.com> <20010209175152.H15205@thyrsus.com> Message-ID: <200102101954.OAA28189@cj20424-a.reston1.va.home.com> Fine Eric. Thanks for the compliment! In this particular case, I believe that the resistance was more against any official indication that the string module would become obsolete, than against making the changes in the standard library. It was just deemed too much work to make the changes, and because string wasn't going to be obsolete soon, there was little motivation. I'm glad your manic episode took care of that. :-) In general, though, I must ask you to err on the careful side when the possibility of breaking existing code exists. You can apply the cowboy approach to discussions as well as to coding! 
> Alcohol still kills more people every year than all `illegal' drugs put > together, and Prohibition only made it worse. Oppose the War On Some Drugs! Hey, finally a signature quote someone from the Netherlands wouldn't find offensive! --Guido van Rossum (home page: http://www.python.org/~guido/) From esr at thyrsus.com Sat Feb 10 21:00:03 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Sat, 10 Feb 2001 15:00:03 -0500 Subject: [Python-Dev] Propaganda of the deed and other topics In-Reply-To: <200102101954.OAA28189@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Sat, Feb 10, 2001 at 02:54:17PM -0500 References: <20010209150329.A15086@thyrsus.com> <200102092008.PAA23192@cj20424-a.reston1.va.home.com> <20010209175152.H15205@thyrsus.com> <200102101954.OAA28189@cj20424-a.reston1.va.home.com> Message-ID: <20010210150003.A21451@thyrsus.com> Guido van Rossum 
                              : > In general, though, I must ask you to err on the careful side when the > possibility of breaking existing code exists. I try to. You notice I haven't committed any changes to the interpreter core. This is a good example of what I mean by picking my shots carefully... -- 
                              Eric S. Raymond The right of the citizens to keep and bear arms has justly been considered as the palladium of the liberties of a republic; since it offers a strong moral check against usurpation and arbitrary power of rulers; and will generally, even if these are successful in the first instance, enable the people to resist and triumph over them." -- Supreme Court Justice Joseph Story of the John Marshall Court From mwh21 at cam.ac.uk Sat Feb 10 21:46:27 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 10 Feb 2001 20:46:27 +0000 Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta In-Reply-To: Neil Schemenauer's message of "Fri, 9 Feb 2001 08:21:36 -0800" References: 
                              <3A841291.CAAAA3AD@redhat.com> <20010209082136.A15525@glacier.fnational.com> Message-ID: 
                              Neil Schemenauer 
                              writes: > On Fri, Feb 09, 2001 at 10:53:53AM -0500, Michael Tiemann wrote: > > OTOH, if somebody can make a really definitive statement that I've > > misinterpreted the responses, and that 2.x _as_ python should just work, > > and if it doesn't, it's a bug that needs to shake out, I can address that > > with our OS team. > > I'm not sure what you mean by "should just work". Source > compatibility between 1.5.2 and 2.0 is very high. The 2.0 NEWS > file should list all the changes (single argument append and > socket addresses are the big ones). The two versions are _not_ > binary compatible. Python bytecode and extension modules have to > be recompiled. I don't know if this is a problem for the Red Hat > 7.1 release. Another issue is that there is an increasing body of code out there that doesn't work with 1.5.2. Practically all the code I write uses string methods and/or augmented assignment, for example, and I occasionally get email saying "I tried to run your code and got this AttributeEror: join error message". Also there have been some small changes at the C API level around memory management, and I'd much rather program to Python 2.0 here because its APIs are *better*. The world will be a better place when everybody runs Python 2.x, and distributions make a lot of difference here. Just my ?0.02. Cheers, M. -- To summarise the summary of the summary:- people are a problem. -- The Hitch-Hikers Guide to the Galaxy, Episode 12 From mal at lemburg.com Sat Feb 10 23:43:37 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Sat, 10 Feb 2001 23:43:37 +0100 Subject: [Python-Dev] Removing the implicit str() call from printing API References: <3A83F7DA.A94AB88E@lemburg.com> <3A85377B.BC6EAB9B@lemburg.com> <010f01c09361$8ff82910$e46940d5@hagrid> <200102101403.JAA27043@cj20424-a.reston1.va.home.com> <3A854EA6.B8A8F7E2@lemburg.com> <200102101949.OAA28167@cj20424-a.reston1.va.home.com> Message-ID: <3A85C419.99EDCF14@lemburg.com> Guido van Rossum wrote: > > > Ok, I'll postpone this for 2.2 then... don't want to disappoint > > our BDFL ;-) > > The alternative would be for you to summarize why the proposed change > can't possibly break code, this late in the 2.1 release game. :-) Well, the only code it could possibly break is code which 1. expects a unique string object as argument 2. uses the s# parser marker and is used with an Unicode object containing non-ASCII characters Unfortunately, I'm not sure about how much code is out there which assumes 1. cStringIO.c is one example and given its heritage, there probably is a lot more in the Zope camp ;-) > > Perhaps we should rethink the whole complicated printing machinery > > in Python for 2.2 and come up with a more generic solution to the > > problem of letting to-be-printed objects pass through to the > > stream objects ?! > > Yes, please! I'd love it if you could write up a PEP that analyzes > the issues and proposes a solution. (Without an analysis of the > issues, there's not much point in proposing a solution, IMO.) Ok... on the plane to the conference, maybe. > > Note that this is needed in order to be able to redirect sys.stdout > > to a codec which then converts Unicode to some external encoding. > > Currently this is not possible due to the implicit str() call in > > PyObject_Print(). > > Excellent. I agree that it's a shame that Unicode I/O is so hard at > the moment. 
Since this is what we're after here, we might as well consider possibilities to get the input side of things equally in line with the codec idea, e.g. what would happen if .read() returns a Unicode object ? -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From andy at reportlab.com Sun Feb 11 00:43:08 2001 From: andy at reportlab.com (Andy Robinson) Date: Sat, 10 Feb 2001 23:43:08 -0000 Subject: [Python-Dev] Removing the implicit str() call from printing API In-Reply-To: 
                              Message-ID: 
                              > So far, noone has commented on this idea. > > I would like to go ahead and check in patch which passes through > Unicode objects to the file-object's .write() method while leaving > the standard str() call for all other objects in place. > I'm behind this in principle. Here's an example of why: >>> tokyo_utf8 = "??" # the kanji for Tokyo, trust me... >>> print tokyo_utf8 # this is 8-bit and prints fine ?????? >>> tokyo_uni = codecs.utf_8_decode(tokyo_utf8)[0] >>> print tokyo_uni # try to print the kanji Traceback (innermost last): File "
                              ", line 1, in ? UnicodeError: ASCII encoding error: ordinal not in range(128) >>> Let's say I am generating HTML pages and working with Unicode strings containing data > 127. It is far more natural to write a lot of print statements than to have to (a) concatenate all my strings or (b) do this on every line that prints something: print tokyo_utf8.encode(my_encoding) We could trivially make a file object which knows to convert the output to, say, Shift-JIS, or even redirect sys.stdout to such an object. Then we could just print Unicode strings to it. Effectively, the decision on whether a string is printable is deferred to the printing device. I think this is a good pattern which encourages people to work in Unicode. I know nothing of the Python internals and cannot help weigh up how serious the breakage is, but it would be a logical feature to add. - Andy Robinson From ping at lfw.org Sun Feb 11 01:22:48 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Sat, 10 Feb 2001 16:22:48 -0800 (PST) Subject: [Python-Dev] Fatal scoping error from the twilight zone Message-ID: 
                              Houston, we may have a problem... The following harmless-looking function: def getpager(): """Decide what method to use for paging through text.""" if type(sys.stdout) is not types.FileType: return plainpager if not sys.stdin.isatty() or not sys.stdout.isatty(): return plainpager if os.environ.has_key('PAGER'): return lambda text: pipepager(text, os.environ['PAGER']) if sys.platform in ['win', 'win32', 'nt']: return lambda text: tempfilepager(text, 'more') if hasattr(os, 'system') and os.system('less 2>/dev/null') == 0: return lambda text: pipepager(text, 'less') import tempfile filename = tempfile.mktemp() open(filename, 'w').close() try: if hasattr(os, 'system') and os.system('more %s' % filename) == 0: return lambda text: pipepager(text, 'more') else: return ttypager finally: os.unlink(filename) produces localhost[1047]% ./python ~/dev/htmldoc/pydoc.py Fatal Python error: unknown scope for pipepager in getpager(5) in /home/ping/dev/htmldoc/pydoc.py Aborted (core dumped) localhost[1048]% with a clean build on a CVS tree that i updated just minutes ago. I was able to reduce this test case to the following: localhost[1011]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def getpager(): ... if os.environ.has_key('x'): ... return lambda t: pipepager(t, os.environ['x']) ... Fatal Python error: unknown scope for pipepager in getpager (1) Aborted (core dumped) but not before coming across a bewildering series of working and non-working cases that left me wondering whether i was hallucinating. Strange as it may seem, for example, replacing the string constant 'x' with a variable makes the latter example work. Even stranger, choosing a different name for the variable t can make it work in some cases but not others! Please try the following script and see if you get weird results: code = '''def getpager(): if os.environ.has_key('x'): return lambda %s: pipepager(%s, os.environ['x'])''' import string, os, sys results = {} for char in string.letters: f = open('/tmp/test.py', 'w') f.write(code % (char, char) + '\n') f.close() sys.stderr.write('%s: ' % char) status = os.system('python /tmp/test.py > /dev/null') >> 8 sys.stderr.write('%s\n' % status) results.setdefault(status, []).append(char) for status in results.keys(): if not status: print 'Python likes these letters:', else: print 'Status %d for these letters:' % status, print results[status] I get this, consistently every time! Status 134 for these letters: ['b', 'c', 'd', 'g', 'h', 'j', 'k', 'l', 'o', 'p', 'r', 's', 't', 'w', 'x', 'z', 'B', 'C', 'D', 'G', 'H', 'J', 'K', 'L', 'O', 'P', 'R', 'S', 'T', 'W', 'X', 'Z'] Python likes these letters: ['a', 'e', 'f', 'i', 'm', 'n', 'q', 'u', 'v', 'y', 'A', 'E', 'F', 'I', 'M', 'N', 'Q', 'U', 'V', 'Y'] A complete log of my interactive sessions is attached. I hope somebody can reproduce at least some of this to assure me that i'm not going mad. :) -- ?!ng Happiness comes more from loving than being loved; and often when our affection seems wounded it is is only our vanity bleeding. To love, and to be hurt often, and to love again--this is the brave and happy life. -- J. E. Buchrose -------------- next part -------------- localhost[1001]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def getpager(): ... 
"""Decide what method to use for paging through text.""" ... if type(sys.stdout) is not types.FileType: ... return plainpager ... if not sys.stdin.isatty() or not sys.stdout.isatty(): ... return plainpager ... if os.environ.has_key('PAGER'): ... return lambda text: pipepager(text, os.environ['PAGER']) ... if sys.platform in ['win', 'win32', 'nt']: ... return lambda text: tempfilepager(text, 'more') ... if hasattr(os, 'system') and os.system('less 2>/dev/null') == 0: ... return lambda text: pipepager(text, 'less') ... Fatal Python error: unknown scope for pipepager in getpager (1) Aborted (core dumped) localhost[1002]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def getpager(): ... """Decide what method to use for paging through text.""" ... if type(sys.stdout) is not types.FileType: ... return plainpager ... if not sys.stdin.isatty() or not sys.stdout.isatty(): ... return plainpager ... if os.environ.has_key('PAGER'): ... return lambda text: pipepager(text, os.environ['PAGER']) ... if sys.platform in ['win', 'win32', 'nt']: ... return lambda text: tempfilepager(text, 'more') ... if hasattr(os, 'system') and os.system('less 2>/dev/null') == 0: ... return lambda text: pipepager(text, 'less') ... Fatal Python error: unknown scope for pipepager in getpager (1) Aborted (core dumped) localhost[1003]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def getpager(): ... return lambda text: pipepager(text) ... >>> def getpager(): ... if os.environ.has_key('PAGER'): ... return lambda text: pipepager(text, os.environ['PAGER']) ... if hasattr(os, 'system') and os.system('less 2>/dev/null') == 0: ... return lambda text: pipepager(text, 'less') ... Fatal Python error: unknown scope for pipepager in getpager (1) Aborted (core dumped) localhost[1004]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def f(): ... if a: ... return lambda t: g(t) ... if b: ... return lambda t: h(t) ... >>> def getpager(): ... if os.environ.has_key('PAGER'): ... return lambda text: pipepager(text) ... if hasattr(os, 'system') and os.system('less 2>/dev/null') == 0: ... return lambda text: pipepager(text) ... >>> def getpager(): ... if os.environ.has_key('PAGER'): ... return lambda text: pipepager(text, 1) ... if hasattr(os, 'system') and os.system('less 2>/dev/null') == 0: ... return lambda text: pipepager(text, 1) ... >>> def getpager(): ... if os.environ.has_key('PAGER'): ... return lambda text: pipepager(text, os.environ['PAGER']) ... if hasattr(os, 'system') and os.system('less 2>/dev/null') == 0: ... return lambda text: pipepager(text, 1) ... Fatal Python error: unknown scope for pipepager in getpager (1) Aborted (core dumped) localhost[1005]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def f() File "
                              ", line 1 def f() ^ SyntaxError: invalid syntax >>> localhost[1006]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def f(): ... if os.environ.has_key(x): ... return lambda y: z(y, os.environ[x]) ... >>> def getpager(): ... if os.environ.has_key('PAGER'): ... return lambda text: pipepager(text, os.environ['PAGER']) ... Fatal Python error: unknown scope for pipepager in getpager (1) Aborted (core dumped) localhost[1007]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def getpager(): ... if os.environ.has_key(x): ... return lambda text: pipepager(text, os.environ[x]) ... >>> def getpager(): ... if os.environ.has_key('x'): ... return lambda text: pipepager(text, os.environ['x']) ... Fatal Python error: unknown scope for pipepager in getpager (1) Aborted (core dumped) localhost[1008]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def f(): ... if os.environ.has_key('x'): ... return lambda y: z(y, os.environ['x']) ... >>> def getpager(): ... if os.environ.has_key('x'): ... return lambda text: pipepager(text, os.environ['x']) ... Fatal Python error: unknown scope for pipepager in getpager (1) Aborted (core dumped) localhost[1009]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def getpager(): ... if os.environ.has_key('x'): ... return lambda y: z(y, os.environ['x']) ... >>> def getpager(): ... if os.environ.has_key('x'): ... return lambda text: z(text, os.environ['x']) ... >>> def getpager(): ... if os.environ.has_key('x'): ... return lambda y: pipepager(y, os.environ['x']) ... >>> def getpager(): ... if os.environ.has_key('x'): ... return lambda te: pipepager(te, os.environ['x']) ... Fatal Python error: unknown scope for pipepager in getpager (1) Aborted (core dumped) localhost[1010]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def getpager(): ... if os.environ.has_key('x'): ... return lambda t: pipepager(t, os.environ['x']) ... Fatal Python error: unknown scope for pipepager in getpager (1) Aborted (core dumped) localhost[1011]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def getpager(): ... if os.environ.has_key('x'): ... return lambda y: pipepager(y, os.environ['x']) ... >>> def getpager(): ... if os.environ.has_key('x'): ... return lambda h: pipepager(h, os.environ['x']) ... Fatal Python error: unknown scope for pipepager in getpager (1) Aborted (core dumped) localhost[1012]% localhost[1012]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> code = '''def getpager(): ... if os.environ.has_key('x'): ... return lambda %s: pipepager(%s, os.environ['x'])''' >>> >>> import string >>> import os >>> for char in string.letters: ... 
f = open('/tmp/test.py', 'w') ... f.write(code % (char, char) + '\n') ... f.close() ... import sys ... sys.stderr.write('%s: ' % char) ... r = os.system('python /tmp/test.py > /dev/null') ... sys.stderr.write('%s\n' % r) ... a: 0 b: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 c: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 d: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 e: 0 f: 0 g: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 h: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 i: 0 j: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 k: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 l: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 m: 0 n: 0 o: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 p: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 q: 0 r: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 s: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 t: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 u: 0 v: 0 w: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 x: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 y: 0 z: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 A: 0 B: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 C: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 D: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 E: 0 F: 0 G: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 H: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 I: 0 J: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 K: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 L: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 M: 0 N: 0 O: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 P: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 Q: 0 R: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 S: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 T: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 U: 0 V: 0 W: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 X: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 Y: 0 Z: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 >>> localhost[1013]% cat /tmp/multitest.py code = '''def getpager(): if os.environ.has_key('x'): return lambda %s: pipepager(%s, os.environ['x'])''' import string, os, sys results = {} for char in string.letters: f = open('/tmp/test.py', 'w') f.write(code % (char, char) + '\n') f.close() sys.stderr.write('%s: ' % char) status = os.system('python /tmp/test.py > /dev/null') >> 8 sys.stderr.write('%s\n' % status) results.setdefault(status, []).append(char) for status in results.keys(): if not status: print 'Python likes these letters:', else: print 'Status %d for these letters:' % status, print results[status] localhost[1014]% ./python /tmp/multitest.py a: 0 b: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 c: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 d: Fatal Python error: unknown scope for pipepager in 
getpager(1) in /tmp/test.py 134 e: 0 f: 0 g: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 h: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 i: 0 j: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 k: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 l: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 m: 0 n: 0 o: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 p: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 q: 0 r: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 s: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 t: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 u: 0 v: 0 w: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 x: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 y: 0 z: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 A: 0 B: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 C: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 D: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 E: 0 F: 0 G: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 H: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 I: 0 J: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 K: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 L: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 M: 0 N: 0 O: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 P: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 Q: 0 R: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 S: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 T: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 U: 0 V: 0 W: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 X: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 Y: 0 Z: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 Status 134 for these letters: ['b', 'c', 'd', 'g', 'h', 'j', 'k', 'l', 'o', 'p', 'r', 's', 't', 'w', 'x', 'z', 'B', 'C', 'D', 'G', 'H', 'J', 'K', 'L', 'O', 'P', 'R', 'S', 'T', 'W', 'X', 'Z'] Python likes these letters: ['a', 'e', 'f', 'i', 'm', 'n', 'q', 'u', 'v', 'y', 'A', 'E', 'F', 'I', 'M', 'N', 'Q', 'U', 'V', 'Y'] localhost[1015]% From ping at lfw.org Sun Feb 11 01:41:41 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Sat, 10 Feb 2001 16:41:41 -0800 (PST) Subject: [Python-Dev] Removing the implicit str() call from printing API In-Reply-To: 
                              Message-ID: 
                              On Sat, 10 Feb 2001, Andy Robinson wrote: > > So far, noone has commented on this idea. > > > > I would like to go ahead and check in patch which passes through > > Unicode objects to the file-object's .write() method while leaving > > the standard str() call for all other objects in place. > > > I'm behind this in principle. Here's an example of why: > > >>> tokyo_utf8 = "??" # the kanji for Tokyo, trust me... > >>> print tokyo_utf8 # this is 8-bit and prints fine > ?????? > >>> tokyo_uni = codecs.utf_8_decode(tokyo_utf8)[0] > >>> print tokyo_uni # try to print the kanji > Traceback (innermost last): > File "
                              ", line 1, in ? > UnicodeError: ASCII encoding error: ordinal not in range(128) Something like the following looks reasonable to me; the added complexity is that the file object now remembers an encoder/decoder pair in its state (the API might give the appearance of remembering just the codec name, but we want to avoid doing codecs.lookup() on every write), and uses it whenever write() is passed a Unicode object. >>> file = open('outputfile', 'w', 'utf-8') >>> file.encoding 'utf-8' >>> file.write(tokyo_uni) # tokyo_utf8 gets written to file >>> file.close() Open questions: - If an encoding is specified, should file.read() then always return Unicode objects? - If an encoding is specified, should file.write() only accept Unicode objects and not bytestrings? - Is the encoding attribute mutable? (I would prefer not, but then how to apply an encoding to sys.stdout?) Side question: i noticed that the Lib/encodings directory supports quite a few code pages, including Greek, Russian, but there are no ISO-2022 CJK or JIS codecs. Is this just because no one felt like writing one, or is there a reason not to include one? It seems to me it might be nice to include some codecs for the most common CJK encodings -- that recent note on the popularity of Python in Korea comes to mind. -- ?!ng Happiness comes more from loving than being loved; and often when our affection seems wounded it is is only our vanity bleeding. To love, and to be hurt often, and to love again--this is the brave and happy life. -- J. E. Buchrose From ping at lfw.org Sun Feb 11 02:42:49 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Sat, 10 Feb 2001 17:42:49 -0800 (PST) Subject: [Python-Dev] import succeeds on second try? Message-ID: 
                              This is weird: localhost[1118]% ll spam* -rw-r--r-- 1 ping users 69 Feb 10 17:40 spam.py localhost[1119]% ll eggs* /bin/ls: eggs*: No such file or directory localhost[1120]% cat spam.py a = 1 print 'hello' import eggs # no such file print 'goodbye' b = 2 localhost[1121]% python Python 2.1a2 (#22, Feb 10 2001, 16:15:14) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> import spam hello Traceback (most recent call last): File "
                              ", line 1, in ? File "spam.py", line 3, in ? import eggs # no such file ImportError: No module named eggs >>> import spam >>> dir(spam) ['__builtins__', '__doc__', '__file__', '__name__', 'a'] >>> localhost[1122]% ll spam* -rw-r--r-- 1 ping users 69 Feb 10 17:40 spam.py -rw-r--r-- 1 ping users 208 Feb 10 17:41 spam.pyc localhost[1123]% ll eggs* /bin/ls: eggs*: No such file or directory Why did Python write spam.pyc after the import failed? -- ?!ng Happiness comes more from loving than being loved; and often when our affection seems wounded it is is only our vanity bleeding. To love, and to be hurt often, and to love again--this is the brave and happy life. -- J. E. Buchrose From ping at lfw.org Sun Feb 11 03:20:30 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Sat, 10 Feb 2001 18:20:30 -0800 (PST) Subject: [Python-Dev] test_inspect fails again: segfault in compile Message-ID: 
                              Sorry to be the bearer of so much bad news today. When i run the tests for inspect.py, a recently-built Python crashes: localhost[1168]% !p python test_inspect.py Segmentation fault (core dumped) gdb says: (gdb) where #0 0x806021c in symtable_params (st=0x80e9678, n=0x8149340) at Python/compile.c:4633 #1 0x806004f in symtable_funcdef (st=0x80e9678, n=0x8111368) at Python/compile.c:4541 #2 0x805fc6e in symtable_node (st=0x80e9678, n=0x80eaac0) at Python/compile.c:4417 #3 0x8060007 in symtable_node (st=0x80e9678, n=0x811c1c0) at Python/compile.c:4528 #4 0x805f23e in symtable_build (c=0xbffff2a4, n=0x811c1c0) at Python/compile.c:3974 #5 0x805ee8a in jcompile (n=0x811c1c0, filename=0x81268e4 "@test", base=0x0) at Python/compile.c:3853 #6 0x805ed7c in PyNode_Compile (n=0x811c1c0, filename=0x81268e4 "@test") at Python/compile.c:3806 #7 0x8063476 in parse_source_module (pathname=0x81268e4 "@test", fp=0x81271c0) at Python/import.c:611 #8 0x8063637 in load_source_module (name=0x812a1dc "testmod", pathname=0x81268e4 "@test", fp=0x81271c0) at Python/import.c:731 #9 0x8065161 in imp_load_source (self=0x0, args=0x80e838c) at Python/import.c:2178 #10 0x8058655 in call_cfunction (func=0x8124a08, arg=0x80e838c, kw=0x0) at Python/ceval.c:2749 #11 0x8058550 in call_object (func=0x8124a08, arg=0x80e838c, kw=0x0) at Python/ceval.c:2703 #12 0x8058c61 in do_call (func=0x8124a08, pp_stack=0xbffff908, na=2, nk=0) at Python/ceval.c:3014 #13 0x8057228 in eval_code2 (co=0x815eff0, globals=0x80c3544, locals=0x80c3544, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:1895 #14 0x8054787 in PyEval_EvalCode (co=0x815eff0, globals=0x80c3544, locals=0x80c3544) at Python/ceval.c:336 #15 0x8068f44 in run_node (n=0x8106f30, filename=0xbffffbb4 "test_inspect.py", globals=0x80c3544, locals=0x80c3544) at Python/pythonrun.c:920 #16 0x8068f09 in run_err_node (n=0x8106f30, filename=0xbffffbb4 "test_inspect.py", globals=0x80c3544, locals=0x80c3544) at Python/pythonrun.c:908 #17 0x8068ee7 in PyRun_FileEx (fp=0x80bf6a8, filename=0xbffffbb4 "test_inspect.py", start=257, globals=0x80c3544, locals=0x80c3544, closeit=1) at Python/pythonrun.c:900 #18 0x80686bc in PyRun_SimpleFileEx (fp=0x80bf6a8, filename=0xbffffbb4 "test_inspect.py", closeit=1) at Python/pythonrun.c:613 #19 0x8068310 in PyRun_AnyFileEx (fp=0x80bf6a8, filename=0xbffffbb4 "test_inspect.py", closeit=1) at Python/pythonrun.c:467 #20 0x8051bb0 in Py_Main (argc=1, argv=0xbffffa84) at Modules/main.c:292 #21 0x80516d6 in main (argc=2, argv=0xbffffa84) at Modules/python.c:10 #22 0x40064cb3 in __libc_start_main (main=0x80516c8 
                              , argc=2, argv=0xbffffa84, init=0x8050bd8 <_init>, fini=0x80968dc <_fini>, rtld_fini=0x4000a350 <_dl_fini>, stack_end=0xbffffa7c) at ../sysdeps/generic/libc-start.c:78 The contents of test_inspect.py and of @test (the Python module which test_inspect writes out and imports) are attached. n_lineno is 8, which points to the hairy line: def spam(a, b, c, d=3, (e, (f,))=(4, (5,)), *g, **h): The following smaller test case reproduces the error: Python 2.1a2 (#22, Feb 10 2001, 16:15:14) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def spam(a, b, c, d=3, (e, (f,))=(4, (5,)), *g, **h): ... pass ... Segmentation fault (core dumped) After further testing, it seems to come down to this: Python 2.1a2 (#22, Feb 10 2001, 16:15:14) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def spam(a, b): pass ... >>> def spam(a=3, b): pass ... SyntaxError: non-default argument follows default argument >>> def spam(a=3, b=4): pass ... >>> def spam(a, (b,)): pass ... >>> def spam(a=3, (b,)): pass ... Segmentation fault (core dumped) Python 2.1a2 (#22, Feb 10 2001, 16:15:14) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def spam(a=3, (b,)=(4,)): pass ... Segmentation fault (core dumped) -- ?!ng Happiness comes more from loving than being loved; and often when our affection seems wounded it is is only our vanity bleeding. To love, and to be hurt often, and to love again--this is the brave and happy life. -- J. E. Buchrose -------------- next part -------------- source = '''# line 1 'A module docstring.' import sys, inspect # line 5 # line 7 def spam(a, b, c, d=3, (e, (f,))=(4, (5,)), *g, **h): eggs(b + d, c + f) # line 11 def eggs(x, y): "A docstring." 
global fr, st fr = inspect.currentframe() st = inspect.stack() p = x q = y / 0 # line 20 class StupidGit: """A longer, indented docstring.""" # line 27 def abuse(self, a, b, c): """Another \tdocstring containing \ttabs \t """ self.argue(a, b, c) # line 40 def argue(self, a, b, c): try: spam(a, b, c) except: self.ex = sys.exc_info() self.tr = inspect.trace() # line 48 class MalodorousPervert(StupidGit): pass class ParrotDroppings: pass class FesteringGob(MalodorousPervert, ParrotDroppings): pass ''' from test_support import TestFailed, TESTFN import sys, imp, os, string def test(assertion, message, *args): if not assertion: raise TestFailed, message % args import inspect file = open(TESTFN, 'w') file.write(source) file.close() mod = imp.load_source('testmod', TESTFN) def istest(func, exp): obj = eval(exp) test(func(obj), '%s(%s)' % (func.__name__, exp)) for other in [inspect.isbuiltin, inspect.isclass, inspect.iscode, inspect.isframe, inspect.isfunction, inspect.ismethod, inspect.ismodule, inspect.istraceback]: if other is not func: test(not other(obj), 'not %s(%s)' % (other.__name__, exp)) git = mod.StupidGit() try: 1/0 except: tb = sys.exc_traceback istest(inspect.isbuiltin, 'sys.exit') istest(inspect.isbuiltin, '[].append') istest(inspect.isclass, 'mod.StupidGit') istest(inspect.iscode, 'mod.spam.func_code') istest(inspect.isframe, 'tb.tb_frame') istest(inspect.isfunction, 'mod.spam') istest(inspect.ismethod, 'mod.StupidGit.abuse') istest(inspect.ismethod, 'git.argue') istest(inspect.ismodule, 'mod') istest(inspect.istraceback, 'tb') classes = inspect.getmembers(mod, inspect.isclass) test(classes == [('FesteringGob', mod.FesteringGob), ('MalodorousPervert', mod.MalodorousPervert), ('ParrotDroppings', mod.ParrotDroppings), ('StupidGit', mod.StupidGit)], 'class list') tree = inspect.getclasstree(map(lambda x: x[1], classes), 1) test(tree == [(mod.ParrotDroppings, ()), (mod.StupidGit, ()), [(mod.MalodorousPervert, (mod.StupidGit,)), [(mod.FesteringGob, (mod.MalodorousPervert, mod.ParrotDroppings)) ] ] ], 'class tree') functions = inspect.getmembers(mod, inspect.isfunction) test(functions == [('eggs', mod.eggs), ('spam', mod.spam)], 'function list') test(inspect.getdoc(mod) == 'A module docstring.', 'getdoc(mod)') test(inspect.getcomments(mod) == '# line 1\n', 'getcomments(mod)') test(inspect.getmodule(mod.StupidGit) == mod, 'getmodule(mod.StupidGit)') test(inspect.getfile(mod.StupidGit) == TESTFN, 'getfile(mod.StupidGit)') test(inspect.getsourcefile(mod.spam) == TESTFN, 'getsourcefile(mod.spam)') test(inspect.getsourcefile(git.abuse) == TESTFN, 'getsourcefile(git.abuse)') def sourcerange(top, bottom): lines = string.split(source, '\n') return string.join(lines[top-1:bottom], '\n') + '\n' test(inspect.getsource(git.abuse) == sourcerange(29, 39), 'getsource(git.abuse)') test(inspect.getsource(mod.StupidGit) == sourcerange(21, 46), 'getsource(mod.StupidGit)') test(inspect.getdoc(mod.StupidGit) == 'A longer,\n\nindented\n\ndocstring.', 'getdoc(mod.StupidGit)') test(inspect.getdoc(git.abuse) == 'Another\n\ndocstring\n\ncontaining\n\ntabs\n\n', 'getdoc(git.abuse)') test(inspect.getcomments(mod.StupidGit) == '# line 20\n', 'getcomments(mod.StupidGit)') args, varargs, varkw, defaults = inspect.getargspec(mod.eggs) test(args == ['x', 'y'], 'mod.eggs args') test(varargs == None, 'mod.eggs varargs') test(varkw == None, 'mod.eggs varkw') test(defaults == None, 'mod.eggs defaults') test(inspect.formatargspec(args, varargs, varkw, defaults) == '(x, y)', 'mod.eggs formatted argspec') args, varargs, varkw, 
defaults = inspect.getargspec(mod.spam) test(args == ['a', 'b', 'c', 'd', ['e', ['f']]], 'mod.spam args') test(varargs == 'g', 'mod.spam varargs') test(varkw == 'h', 'mod.spam varkw') test(defaults == (3, (4, (5,))), 'mod.spam defaults') test(inspect.formatargspec(args, varargs, varkw, defaults) == '(a, b, c, d=3, (e, (f,))=(4, (5,)), *g, **h)', 'mod.spam formatted argspec') git.abuse(7, 8, 9) istest(inspect.istraceback, 'git.ex[2]') istest(inspect.isframe, 'mod.fr') test(len(git.tr) == 2, 'trace() length') test(git.tr[0][1:] == ('@test', 9, 'spam', [' eggs(b + d, c + f)\n'], 0), 'trace() row 1') test(git.tr[1][1:] == ('@test', 18, 'eggs', [' q = y / 0\n'], 0), 'trace() row 2') test(len(mod.st) >= 5, 'stack() length') test(mod.st[0][1:] == ('@test', 16, 'eggs', [' st = inspect.stack()\n'], 0), 'stack() row 1') test(mod.st[1][1:] == ('@test', 9, 'spam', [' eggs(b + d, c + f)\n'], 0), 'stack() row 2') test(mod.st[2][1:] == ('@test', 43, 'argue', [' spam(a, b, c)\n'], 0), 'stack() row 3') test(mod.st[3][1:] == ('@test', 39, 'abuse', [' self.argue(a, b, c)\n'], 0), 'stack() row 4') # row 4 is in test_inspect.py args, varargs, varkw, locals = inspect.getargvalues(mod.fr) test(args == ['x', 'y'], 'mod.fr args') test(varargs == None, 'mod.fr varargs') test(varkw == None, 'mod.fr varkw') test(locals == {'x': 11, 'p': 11, 'y': 14}, 'mod.fr locals') test(inspect.formatargvalues(args, varargs, varkw, locals) == '(x=11, y=14)', 'mod.fr formatted argvalues') args, varargs, varkw, locals = inspect.getargvalues(mod.fr.f_back) test(args == ['a', 'b', 'c', 'd', ['e', ['f']]], 'mod.fr.f_back args') test(varargs == 'g', 'mod.fr.f_back varargs') test(varkw == 'h', 'mod.fr.f_back varkw') test(inspect.formatargvalues(args, varargs, varkw, locals) == '(a=7, b=8, c=9, d=3, (e=4, (f=5,)), *g=(), **h={})', 'mod.fr.f_back formatted argvalues') os.unlink(TESTFN) -------------- next part -------------- # line 1 'A module docstring.' import sys, inspect # line 5 # line 7 def spam(a, b, c, d=3, (e, (f,))=(4, (5,)), *g, **h): eggs(b + d, c + f) # line 11 def eggs(x, y): "A docstring." global fr, st fr = inspect.currentframe() st = inspect.stack() p = x q = y / 0 # line 20 class StupidGit: """A longer, indented docstring.""" # line 27 def abuse(self, a, b, c): """Another docstring containing tabs """ self.argue(a, b, c) # line 40 def argue(self, a, b, c): try: spam(a, b, c) except: self.ex = sys.exc_info() self.tr = inspect.trace() # line 48 class MalodorousPervert(StupidGit): pass class ParrotDroppings: pass class FesteringGob(MalodorousPervert, ParrotDroppings): pass From guido at digicool.com Sun Feb 11 03:29:39 2001 From: guido at digicool.com (Guido van Rossum) Date: Sat, 10 Feb 2001 21:29:39 -0500 Subject: [Python-Dev] import succeeds on second try? In-Reply-To: Your message of "Sat, 10 Feb 2001 17:42:49 PST." 
                              References: 
                              Message-ID: <200102110229.VAA29050@cj20424-a.reston1.va.home.com> > This is weird: > > localhost[1118]% ll spam* > -rw-r--r-- 1 ping users 69 Feb 10 17:40 spam.py > localhost[1119]% ll eggs* > /bin/ls: eggs*: No such file or directory > localhost[1120]% cat spam.py > a = 1 > print 'hello' > import eggs # no such file > print 'goodbye' > b = 2 > localhost[1121]% python > Python 2.1a2 (#22, Feb 10 2001, 16:15:14) > [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 > Type "copyright", "credits" or "license" for more information. > >>> import spam > hello > Traceback (most recent call last): > File "
                              ", line 1, in ? > File "spam.py", line 3, in ? > import eggs # no such file > ImportError: No module named eggs > >>> import spam > >>> dir(spam) > ['__builtins__', '__doc__', '__file__', '__name__', 'a'] > >>> > localhost[1122]% ll spam* > -rw-r--r-- 1 ping users 69 Feb 10 17:40 spam.py > -rw-r--r-- 1 ping users 208 Feb 10 17:41 spam.pyc > localhost[1123]% ll eggs* > /bin/ls: eggs*: No such file or directory > > Why did Python write spam.pyc after the import failed? That's standard stuff; happens all the time. 1. The module gets compiled to bytecode, and the compiled bytecode gets written to the .pyc file, before any attempt to execute is. 2. The spam module gets entered into sys.modules at the *start* of its execution, for a number of reasons having to do with mutually recursive modules. 3. The execution fails on the "import eggs" but that doesn't undo the sys.modules assignment. 4. The second import of spam finds an incomplete module in sys.modyles, but doesn't know that, so returns it. --Guido van Rossum (home page: http://www.python.org/~guido/) From ping at lfw.org Sun Feb 11 03:30:46 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Sat, 10 Feb 2001 18:30:46 -0800 (PST) Subject: [Python-Dev] import succeeds on second try? In-Reply-To: <200102110229.VAA29050@cj20424-a.reston1.va.home.com> Message-ID: 
                              On Sat, 10 Feb 2001, Guido van Rossum wrote: > > That's standard stuff; happens all the time. Hrmm... it makes me feel icky. > 1. The module gets compiled to bytecode, and the compiled bytecode > gets written to the .pyc file, before any attempt to execute is. > > 2. The spam module gets entered into sys.modules at the *start* of its > execution, for a number of reasons having to do with mutually > recursive modules. > > 3. The execution fails on the "import eggs" but that doesn't undo the > sys.modules assignment. > > 4. The second import of spam finds an incomplete module in > sys.modyles, but doesn't know that, so returns it. Is there a reason not to insert step 3.5? 3.5. If the import fails, remove the incomplete module from sys.modules. -- ?!ng From guido at digicool.com Sun Feb 11 04:00:31 2001 From: guido at digicool.com (Guido van Rossum) Date: Sat, 10 Feb 2001 22:00:31 -0500 Subject: [Python-Dev] import succeeds on second try? In-Reply-To: Your message of "Sat, 10 Feb 2001 18:30:46 PST." 
                              References: 
                              Message-ID: <200102110300.WAA29163@cj20424-a.reston1.va.home.com> > On Sat, 10 Feb 2001, Guido van Rossum wrote: > > > > That's standard stuff; happens all the time. > > Hrmm... it makes me feel icky. Maybe, but so does the alternative (to me, anyway). > > 1. The module gets compiled to bytecode, and the compiled bytecode > > gets written to the .pyc file, before any attempt to execute is. > > > > 2. The spam module gets entered into sys.modules at the *start* of its > > execution, for a number of reasons having to do with mutually > > recursive modules. > > > > 3. The execution fails on the "import eggs" but that doesn't undo the > > sys.modules assignment. > > > > 4. The second import of spam finds an incomplete module in > > sys.modyles, but doesn't know that, so returns it. > > Is there a reason not to insert step 3.5? > > 3.5. If the import fails, remove the incomplete module from sys.modules. It's hard to prove that there are no other references to it, e.g. spam could have imported bacon which imports fine and imports spam (for a later recursive call). Then a second try to import spam would import bacon again but that bacon would have a reference to the first, incomplete copy of spam. In general, if I can help it, I want to be careful that I don't have multiple module objects claiming to be the same module around, because that multiplicity will come back to bite you when it matters that they are the same. Also, deleting the evidence makes it harder to inspect the smoking remains in a debugger. --Guido van Rossum (home page: http://www.python.org/~guido/) From andy at reportlab.com Sun Feb 11 10:18:55 2001 From: andy at reportlab.com (Andy Robinson) Date: Sun, 11 Feb 2001 09:18:55 -0000 Subject: [Python-Dev] Removing the implicit str() call from printing API In-Reply-To: 
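A minimal sketch of the behaviour described above, reusing the spam/eggs example from earlier in the thread; the manual del is only an illustration of what Ka-Ping's "step 3.5" would automate, and, as Guido points out, it is only safe when nothing else holds a reference to the half-imported module:

    import sys

    try:
        import spam                       # spam.py dies on its "import eggs"
    except ImportError:
        print dir(sys.modules['spam'])    # the partial module is still registered
        del sys.modules['spam']           # manual cleanup -- safe only if no
                                          # other module kept a reference

    # A later "import spam" will now re-execute spam.py from the top
    # (and will fail again unless eggs has become importable).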
                              Message-ID: 
                              > Open questions: > > - If an encoding is specified, should file.read() then > always return Unicode objects? > > - If an encoding is specified, should file.write() only > accept Unicode objects and not bytestrings? > > - Is the encoding attribute mutable? (I would prefer not, > but then how to apply an encoding to sys.stdout?) Right now, codecs.open returns an instance of codecs.StreamReaderWriter, not a native file object. It has methods that look like the ones on a file, but they tpically accept or return Unicode strings instead of binary ones. This feels right to me and is what Java does; if you want to switch encoding on sys.stdout, you are not really doing anything to the file object, just switching the wrapper you use. There is much discussion on the i18n sig about 'unifying' binary and Unicode strings at the moment. > Side question: i noticed that the Lib/encodings directory supports > quite a few code pages, including Greek, Russian, but there are no > ISO-2022 CJK or JIS codecs. Is this just because no one felt like > writing one, or is there a reason not to include one? It seems to > me it might be nice to include some codecs for the most common CJK > encodings -- that recent note on the popularity of Python in Korea > comes to mind. There have been 3 contributions to Asian codecs on the i18n sig in the last six months (pythoncodecs.sourceforge.net) one C, two J and one K - but some authors are uncomfortable with Python-style licenses. They need tying together into one integrated package with a test suite. After a 5-month-long project which tied me up, I have finally started ooking at this. The general feeling was that the Asian codecs package should be an optional download, but if we can get them fully tested and do some compression magic it would be nice to get them in the box one day. - Andy Robinson From tim.one at home.com Sun Feb 11 10:20:35 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 11 Feb 2001 04:20:35 -0500 Subject: [Python-Dev] Python 2.1 release schedule In-Reply-To: <20010210104342.A20657@thyrsus.com> Message-ID: 
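A rough sketch of the wrapper approach Andy describes above, built from the existing codecs.lookup() API; the choice of UTF-8 and the explicit write() call (print still goes through the implicit str() that this thread wants to remove) are illustrative assumptions:

    import sys, codecs

    tokyo_uni = u'\u6771\u4eac'          # "Tokyo" as a Unicode object

    # codecs.lookup() returns (encoder, decoder, StreamReader, StreamWriter);
    # wrapping sys.stdout in the StreamWriter encodes Unicode on the way out.
    enc, dec, Reader, Writer = codecs.lookup('utf-8')
    wrapped = Writer(sys.stdout)
    wrapped.write(tokyo_uni + u'\n')     # UTF-8 bytes reach the real stdout

    # Once print passes Unicode objects straight through to file.write(),
    # assigning sys.stdout = wrapped would make "print tokyo_uni" work too.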
                              [Guido] > - New schema for .pyc magic number? (Eric, Tim) [Eric] > It looked to me like Tim had a good scheme, but he never specified > the latter (integrity-check) part of the header). Heh -- I stopped after the first 4 bytes! Didn't intend to do more (the first 4 are the hardest <0.25 wink>). Was hoping Ping would rework his ideas into the framework /F suggested (next 4 bytes is a timestamp, then a new marshal type containing "everything else"). I doubt that can make it in for 2.1, though, unless someone works intensely on it this week. rules-me-out-as-it's-not-a-crisis-until-2002-ly y'rs - tim From tim.one at home.com Sun Feb 11 10:20:37 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 11 Feb 2001 04:20:37 -0500 Subject: [Python-Dev] Python 2.1 release schedule In-Reply-To: <14980.51791.171007.616771@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: 
                              Other issues: + Make "global x" textually following any reference to x (in the same scope) a compile-time error. Unclear whether def f(): global x global x is an error under that rule (i.e., does appearance in a global stmt constitute "a reference"?). Ditto for def f(): global x, x My opinion: declarations aren't references, and redundant declarations don't hurt (so "no, not an error" to both). Change Ref Man accordingly (i.e., this plugs a hole in the *language* defn, it's not just a question of implementation accident du jour anymore). + Spew warning for "import *" and "exec" at function scope, or change Ref Man to spell out when this is and isn't guaranteed to work. Guido appeared to agree with both of those. From mal at lemburg.com Sun Feb 11 15:33:39 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Sun, 11 Feb 2001 15:33:39 +0100 Subject: [Python-Dev] .pyc magic (Python 2.1 release schedule) References: 
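A small illustration of the rule proposed above, under the reading that declarations are not references and redundant declarations are harmless (the function bodies are made up):

    # Would become a compile-time error: "global x" textually follows a
    # reference to x in the same scope.
    #
    #     def f():
    #         print x
    #         global x

    # Still legal under "declarations aren't references", merely redundant:
    def g():
        global x
        global x
        x = 1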
                              Message-ID: <3A86A2C3.1A64E0B0@lemburg.com> Tim Peters wrote: > > [Guido] > > - New schema for .pyc magic number? (Eric, Tim) > > [Eric] > > It looked to me like Tim had a good scheme, but he never specified > > the latter (integrity-check) part of the header). > > Heh -- I stopped after the first 4 bytes! Didn't intend to do more (the > first 4 are the hardest <0.25 wink>). Was hoping Ping would rework his > ideas into the framework /F suggested (next 4 bytes is a timestamp, then a > new marshal type containing "everything else"). > > I doubt that can make it in for 2.1, though, unless someone works intensely > on it this week. Just a side-note: the flags for e.g. -U ought to also provide a way to store the encoding used by the compiler and perhaps even the compiler version/name. Don't think it's a good idea to put this into 2.1, though, since it needs a PEP :-) -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mwh21 at cam.ac.uk Sun Feb 11 17:23:25 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 11 Feb 2001 16:23:25 +0000 Subject: [Python-Dev] test_inspect fails again: segfault in compile In-Reply-To: Ka-Ping Yee's message of "Sat, 10 Feb 2001 18:20:30 -0800 (PST)" References: 
                              Message-ID: 
                              Ka-Ping Yee 
                              writes: > After further testing, it seems to come down to this: > > Python 2.1a2 (#22, Feb 10 2001, 16:15:14) > [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 > Type "copyright", "credits" or "license" for more information. > >>> def spam(a, b): pass > ... > >>> def spam(a=3, b): pass > ... > SyntaxError: non-default argument follows default argument > >>> def spam(a=3, b=4): pass > ... > >>> def spam(a, (b,)): pass > ... > >>> def spam(a=3, (b,)): pass > ... > Segmentation fault (core dumped) > > Python 2.1a2 (#22, Feb 10 2001, 16:15:14) > [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 > Type "copyright", "credits" or "license" for more information. > >>> def spam(a=3, (b,)=(4,)): pass > ... > Segmentation fault (core dumped) > Try this: Index: compile.c =================================================================== RCS file: /cvsroot/python/python/dist/src/Python/compile.c,v retrieving revision 2.162 diff -c -r2.162 compile.c *** compile.c 2001/02/09 22:55:26 2.162 --- compile.c 2001/02/11 16:19:02 *************** *** 4629,4635 **** for (j = 0; j <= complex; j++) { c = CHILD(n, j); if (TYPE(c) == COMMA) ! c = CHILD(n, ++j); if (TYPE(CHILD(c, 0)) == LPAR) symtable_params_fplist(st, CHILD(c, 1)); } --- 4629,4637 ---- for (j = 0; j <= complex; j++) { c = CHILD(n, j); if (TYPE(c) == COMMA) ! c = CHILD(n, ++j); ! else if (TYPE(c) == EQUAL) ! c = CHILD(n, j += 3); if (TYPE(CHILD(c, 0)) == LPAR) symtable_params_fplist(st, CHILD(c, 1)); } Clearly there should be a test for this - where? test_extcall isn't really appropriate, but I can't think of a better place. Maybe it should be renamed to test_funcall.py and then a test for this can go in. Cheers, M. -- Some people say that a monkey would bang out the complete works of Shakespeare on a typewriter give an unlimited amount of time. In the meantime, what they would probably produce is a valid sendmail configuration file. -- Nicholas Petreley From thomas at xs4all.net Sun Feb 11 23:12:36 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Sun, 11 Feb 2001 23:12:36 +0100 Subject: [Python-Dev] dl module In-Reply-To: 
                              ; from akuchlin@mems-exchange.org on Fri, Feb 09, 2001 at 02:35:26PM -0500 References: 
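A minimal check for the signatures reported above (not an actual file from the test suite -- where such a test belongs is exactly the question Michael raises); the forms mixing default arguments with tuple unpacking are the ones that used to crash symtable_params():

    cases = [
        "def spam(a, (b,)): pass",
        "def spam(a=3, (b,)=(4,)): pass",
        "def spam(a, b, c, d=3, (e, (f,))=(4, (5,)), *g, **h): pass",
    ]
    for src in cases:
        compile(src, "<test case>", "exec")   # must compile without crashing
    print "ok"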
                              Message-ID: <20010211231236.A4924@xs4all.nl> On Fri, Feb 09, 2001 at 02:35:26PM -0500, Andrew Kuchling wrote: > The dl module isn't automatically compiled by setup.py, and at least > one patch on SourceForge adds it. > Question: should it be compiled as a standard module? Using it can, > according to the comments, cause core dumps if you're not careful. -1. The dl module is not just crashy, it's also potentially dangerous. And the chance of the setup.py attempt to add it working on most platforms is low at best -- 'manual' dynamic linking is about as portable as threads ;-P -- Thomas Wouters 
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From tim.one at home.com Mon Feb 12 01:08:37 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 11 Feb 2001 19:08:37 -0500 Subject: [Python-Dev] Cool link Message-ID: 
                              Mentioned on c.l.py: http://cseng.aw.com/book/related/0,3833,0805311912+20,00.html This is the full text of "Advanced Programming Language Design", available online a chapter at a time in PDF format. Chapter 2 (Control Structures) has a nice intro to coroutines in Simula and iterators in CLU, including a funky implementation of the latter via C macros that assumes you can get away with longjmp()'ing "up the stack" (i.e., jumping back into a routine that has already been longjmp()'ed out of). Also an intro to continuations in Io: CLU iterators are truly elegant. They are clear and expressive. They provide a single, uniform way to program all loops. They can be implemented efficiently on a single stack. ... Io continuations provide a lot of food for thought. They spring from an attempt to gain utter simplicity in a programming language. They seem to be quite expressive, but they suffer from a lack of clarity. No matter how often I have stared at the examples of Io programming, I have always had to resort to traces to figure out what is happening. I think they are just too obscure to ever be valuable. Of course in the handful of other languages that support them, continuations are a wizard-level implementation hook for building nicer abstractions. In Io you can't even write a loop without manipulating continuations explicitly. takes-all-kinds-ly y'rs - tim From thomas at xs4all.net Mon Feb 12 01:42:52 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Mon, 12 Feb 2001 01:42:52 +0100 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src Makefile.pre.in,1.14,1.15 In-Reply-To: 
                              ; from jhylton@users.sourceforge.net on Fri, Feb 09, 2001 at 02:22:20PM -0800 References: 
                              Message-ID: <20010212014251.B4924@xs4all.nl> On Fri, Feb 09, 2001 at 02:22:20PM -0800, Jeremy Hylton wrote: > Log Message: > Relax the rules for using 'from ... import *' and exec in the presence > of nested functions. Either is allowed in a function if it contains > no defs or lambdas or the defs and lambdas it contains have no free > variables. If a function is itself nested and has free variables, > either is illegal. Wow. Thank you, Jeremy, I'm very happy with that! It's even better than I dared hope for, since it means *most* lambdas (the simple ones that don't reference globals) won't break functions using 'from .. import *', and the ones that do reference globals can be fixed by doing 'global_var=global_var' in the lambda argument list ( -- maybe we should put that in the docs ?) +1-on-suffering-fools-a-whole-release-before-punishing-them-for-it-ly y'rs, -- Thomas Wouters 
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From greg at cosc.canterbury.ac.nz Mon Feb 12 02:05:54 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Mon, 12 Feb 2001 14:05:54 +1300 (NZDT) Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src Makefile.pre.in,1.14,1.15 In-Reply-To: <20010212014251.B4924@xs4all.nl> Message-ID: <200102120105.OAA05106@s454.cosc.canterbury.ac.nz> Jeremy Hylton: > Relax the rules for using 'from ... import *' and exec in the presence > of nested functions. Either is allowed in a function if it contains > no defs or lambdas or the defs and lambdas it contains have no free > variables. Seems to me the rules could be relaxed even further than that. Simply say that if an exec or import-* introduces any new names into an intermediate scope, then tough luck, they won't be visible to any nested functions. Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg at cosc.canterbury.ac.nz +--------------------------------------+ From tim.one at home.com Mon Feb 12 05:58:48 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 11 Feb 2001 23:58:48 -0500 Subject: [Python-Dev] PEPS, version control, release intervals In-Reply-To: <14976.5900.472169.467422@nem-srvr.stsci.edu> Message-ID: 
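A tiny sketch of the default-argument capture Thomas mentions above ("global_var=global_var" in the lambda argument list), which keeps the lambda free of free variables and therefore compatible with 'from ... import *' in the enclosing function; the helper is made up:

    def make_adders(ns):
        from string import *                  # permitted: the lambda below has
                                              # no free variables
        return [(lambda x, n=n: x + n)        # n is captured as a default
                for n in ns]                  # argument, not via the scope

    print make_adders([1, 2, 3])[0](10)       # -> 11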
                              [Paul Barrett] > ... > I think people are moving to 2.0, but not at the rate of > keeping-up with the current release cycle. It varies by individual. > By the time 2/3 of them have installed 2.0, 2.1 will be released. No idea. Perhaps it's 60%, perhaps 90%, perhaps 10% -- we have no way to tell. FWIW, we almost never see a bug report against 1.5.2 anymore, and bug reports are about the only hard feedback we get. > So what's the point of installing 2.0, when a few weeks later, > you have to install 2.1? Overlooking that you don't have to install anything, the point also varies by individual, from new-feature envy to finally getting some 1.5.2 bug off your back. > The situation at our institution is a good indicator of this: 2.0 > becomes the default this week. Despite you challenging them with "what's the point?" 
                              
                              ? Your organization's adoption schedule need not have anything in common with Python's release schedule, and it sounds like your organization moves slowly enough that you may want to skip 2.1 and wait for 2.2. Fine by me! Do you see harm in that? It's not like we're counting on upgrade fees to fund the next round of development. From guido at digicool.com Mon Feb 12 15:53:30 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 12 Feb 2001 09:53:30 -0500 Subject: [Python-Dev] Python 2.1 release schedule In-Reply-To: Your message of "Sun, 11 Feb 2001 04:20:37 EST." 
                              
                              References: 
                              
                              Message-ID: <200102121453.JAA06774@cj20424-a.reston1.va.home.com> > Other issues: > > + Make "global x" textually following any reference to x (in the > same scope) a compile-time error. Unclear whether > > def f(): > global x > global x > > is an error under that rule (i.e., does appearance in a global > stmt constitute "a reference"?). Ditto for > > def f(): > global x, x > > My opinion: declarations aren't references, and redundant > declarations don't hurt (so "no, not an error" to both). > > Change Ref Man accordingly (i.e., this plugs a hole in the > *language* defn, it's not just a question of implementation > accident du jour anymore). Agreed. > + Spew warning for "import *" and "exec" at function scope, or > change Ref Man to spell out when this is and isn't guaranteed > to work. Ah, yes! A warning! That would be great! > Guido appeared to agree with both of those. Can't recall when we discussed these, but yes, after some introspection I still appear to agree. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Mon Feb 12 15:59:11 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 12 Feb 2001 09:59:11 -0500 Subject: [Python-Dev] Removing the implicit str() call from printing API In-Reply-To: Your message of "Sat, 10 Feb 2001 23:43:37 +0100." <3A85C419.99EDCF14@lemburg.com> References: <3A83F7DA.A94AB88E@lemburg.com> <3A85377B.BC6EAB9B@lemburg.com> <010f01c09361$8ff82910$e46940d5@hagrid> <200102101403.JAA27043@cj20424-a.reston1.va.home.com> <3A854EA6.B8A8F7E2@lemburg.com> <200102101949.OAA28167@cj20424-a.reston1.va.home.com> <3A85C419.99EDCF14@lemburg.com> Message-ID: <200102121459.JAA06804@cj20424-a.reston1.va.home.com> > > > Ok, I'll postpone this for 2.2 then... don't want to disappoint > > > our BDFL ;-) > > > > The alternative would be for you to summarize why the proposed change > > can't possibly break code, this late in the 2.1 release game. :-) > > Well, the only code it could possibly break is code which > > 1. expects a unique string object as argument What does this mean? Code that checks whether its argument "is" a well-known string? > 2. uses the s# parser marker and is used with an Unicode object > containing non-ASCII characters > > Unfortunately, I'm not sure about how much code is out there > which assumes 1. cStringIO.c is one example and given its > heritage, there probably is a lot more in the Zope camp ;-) I still don't have a clear idea of what changes you propose, but I'm confident we'll get to that after 2.1 is release. :-) > > > Perhaps we should rethink the whole complicated printing machinery > > > in Python for 2.2 and come up with a more generic solution to the > > > problem of letting to-be-printed objects pass through to the > > > stream objects ?! > > > > Yes, please! I'd love it if you could write up a PEP that analyzes > > the issues and proposes a solution. (Without an analysis of the > > issues, there's not much point in proposing a solution, IMO.) > > Ok... on the plane to the conference, maybe. That's cool. It's amazing how much email a face-to-face meeting can be worth! > > > Note that this is needed in order to be able to redirect sys.stdout > > > to a codec which then converts Unicode to some external encoding. > > > Currently this is not possible due to the implicit str() call in > > > PyObject_Print(). > > > > Excellent. I agree that it's a shame that Unicode I/O is so hard at > > the moment. 
> > Since this is what we're after here, we might as well consider > possibilities to get the input side of things equally in line > with the codec idea, e.g. what would happen if .read() returns > a Unicode object ? That seems much less problematic, since there are no system APIs that need to be changed. Code that can deal with Unicode will be happy. Other code may break. Ideally, code that doesn't know how to deal with Unicode won't break if the Unicode-encoded input in fact only contains ASCII. --Guido van Rossum (home page: http://www.python.org/~guido/) From jeremy at alum.mit.edu Mon Feb 12 16:33:00 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Mon, 12 Feb 2001 10:33:00 -0500 (EST) Subject: [Python-Dev] Re: Fatal scoping error from the twilight zone In-Reply-To: 
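The redirection MAL describes above looks roughly like this, as a 2.x-era sketch only; on the interpreters being discussed, the implicit str() in PyObject_Print is exactly what keeps 'print' from handing the original Unicode object to the wrapper:

    import sys, codecs

    # codecs.lookup() returns (encoder, decoder, stream_reader, stream_writer);
    # wrapping stdout in the stream writer gives a file-like object whose
    # write() accepts Unicode and emits UTF-8 bytes.
    utf8_writer = codecs.lookup("utf-8")[3]
    sys.stdout = utf8_writer(sys.__stdout__)

    sys.stdout.write(u"\u20ac\n")   # fine: write() sees the Unicode object
    # print u"\u20ac"               # the problem case: on 2.0/2.1 this goes
                                    # through an implicit str() first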
                              
                              References: 
                              
                              Message-ID: <14984.556.138950.289857@w221.z064000254.bwi-md.dsl.cnc.net> I can reproduce the problem, but I think the only solution is to add a section to the ref manual explaining that only the letters a, e, f, i, m, n, q, u, v, and y are legal in that position. In other words, I'm still trying to figure out what is happening. Jeremy From jeremy at alum.mit.edu Mon Feb 12 17:01:59 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Mon, 12 Feb 2001 11:01:59 -0500 (EST) Subject: [Python-Dev] Re: Fatal scoping error from the twilight zone In-Reply-To: <14984.556.138950.289857@w221.z064000254.bwi-md.dsl.cnc.net> References: 
                              
                              <14984.556.138950.289857@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <14984.2295.460544.871532@w221.z064000254.bwi-md.dsl.cnc.net> The bug was easy to fix after all. I figured the problem had to be related to dictionary traversal, because that was the only sensible explanation for why the specific letter mattered; different letters have different hash values, so the dictionary ends up storing names in a different order. The problem, fixed in rev. 2.163 of compile.c, was caused by iterating over a dictionary using PyDict_Next() and updating it at the same time. The updates are now deferred until the iteration is done. Jeremy From guido at digicool.com Mon Feb 12 17:12:41 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 12 Feb 2001 11:12:41 -0500 Subject: [Python-Dev] Re: Fatal scoping error from the twilight zone In-Reply-To: Your message of "Mon, 12 Feb 2001 11:01:59 EST." <14984.2295.460544.871532@w221.z064000254.bwi-md.dsl.cnc.net> References: 
                              
                              <14984.556.138950.289857@w221.z064000254.bwi-md.dsl.cnc.net> <14984.2295.460544.871532@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <200102121612.LAA07332@cj20424-a.reston1.va.home.com> > The problem, fixed in rev. 2.163 of compile.c, was caused by iterating > over a dictionary using PyDict_Next() and updating it at the same > time. The updates are now deferred until the iteration is done. Ha! This is excellent anecdotal evidence that "for key in dict", if we ever introduce it, should disallow updates of the dict while in the loop! --Guido van Rossum (home page: http://www.python.org/~guido/) From akuchlin at cnri.reston.va.us Mon Feb 12 17:28:08 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Mon, 12 Feb 2001 11:28:08 -0500 Subject: [Python-Dev] Cool link In-Reply-To: 
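The same discipline reads naturally at the Python level too; a small sketch (the dictionary and the pruning rule are invented for illustration): collect the changes during the loop and apply them only once the iteration is finished, rather than mutating the dict mid-walk.

    symbols = {"spam": 3, "eggs": 0, "ham": 7}

    # Deferring the mutation: deleting keys while walking the dict is what
    # bit compile.c, so remember what to remove and do it after the loop.
    doomed = []
    for name, refcount in symbols.items():
        if refcount == 0:
            doomed.append(name)
    for name in doomed:
        del symbols[name]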
                              
                              ; from tim.one@home.com on Sun, Feb 11, 2001 at 07:08:37PM -0500 References: 
                              
                              Message-ID: <20010212112808.C3637@thrak.cnri.reston.va.us> On Sun, Feb 11, 2001 at 07:08:37PM -0500, Tim Peters wrote: >are a wizard-level implementation hook for building nicer abstractions. In >Io you can't even write a loop without manipulating continuations >explicitly. Note that, as Finkel mentions somewhere near the end of the book, Io was never actually implemented. (The linked list example is certainly head-exploding, I must say...) --amk From gvwilson at ca.baltimore.com Mon Feb 12 17:46:18 2001 From: gvwilson at ca.baltimore.com (Greg Wilson) Date: Mon, 12 Feb 2001 11:46:18 -0500 Subject: [Python-Dev] Set and Iterator BOFs Message-ID: <000901c09513$52ade820$770a0a0a@nevex.com> Barbara Fuller at Foretec has set up two mailing lists: Iterator-BOF at python9.org (for March 6) Set-BOF at python9.org (for March 7) for discussing admin related to these BOFs. If you are planning to attend, please send mail to the list, so that she can plan room allocation, make sure we get seated first for lunch, etc. Greg From guido at digicool.com Mon Feb 12 17:57:35 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 12 Feb 2001 11:57:35 -0500 Subject: [Python-Dev] Set and Iterator BOFs In-Reply-To: Your message of "Mon, 12 Feb 2001 11:46:18 EST." <000901c09513$52ade820$770a0a0a@nevex.com> References: <000901c09513$52ade820$770a0a0a@nevex.com> Message-ID: <200102121657.LAA07606@cj20424-a.reston1.va.home.com> > Barbara Fuller at Foretec has set up two mailing lists: > > Iterator-BOF at python9.org (for March 6) > Set-BOF at python9.org (for March 7) > > for discussing admin related to these BOFs. If you are > planning to attend, please send mail to the list, so that > she can plan room allocation, make sure we get seated first > for lunch, etc. Assuming these aren't mailman lists, how does one subscribe? Or are these just aliases that go to a fixed recipient (e.g. you or Barbara)? --Guido van Rossum (home page: http://www.python.org/~guido/) From gvwilson at ca.baltimore.com Mon Feb 12 18:14:02 2001 From: gvwilson at ca.baltimore.com (Greg Wilson) Date: Mon, 12 Feb 2001 12:14:02 -0500 Subject: [Python-Dev] re: cool link In-Reply-To: 
                              
                              Message-ID: <000b01c09517$3283f8b0$770a0a0a@nevex.com> > From: "Tim Peters" 
                              
                              > > Mentioned on c.l.py: > > http://cseng.aw.com/book/related/0,3833,0805311912+20,00.html > > This is the full text of "Advanced Programming Language > Design", available online a chapter at a time in PDF format. Greg Wilson: From gvwilson at ca.baltimore.com Mon Feb 12 18:17:07 2001 From: gvwilson at ca.baltimore.com (Greg Wilson) Date: Mon, 12 Feb 2001 12:17:07 -0500 Subject: [Python-Dev] re: Set and Iterator BOFs In-Reply-To: 
                              
                              Message-ID: <000c01c09517$a0f8f2f0$770a0a0a@nevex.com> > > Greg Wilson > > Barbara Fuller at Foretec has set up two mailing lists: > > > > Iterator-BOF at python9.org (for March 6) > > Set-BOF at python9.org (for March 7) > > > > for discussing admin related to these BOFs. > Guido van Rossum: > Assuming these aren't mailman lists, how does one subscribe? Or are > these just aliases that go to a fixed recipient (e.g. you or Barbara)? The latter --- these are for Barbara's convenience, so that she can get a feel for how many people will need to be hustled through lunch. Thanks, Greg p.s. I have set up http://groups.yahoo.com/group/python-iter and http://groups.yahoo.com/group/python-sets; Guido, would you prefer discussion of sets and iterators to be moved to these lists, or to stay on python-dev? From guido at digicool.com Mon Feb 12 18:24:32 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 12 Feb 2001 12:24:32 -0500 Subject: [Python-Dev] re: Set and Iterator BOFs In-Reply-To: Your message of "Mon, 12 Feb 2001 12:17:07 EST." <000c01c09517$a0f8f2f0$770a0a0a@nevex.com> References: <000c01c09517$a0f8f2f0$770a0a0a@nevex.com> Message-ID: <200102121724.MAA07893@cj20424-a.reston1.va.home.com> > p.s. I have set up http://groups.yahoo.com/group/python-iter and > http://groups.yahoo.com/group/python-sets; Guido, would you prefer > discussion of sets and iterators to be moved to these lists, or to > stay on python-dev? Let's move these to egroups for now. --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Mon Feb 12 22:01:07 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 12 Feb 2001 16:01:07 -0500 Subject: [Python-Dev] Python 2.1 release schedule In-Reply-To: <200102121453.JAA06774@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              [Guido, on making "global x" an error sometimes, and warning on "import * / exec" sometimes ] > Can't recall when we discussed these, but yes, after some > introspection I still appear to agree. Heh heh. Herewith your entire half of the discussion 
                              
                              : From: guido at cj20424-a.reston1.va.home.com Sent: Friday, February 09, 2001 3:12 PM To: Tim Peters Cc: Jeremy Hylton Subject: Re: [Python-Dev] RE: global, was Re: None assigment Agreed. --Guido van Rossum (home page: http://www.python.org/~guido/) This probably wasn't enough detail for Jeremy to act on, but was enough for me to complete channeling you 
                              
                              . The tail end of the msg to which you replied was: +1 on making this ["global x" sometimes] an error now. And if 2.1 is relaxed to again allow "import *" at function scope in some cases, either that should at least raise a warning, or the Ref Man should be changed to say that's a defined use of the language. not-often-you-see-5-quoted-lines-each-begin-with-a-2-character- thing-ly y'rs - tim From akuchlin at mems-exchange.org Mon Feb 12 22:26:42 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Mon, 12 Feb 2001 16:26:42 -0500 Subject: [Python-Dev] Unit testing (again) Message-ID: 
                              
                              I was pleased to see that the 2.1 release schedule lists "unit testing" as one of the open issues. How is this going to be decided? Voting? BDFL fiat? --amk From guido at digicool.com Mon Feb 12 22:37:00 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 12 Feb 2001 16:37:00 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: Your message of "Mon, 12 Feb 2001 16:26:42 EST." 
                              
                              References: 
                              
Message-ID: <200102122137.QAA09818@cj20424-a.reston1.va.home.com> > I was pleased to see that the 2.1 release schedule lists "unit > testing" as one of the open issues. How is this going to be decided? > Voting? BDFL fiat? BDFL fiat: most likely we'll be integrating PyUnit, whose author thinks this is a great idea. We'll be extending it to reduce the amount of boilerplate you have to type for new tests, and to optionally support the style of testing that Quixote's unit test package favors. This style (where the tests are given as string literals) seems to be really unpopular with most people I've spoken to, but we're going to support it anyhow because there may also be cases where it's appropriate. I'm not sure, however, how much we'll get done for 2.1; maybe we'll just integrate the current PyUnit CVS tree. --Guido van Rossum (home page: http://www.python.org/~guido/) From tismer at tismer.com Mon Feb 12 22:48:58 2001 From: tismer at tismer.com (Christian Tismer) Date: Mon, 12 Feb 2001 22:48:58 +0100 Subject: [Python-Dev] Cool link References:
                              
Message-ID: <3A885A4A.E1AB42FF@tismer.com> Tim Peters wrote: > > Mentioned on c.l.py: > > http://cseng.aw.com/book/related/0,3833,0805311912+20,00.html > > This is the full text of "Advanced Programming Language Design", available > online a chapter at a time in PDF format. > > Chapter 2 (Control Structures) has a nice intro to coroutines in Simula and > iterators in CLU, including a funky implementation of the latter via C > macros that assumes you can get away with longjmp()'ing "up the stack" > (i.e., jumping back into a routine that has already been longjmp()'ed out > of). Also an intro to continuations in Io: > > CLU iterators are truly elegant. They are clear and expressive. > They provide a single, uniform way to program all loops. They > can be implemented efficiently on a single stack. > ... > Io continuations provide a lot of food for thought. They spring > from an attempt to gain utter simplicity in a programming > language. They seem to be quite expressive, but they suffer > from a lack of clarity. No matter how often I have stared at > the examples of Io programming, I have always had to resort to > traces to figure out what is happening. I think they are just > too obscure to ever be valuable. Yes, this is a nice and readable text. But, the latter paragraph shows that the author is able to spell continuations without understanding them. Well, probably he does understand them, but his readers don't. At least this paragraph shows that he has an idea: """ Given that continuations are very powerful, why are they not a part of every language? Why do they not replace the conventional mechanisms of control structure? First, continuations are extremely confusing. The examples given in this section are almost impossible to understand without tracing, and even then, the general flow of control is lost in the details of procedure calls and parameter passing. With experience, programmers might become comfortable with them; however, continuations are so similar to gotos (with the added complexity of parameters) that they make it difficult to structure programs. """ I could understand the examples without tracing, and they were by no means confusing, but very clear. I believe the above message comes from a stack-educated brain (as we almost are) which is about to get the point, but is still not there. > Of course in the handful of other languages that support them, continuations > are a wizard-level implementation hook for building nicer abstractions. In > Io you can't even write a loop without manipulating continuations > explicitly. What is your message? Do you want me to react? Well, here is the expected reaction, nothing new. I have already given up pushing continuations for Python; not because continuations are wrong, but because they are too powerful for most needs and too simple (read "obscure") for most programmers. I will provide native implementations of coroutines & co in one or two months (sponsored work), and continuation support will be conditionally compiled into Stackless. I still regard them useful for education (Raphael Finkel would argue differently after playing with Python continuations), but their support should not go into the Python standard. I'm currently splitting the compromises in ceval.c into continuation-related and not. My claim that this makes up 10 percent of the code or less seems to hold. chewing-on-the-red-herring-ly y'rs - chris -- Christian Tismer :^)
                              
                              Mission Impossible 5oftware : Have a break! Take a ride on Python's Kaunstr. 26 : *Starship* http://starship.python.net 14163 Berlin : PGP key -> http://wwwkeys.pgp.net PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF where do you want to jump today? http://www.stackless.com From Jason.Tishler at dothill.com Mon Feb 12 23:08:39 2001 From: Jason.Tishler at dothill.com (Jason Tishler) Date: Mon, 12 Feb 2001 17:08:39 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: 
                              
                              ; from tim.one@home.com on Mon, Feb 05, 2001 at 04:01:49PM -0500 References: <20010205122721.J812@dothill.com> 
                              
                              Message-ID: <20010212170839.F281@dothill.com> [Sorry for letting this thread hang, but I'm back from paternity leave so I will be more responsive now. Well, at least between normal business hours that is.] On Mon, Feb 05, 2001 at 04:01:49PM -0500, Tim Peters wrote: > Basic sanity requires that Python do the same > thing on *all* case-insensitive case-preserving filesystems, to the fullest > extent possible. Python's DOS/Windows behavior has priority by a decade. > I'm deadly opposed to making a special wart for Cygwin (or the Mac), but am > in favor of changing it on Windows too. May be if we can agree on how import should behave, then we will have a better chance of determining the best way to implement it sans warts? So, along these lines I propose that import from a file behave the same on both case-sensitive and case-insensitive/case-preserving filesystems. This will help to maximize portability between platforms like UNIX, Windows, and Mac. Unfortunately, something like the PYTHONCASEOK caveat still needs to be preserved for case-destroying filesystems. Any feedback is appreciated -- I'm just trying to help get closure on this issue by Beta 1. Thanks, Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler at dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From akuchlin at cnri.reston.va.us Mon Feb 12 23:18:00 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Mon, 12 Feb 2001 17:18:00 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <200102122137.QAA09818@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Feb 12, 2001 at 04:37:00PM -0500 References: 
                              
                              <200102122137.QAA09818@cj20424-a.reston1.va.home.com> Message-ID: <20010212171800.D3900@thrak.cnri.reston.va.us> On Mon, Feb 12, 2001 at 04:37:00PM -0500, Guido van Rossum wrote: >I'm not sure however how much we'll get done for 2.1; maybe we'll just >integrate the current PyUnit CVS tree. I'd really like to have unit testing in 2.1 that I can actually use. PyUnit as it stands is clunky enough that I'd still use the Quixote framework in my code; the advantage of being included with Python would not overcome its disadvantages for me. Have you got a list of desired changes? And should the changes be discussed on python-dev or the PyUnit list? --amk From guido at digicool.com Mon Feb 12 23:21:14 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 12 Feb 2001 17:21:14 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: Your message of "Mon, 12 Feb 2001 17:18:00 EST." <20010212171800.D3900@thrak.cnri.reston.va.us> References: 
                              
                              <200102122137.QAA09818@cj20424-a.reston1.va.home.com> <20010212171800.D3900@thrak.cnri.reston.va.us> Message-ID: <200102122221.RAA11205@cj20424-a.reston1.va.home.com> > I'd really like to have unit testing in 2.1 that I can actually use. > PyUnit as it stands is clunky enough that I'd still use the Quixote > framework in my code; the advantage of being included with Python > would not overcome its disadvantages for me. Have you got a list of > desired changes? And should the changes be discussed on python-dev or > the PyUnit list? I'm just reporting what I've heard on our group meetings. Fred Drake and Jeremy Hylton are in charge of getting this done. You can catch their ear on python-dev; I'm not sure about the PyUnit list. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Mon Feb 12 23:23:21 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 12 Feb 2001 17:23:21 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: Your message of "Mon, 12 Feb 2001 17:08:39 EST." <20010212170839.F281@dothill.com> References: <20010205122721.J812@dothill.com> 
                              
                              <20010212170839.F281@dothill.com> Message-ID: <200102122223.RAA11224@cj20424-a.reston1.va.home.com> > [Sorry for letting this thread hang, but I'm back from paternity leave > so I will be more responsive now. Well, at least between normal business > hours that is.] > > On Mon, Feb 05, 2001 at 04:01:49PM -0500, Tim Peters wrote: > > Basic sanity requires that Python do the same > > thing on *all* case-insensitive case-preserving filesystems, to the fullest > > extent possible. Python's DOS/Windows behavior has priority by a decade. > > I'm deadly opposed to making a special wart for Cygwin (or the Mac), but am > > in favor of changing it on Windows too. > > May be if we can agree on how import should behave, then we will have > a better chance of determining the best way to implement it sans warts? > So, along these lines I propose that import from a file behave the same > on both case-sensitive and case-insensitive/case-preserving filesystems. > This will help to maximize portability between platforms like UNIX, > Windows, and Mac. Unfortunately, something like the PYTHONCASEOK > caveat still needs to be preserved for case-destroying filesystems. > > Any feedback is appreciated -- I'm just trying to help get closure on > this issue by Beta 1. Tim has convinced me that the proper rules are simple: - If PYTHONCASEOK is set, use the first file found with a case-insensitive match. - If PYTHONCASEOK is not set, and the file system is case-preserving, ignore (rather than bail out at) files that don't have the proper case. Tim is in charge of cleaning up the code, but he'll need help for the Cygwin and MacOSX parts. --Guido van Rossum (home page: http://www.python.org/~guido/) From jeremy at alum.mit.edu Mon Feb 12 22:59:06 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Mon, 12 Feb 2001 16:59:06 -0500 (EST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <200102122221.RAA11205@cj20424-a.reston1.va.home.com> References: 
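In Python terms the two rules Guido states come out something like the sketch below -- a hypothetical helper, not the real import.c logic, and ignoring packages, bytecode files and the rest of the import dance:

    import os

    def match_module_file(directory, modname):
        want = modname + ".py"
        folded = None
        for entry in os.listdir(directory):
            if entry == want:
                return entry                    # exact-case match always wins
            if folded is None and entry.lower() == want.lower():
                folded = entry                  # first case-insensitive candidate
        if folded is not None and os.environ.get("PYTHONCASEOK"):
            return folded                       # honored only when PYTHONCASEOK is set
        return None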
                              
                              <200102122137.QAA09818@cj20424-a.reston1.va.home.com> <20010212171800.D3900@thrak.cnri.reston.va.us> <200102122221.RAA11205@cj20424-a.reston1.va.home.com> Message-ID: <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "GvR" == Guido van Rossum 
                              
                              writes: [Andrew writes:] >> I'd really like to have unit testing in 2.1 that I can actually >> use. PyUnit as it stands is clunky enough that I'd still use the >> Quixote framework in my code; the advantage of being included >> with Python would not overcome its disadvantages for me. Have >> you got a list of desired changes? And should the changes be >> discussed on python-dev or the PyUnit list? GvR> I'm just reporting what I've heard on our group meetings. Fred GvR> Drake and Jeremy Hylton are in charge of getting this done. GvR> You can catch their ear on python-dev; I'm not sure about the GvR> PyUnit list. I'm happy to discuss on either venue, or to hash it in private email. What specific features do you need? Perhaps Steve will be interested in including them in PyUnit. Jeremy From akuchlin at cnri.reston.va.us Tue Feb 13 00:10:10 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Mon, 12 Feb 2001 18:10:10 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Mon, Feb 12, 2001 at 04:59:06PM -0500 References: 
                              
                              <200102122137.QAA09818@cj20424-a.reston1.va.home.com> <20010212171800.D3900@thrak.cnri.reston.va.us> <200102122221.RAA11205@cj20424-a.reston1.va.home.com> <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010212181010.A4267@thrak.cnri.reston.va.us> On Mon, Feb 12, 2001 at 04:59:06PM -0500, Jeremy Hylton wrote: >I'm happy to discuss on either venue, or to hash it in private email. >What specific features do you need? Perhaps Steve will be interested >in including them in PyUnit. * Useful shorthands for common asserts (testing that two sequences are the same ignoring order, for example) * A way to write test cases that doesn't bring the test method to a halt if something raises an unexpected exception * Coverage support (though that would also entail Skip's coverage code getting into 2.1) --amk From jeremy at alum.mit.edu Tue Feb 13 00:16:19 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Mon, 12 Feb 2001 18:16:19 -0500 (EST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <20010212181010.A4267@thrak.cnri.reston.va.us> References: 
                              
                              <200102122137.QAA09818@cj20424-a.reston1.va.home.com> <20010212171800.D3900@thrak.cnri.reston.va.us> <200102122221.RAA11205@cj20424-a.reston1.va.home.com> <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net> <20010212181010.A4267@thrak.cnri.reston.va.us> Message-ID: <14984.28355.75830.330790@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "AMK" == Andrew Kuchling 
                              
                              writes: AMK> On Mon, Feb 12, 2001 at 04:59:06PM -0500, Jeremy Hylton wrote: >> I'm happy to discuss on either venue, or to hash it in private >> email. What specific features do you need? Perhaps Steve will >> be interested in including them in PyUnit. AMK> * Useful shorthands for common asserts (testing that two AMK> sequences are the same ignoring order, for example) We can write a collection of helper functions for this, right? self.verify(sequenceElementsThatSame(l1, l2)) AMK> * A way to write test cases that doesn't bring the test method AMK> to a halt if something raises an unexpected exception I'm not sure how to achieve this or why you would want the test to continue. I know that Quixote uses test cases in strings, but it's the thing I like the least about Quixote unittest. Can you think of an alternate mechanism? Maybe I'd be less opposed if I could understand why it's desirable to continue executing a method where something has already failed unexpectedly. After the first exception, something is broken and needs to be fixed, regardless of whether subsequent lines of code work. AMK> * Coverage support (though that would also entail Skip's AMK> coverage code getting into 2.1) Shouldn't be hard. Skip's coverage code was in 2.0; we might need to move it from Tools/script to the library, though. Jeremy From tim.one at home.com Tue Feb 13 01:14:51 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 12 Feb 2001 19:14:51 -0500 Subject: [Python-Dev] Cool link In-Reply-To: <3A885A4A.E1AB42FF@tismer.com> Message-ID: 
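One way the helper Jeremy sketches could look (spelled slightly differently here; this is a guess at an implementation, not PyUnit code): the two sequences hold the same elements, ignoring order but respecting how many times each element occurs.

    def sequence_elements_same(seq1, seq2):
        # Compare as multisets without requiring the elements to be sortable
        # or hashable: remove each item of seq1 from a copy of seq2.
        remaining = list(seq2)
        if len(seq1) != len(remaining):
            return 0
        for item in seq1:
            try:
                remaining.remove(item)
            except ValueError:
                return 0
        return 1

    # e.g. inside a test method:
    #     assert sequence_elements_same(result, [2, 1, 1])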
                              
[Christian Tismer] > ... > What is your message? Do you want me to react?

I had no msg other than to share a cool link I thought people here would find interesting. While Greg Wilson, e.g., complained about the C macro implementation of CLU iterators in his review, that's exactly the kind of thing that should be *interesting* to Python-Dev'ers: a long and gentle explanation of an actual implementation. I expect that most people here still have no clear idea how generators (let alone continuations) can be implemented, or why they'd be useful.

Here's a function to compute the number of distinct unlabelled binary trees with n nodes (these are the so-called Catalan numbers -- the book didn't mention that):

    cache = {0: 1}

    def count(n):
        val = cache.get(n, 0)
        if val:
            return val
        for leftsize in range(n):
            val += count(leftsize) * count(n-1 - leftsize)
        cache[n] = val
        return val

Here's one to generate all the distinct unlabelled binary trees with n nodes:

    def genbin(n):
        if n == 0:
            return [()]
        result = []
        for leftsize in range(n):
            for left in genbin(leftsize):
                for right in genbin(n-1 - leftsize):
                    result.append((left, right))
        return result

For even rather small values of n, genbin(n) builds lists of impractical size. Trying to build a return-one-at-a-time iterator form of genbin() today is surprisingly difficult. In CLU or Icon, you just throw away the "result = []" and "return result" lines, and replace result.append with "suspend" (Icon) or "yield" (CLU). Exactly the same kind of algorithm is needed to generate all ways of parenthesizing an n-term expression. I did that in ABC once, in a successful attempt to prove via exhaustion that raise-complex-to-integer-power in the presence of signed zeroes is ill-defined under IEEE-754 arithmetic rules. While nobody here cares about that, the 754 committee took it seriously indeed. For me, I'm still just trying to get Python to address all the things I found unbearable in ABC <0.7 wink>.

so-if-there's-a-msg-here-it's-unique-to-me-ly y'rs - tim

From michel at digicool.com Tue Feb 13 03:06:25 2001 From: michel at digicool.com (Michel Pelletier) Date: Mon, 12 Feb 2001 18:06:25 -0800 (PST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <20010212181010.A4267@thrak.cnri.reston.va.us> Message-ID:
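The one-at-a-time form Tim says is hard to build becomes mechanical once a CLU-style "yield" exists; a sketch in that later style (generators were not yet part of Python when this was posted, so this is a hypothetical rewrite, not code from the thread):

    def gen_genbin(n):
        # Same algorithm as genbin(), but each tree is handed back as soon
        # as it is built instead of being appended to one huge result list.
        if n == 0:
            yield ()
            return
        for leftsize in range(n):
            for left in gen_genbin(leftsize):
                for right in gen_genbin(n-1 - leftsize):
                    yield (left, right)

Because the trees come out lazily, you can peel off the first few even for values of n where the full list would never fit in memory.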
                              
                              On Mon, 12 Feb 2001, Andrew Kuchling wrote: > * A way to write test cases that doesn't bring the test method to a halt if > something raises an unexpected exception I'm not sure what you mean by this, but Jim F. recently sent this email around internally: """ Unit tests are cool. One problem is that after you find a problem, it's hard to debug it, because unittest catches the exceptions. I added debug methods to TestCase and TestSuite so that you can run your tests under a debugger. When you are ready to debug a test failure, just call debug() on your test suite or case under debugger control. I checked this change into our CVS and send the auther of PyUnit a message. Jim """ I don't think it adressed your comment, but it is an interesting related feature. -Michel From tim.one at home.com Tue Feb 13 03:05:51 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 12 Feb 2001 21:05:51 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <200102122221.RAA11205@cj20424-a.reston1.va.home.com> Message-ID: 
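Used from a script, the debug() hook Jim describes comes down to something like this (the test case is a made-up example): debug() runs setUp, the test method and tearDown without the framework swallowing the exception, so a debugger gets the original traceback.

    import pdb, sys, unittest

    class DictTest(unittest.TestCase):
        def setUp(self):
            self.t = {2: 2}
        def test_missing_key(self):
            self.t[1]      # raises KeyError; under run() this would only be
                           # recorded in the TestResult

    if __name__ == "__main__":
        try:
            DictTest("test_missing_key").debug()
        except Exception:
            pdb.post_mortem(sys.exc_info()[2])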
                              
                              Note that doctest.py is part of the 2.1 std library. If you've never used it, pretend I didn't tell you that, and look at the new std library module difflib.py. Would you even guess there *are* unit tests in there? Here's the full text of the new std test test_difflib.py: import doctest, difflib doctest.testmod(difflib, verbose=1) I will immodestly claim that if doctest is sufficient for your testing purposes, you're never going to find anything easier or faster or more natural to use (and, yes, if an unexpected exception is raised, it doesn't stop the rest of the tests from running -- it's in the very nature of "unit tests" that an error in one unit should not prevent other unit tests from running). practicing-for-a-marketing-career-ly y'rs - tim From Jason.Tishler at dothill.com Tue Feb 13 04:36:38 2001 From: Jason.Tishler at dothill.com (Jason Tishler) Date: Mon, 12 Feb 2001 22:36:38 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: <200102122223.RAA11224@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Feb 12, 2001 at 05:23:21PM -0500 References: <20010205122721.J812@dothill.com> 
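For anyone who hasn't tried it, the whole doctest idea fits in a toy module like this (a made-up example, not anything from the std library): paste an interpreter session into a docstring, and doctest re-runs it and complains if the output changes.

    """Toy module with doctest-style examples in its docstring.

    >>> double(3)
    6
    >>> double('ab')
    'abab'
    """

    def double(x):
        return x + x

    if __name__ == "__main__":
        import doctest, sys
        doctest.testmod(sys.modules[__name__])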
                              
                              <20010212170839.F281@dothill.com> <200102122223.RAA11224@cj20424-a.reston1.va.home.com> Message-ID: <20010212223638.A228@dothill.com> Tim, On Mon, Feb 12, 2001 at 05:23:21PM -0500, Guido van Rossum wrote: > Tim is in charge of cleaning up the code, but he'll need help for the > Cygwin and MacOSX parts. I'm willing to help develop, test, etc. the Cygwin stuff. Just let me know how I can assist you. Thanks, Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler at dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From akuchlin at cnri.reston.va.us Tue Feb 13 04:52:23 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Mon, 12 Feb 2001 22:52:23 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <14984.28355.75830.330790@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Mon, Feb 12, 2001 at 06:16:19PM -0500 References: 
                              
                              <200102122137.QAA09818@cj20424-a.reston1.va.home.com> <20010212171800.D3900@thrak.cnri.reston.va.us> <200102122221.RAA11205@cj20424-a.reston1.va.home.com> <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net> <20010212181010.A4267@thrak.cnri.reston.va.us> <14984.28355.75830.330790@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010212225223.B21640@newcnri.cnri.reston.va.us> On Mon, Feb 12, 2001 at 06:16:19PM -0500, Jeremy Hylton wrote: >We can write a collection of helper functions for this, right? > self.verify(sequenceElementsThatSame(l1, l2)) Pretty much; nothing too difficult. >Maybe I'd be less opposed if I could understand why it's desirable to >continue executing a method where something has already failed >unexpectedly. After the first exception, something is broken and In this style of unit test, you have setup() and shutdown() methods that create and destroy the test objects afresh for each test case, so cases aren't big long skeins of assertions that will all break given a single failure. Instead they're more like 1) call a method that changes an object's state, 2) call accessors or get attributes to check invariants are what you expect. It can be useful to know that .get_parameter_value() raises an exception while .get_parameter_type() doesn't, or whatever. --amk From chrism at digicool.com Tue Feb 13 06:29:01 2001 From: chrism at digicool.com (Chris McDonough) Date: Tue, 13 Feb 2001 00:29:01 -0500 Subject: [Python-Dev] Unit testing (again) References: 
                              
<200102122137.QAA09818@cj20424-a.reston1.va.home.com> <20010212171800.D3900@thrak.cnri.reston.va.us> <200102122221.RAA11205@cj20424-a.reston1.va.home.com> <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net> <20010212181010.A4267@thrak.cnri.reston.va.us> <14984.28355.75830.330790@w221.z064000254.bwi-md.dsl.cnc.net> <20010212225223.B21640@newcnri.cnri.reston.va.us> Message-ID: <025e01c0957d$e9c66d80$0e01a8c0@kurtz>

Andrew,

Here's a sample of PyUnit stuff that I think illustrates what you're asking for...

    from unittest import TestCase, makeSuite, TextTestRunner

    class Test(TestCase):

        def setUp(self):
            self.t = {2:2}

        def tearDown(self):
            del self.t

        def testGetItemFails(self):
            self.assertRaises(KeyError, self._getitemfail)

        def _getitemfail(self):
            return self.t[1]

        def testGetItemSucceeds(self):
            assert self.t[2] == 2

    def main():
        suite = makeSuite(Test, 'test')
        runner = TextTestRunner()
        runner.run(suite)

    if __name__ == '__main__':
        main()

Execution happens like this:

    call setUp()
    call testGetItemFails()
    print test results
    call tearDown()
    call setUp()
    call testGetItemSucceeds()
    print test results
    call tearDown()
    end

Isn't this almost exactly what you want? Or am I completely missing the point?

----- Original Message ----- From: "Andrew Kuchling"
                              
                              To: 
                              
                              Sent: Monday, February 12, 2001 10:52 PM Subject: Re: [Python-Dev] Unit testing (again) > On Mon, Feb 12, 2001 at 06:16:19PM -0500, Jeremy Hylton wrote: > >We can write a collection of helper functions for this, right? > > self.verify(sequenceElementsThatSame(l1, l2)) > > Pretty much; nothing too difficult. > > >Maybe I'd be less opposed if I could understand why it's desirable to > >continue executing a method where something has already failed > >unexpectedly. After the first exception, something is broken and > > In this style of unit test, you have setup() and shutdown() methods that > create and destroy the test objects afresh for each test case, so cases > aren't big long skeins of assertions that will all break given a single > failure. Instead they're more like 1) call a method that changes an > object's state, 2) call accessors or get attributes to check invariants are > what you expect. It can be useful to know that .get_parameter_value() > raises an exception while .get_parameter_type() doesn't, or whatever. > > --amk > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > From tim.one at home.com Tue Feb 13 06:34:23 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 13 Feb 2001 00:34:23 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: <20010212223638.A228@dothill.com> Message-ID: 
                              
                              [Jason Tishler] > I'm willing to help develop, test, etc. the Cygwin stuff. Just let me > know how I can assist you. Jason, doesn't the current CVS Python already do what you want? I thought that was the case, due to the HAVE_DIRENT_H #ifdef'ery Steven introduced. If not, scream at me. My intent is to get rid of the HAVE_DIRENT_H #ifdef *test*, but not the code therein, and add new versions of MatchFilename that work for systems (like regular old Windows) that don't support opendir() natively. I didn't think Cygwin needed that -- scream if that's wrong. However, even if you are happy with that (& I won't hurt it), sooner or later you're going to try accessing a case-destroying network filesystem from Cygwin, so I believe you need more code to honor PYTHONCASEOK too (it's the only chance anyone has in the face of a case-destroying system). Luckily, with a new child in the house, you have plenty of time to think about this, since you won't be sleeping again for about 3 years anyway 
                              
                              . From pf at artcom-gmbh.de Tue Feb 13 08:17:03 2001 From: pf at artcom-gmbh.de (Peter Funk) Date: Tue, 13 Feb 2001 08:17:03 +0100 (MET) Subject: doctest and Python 2.1 (was RE: [Python-Dev] Unit testing (again)) In-Reply-To: 
                              
                              from Tim Peters at "Feb 12, 2001 9: 5:51 pm" Message-ID: 
                              
                              Hi, Tim Peters: > Note that doctest.py is part of the 2.1 std library. If you've never used [...] > I will immodestly claim that if doctest is sufficient for your testing > purposes, you're never going to find anything easier or faster or more > natural to use (and, yes, if an unexpected exception is raised, it doesn't > stop the rest of the tests from running -- it's in the very nature of "unit > tests" that an error in one unit should not prevent other unit tests from > running). > > practicing-for-a-marketing-career-ly y'rs - tim [a satisfied customer reports:] I like doctest very much. I'm using it for our company projects a lot. This is a very valuable tool. However Pings latest changes, which turned 'foobar\012' into 'foobar\n' and '\377\376\345' into '\xff\xfe\xe5' has broken some of the doctests in our software. Since we have to keep our code compatible with Python 1.5.2 for at least one, two or may be three more years, it isn't obvious to me how to fix this. I've spend some thoughts about a patch to doctest fooling the string printing output back to the 1.5.2 behaviour, but didn't get around to it until now. :-( Regards, Peter -- Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260 office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen) From fredrik at effbot.org Tue Feb 13 09:17:58 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Tue, 13 Feb 2001 09:17:58 +0100 Subject: [Python-Dev] Unit testing (again) References: 
                              
                              <200102122137.QAA09818@cj20424-a.reston1.va.home.com><20010212171800.D3900@thrak.cnri.reston.va.us><200102122221.RAA11205@cj20424-a.reston1.va.home.com><14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net><20010212181010.A4267@thrak.cnri.reston.va.us> <14984.28355.75830.330790@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <01c201c09595$7bc09be0$e46940d5@hagrid> Jeremy wrote: > I know that Quixote uses test cases in strings, but it's the thing I > like the least about Quixote unittest like whitespace indentation, it's done that way for a reason. > I'm not sure how to achieve this or why you would want the test to > continue. same reason you want your compiler to report more than just the first error -- so you can see patterns in the test script's behaviour, so you can fix more than one bug at a time, or fix the bugs in an order that suits you and not the framework, etc. (for some of our components, we're using a framework that can continue to run the test even if the tested program dumps core. trust me, that has saved us a lot of time...) > After the first exception, something is broken and needs to be > fixed, regardless of whether subsequent lines of code work. jeremy, that's the kind of comment I would have expected from a manager, not from a programmer who has done lots of testing. Cheers /F From stephen_purcell at yahoo.com Tue Feb 13 09:26:17 2001 From: stephen_purcell at yahoo.com (Steve Purcell) Date: Tue, 13 Feb 2001 09:26:17 +0100 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Mon, Feb 12, 2001 at 04:59:06PM -0500 References: 
                              
                              <200102122137.QAA09818@cj20424-a.reston1.va.home.com> <20010212171800.D3900@thrak.cnri.reston.va.us> <200102122221.RAA11205@cj20424-a.reston1.va.home.com> <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010213092617.B5558@freedom.puma-ag.com> Jeremy Hylton wrote: > >>>>> "GvR" == Guido van Rossum 
                              
                              writes: > > [Andrew writes:] > >> I'd really like to have unit testing in 2.1 that I can actually > >> use. PyUnit as it stands is clunky enough that I'd still use the > >> Quixote framework in my code; the advantage of being included > >> with Python would not overcome its disadvantages for me. Have > >> you got a list of desired changes? And should the changes be > >> discussed on python-dev or the PyUnit list? > > GvR> I'm just reporting what I've heard on our group meetings. Fred > GvR> Drake and Jeremy Hylton are in charge of getting this done. > GvR> You can catch their ear on python-dev; I'm not sure about the > GvR> PyUnit list. > > I'm happy to discuss on either venue, or to hash it in private email. > What specific features do you need? Perhaps Steve will be interested > in including them in PyUnit. Fine by private e-mail, though it would be nice if some of the discussions are seen by the PyUnit list because it's a representative community of regular users who probably have a good idea of what makes sense for them. If somebody would like to suggest changes, I can look into how they might get done. Also, I'd love to see what I can do to allay AMK's 'clunkiness' complaints! :-) Best wishes, -Steve -- Steve Purcell, Pythangelist "Life must be simple if *I* can do it" -- me From fredrik at effbot.org Tue Feb 13 10:35:30 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Tue, 13 Feb 2001 10:35:30 +0100 Subject: [Python-Dev] Unit testing (again) References: 
                              
                              Message-ID: <002301c095a0$4fe5cc60$e46940d5@hagrid> tim wrote: > I will immodestly claim that if doctest is sufficient for your testing > purposes, you're never going to find anything easier or faster or more > natural to use you know, just having taken another look at doctest and the unit test options, I'm tempted to agree. except for the "if sufficient" part, that is -- given that you can easily run doctest on a test harness instead of the original module, it's *always* sufficient. (cannot allow tim to be 100% correct every time ;-) Cheers /F From guido at digicool.com Tue Feb 13 14:55:29 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 13 Feb 2001 08:55:29 -0500 Subject: doctest and Python 2.1 (was RE: [Python-Dev] Unit testing (again)) In-Reply-To: Your message of "Tue, 13 Feb 2001 08:17:03 +0100." 
                              
                              References: 
                              
Message-ID: <200102131355.IAA14403@cj20424-a.reston1.va.home.com> > [a satisfied customer reports:] > I like doctest very much. I'm using it for our company projects a lot. > This is a very valuable tool. > > However Pings latest changes, which turned 'foobar\012' into 'foobar\n' > and '\377\376\345' into '\xff\xfe\xe5' has broken some of the doctests > in our software. > > Since we have to keep our code compatible with Python 1.5.2 for at > least one, two or may be three more years, it isn't obvious to me > how to fix this. This is a general problem with doctest, and a general solution exists. It's the same when you have a function that returns a dictionary: you can't include the dictionary in the output, because the key order isn't guaranteed. So, instead of writing your example like this:

    >>> foo()
    {"Hermione": "hippogryph", "Harry": "broomstick"}
    >>>

you write it like this:

    >>> foo() == {"Hermione": "hippogryph", "Harry": "broomstick"}
    1
    >>>

I'll leave it as an exercise to the reader to apply this to string literals. --Guido van Rossum (home page: http://www.python.org/~guido/) From jeremy at alum.mit.edu Tue Feb 13 04:15:30 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Mon, 12 Feb 2001 22:15:30 -0500 (EST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <01c201c09595$7bc09be0$e46940d5@hagrid> References:
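Spelled out for the string case Peter ran into, the exercise comes to something like this sketch; on the interpreters in question a true comparison echoes as 1, so the expected output no longer depends on whether repr() spells the newline as \012 or \n.

    def f():
        r"""Return a line ending in a newline.

        >>> f() == 'foobar\n'
        1
        """
        return 'foobar\n'

The raw docstring keeps the backslash literal, so the pasted example survives intact.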
                              
                              <200102122137.QAA09818@cj20424-a.reston1.va.home.com> <20010212171800.D3900@thrak.cnri.reston.va.us> <200102122221.RAA11205@cj20424-a.reston1.va.home.com> <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net> <20010212181010.A4267@thrak.cnri.reston.va.us> <14984.28355.75830.330790@w221.z064000254.bwi-md.dsl.cnc.net> <01c201c09595$7bc09be0$e46940d5@hagrid> Message-ID: <14984.42706.688272.22773@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "FL" == Fredrik Lundh 
                              
                              writes: FL> Jeremy wrote: >> I know that Quixote uses test cases in strings, but it's the >> thing I like the least about Quixote unittest FL> like whitespace indentation, it's done that way for a reason. Whitespace indentation is natural and makes code easier to read. Putting little snippets of Python code in string literals passed to exec has the opposite effect. doctest is a nice middle ground, because the code snippets are in a natural setting -- an interactive interpreter setting. >> I'm not sure how to achieve this or why you would want the test >> to continue. FL> same reason you want your compiler to report more than just the FL> first error -- so you can see patterns in the test script's FL> behaviour, so you can fix more than one bug at a time, or fix FL> the bugs in an order that suits you and not the framework, etc. Python's compiler only reports one syntax error for a source file, regardless of how many it finds <0.5 wink>. >> After the first exception, something is broken and needs to be >> fixed, regardless of whether subsequent lines of code work. FL> jeremy, that's the kind of comment I would have expected from a FL> manager, not from a programmer who has done lots of testing. I don't think there's any reason to be snide. The question is one of granularity: At what level of granularity should the test framework catch exceptions and continue? I'm satisfied with the unit of testing being a method. Jeremy From Jason.Tishler at dothill.com Tue Feb 13 15:51:40 2001 From: Jason.Tishler at dothill.com (Jason Tishler) Date: Tue, 13 Feb 2001 09:51:40 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: 
                              
                              ; from tim.one@home.com on Tue, Feb 13, 2001 at 12:34:23AM -0500 References: <20010212223638.A228@dothill.com> 
                              
                              Message-ID: <20010213095140.A306@dothill.com> Tim, On Tue, Feb 13, 2001 at 12:34:23AM -0500, Tim Peters wrote: > [Jason Tishler] > > I'm willing to help develop, test, etc. the Cygwin stuff. Just let me > > know how I can assist you. Guido said that you needed help with Cygwin and MacOSX, so I was just offering my help. I know that you have the "development" in good shape so let me know if I can help with testing Cygwin or other platforms. > Jason, doesn't the current CVS Python already do what you want? Yes. > I thought > that was the case, due to the HAVE_DIRENT_H #ifdef'ery Steven introduced. > If not, scream at me. My intent is to get rid of the HAVE_DIRENT_H #ifdef > *test*, but not the code therein, and add new versions of MatchFilename that > work for systems (like regular old Windows) that don't support opendir() > natively. I didn't think Cygwin needed that -- scream if that's wrong. You are correct -- Cygwin supports opendir() et al. > However, even if you are happy with that (& I won't hurt it), I am (and thanks). > sooner or > later you're going to try accessing a case-destroying network filesystem > from Cygwin, so I believe you need more code to honor PYTHONCASEOK too (it's > the only chance anyone has in the face of a case-destroying system). Is it possible to make the PYTHONCASEOK caveat orthogonal to the platform so it can be enabled to combat case-destroying filesystems for any platform? > Luckily, with a new child in the house, you have plenty of time to think > about this, since you won't be sleeping again for about 3 years anyway > 
                              
                              . Agreed -- this is not our first so we "know." I *have* been thinking about this issue and others 24 hours a day for the last two weeks. I'm just finding it difficult to actually *do* anything with one hand and no sleep... :,) Thanks, Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler at dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From barry at digicool.com Tue Feb 13 16:00:19 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Tue, 13 Feb 2001 10:00:19 -0500 Subject: [Python-Dev] Unit testing (again) References: 
                              
                              <200102122137.QAA09818@cj20424-a.reston1.va.home.com> <20010212171800.D3900@thrak.cnri.reston.va.us> <200102122221.RAA11205@cj20424-a.reston1.va.home.com> <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net> <20010212181010.A4267@thrak.cnri.reston.va.us> <14984.28355.75830.330790@w221.z064000254.bwi-md.dsl.cnc.net> <01c201c09595$7bc09be0$e46940d5@hagrid> <14984.42706.688272.22773@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <14985.19459.571737.979488@anthem.wooz.org> >>>>> "JH" == Jeremy Hylton 
                              
                              writes: JH> Whitespace indentation is natural and makes code easier to JH> read. Putting little snippets of Python code in string JH> literals passed to exec has the opposite effect. Especially because requiring the snippets to be in strings means editing them with a Python-aware editor becomes harder. JH> doctest is a nice middle ground, because the code snippets are JH> in a natural setting -- an interactive interpreter setting. And at least there, I can for the most part just cut-and-paste the output of my interpreter session into the docstrings. -Barry From fredrik at pythonware.com Tue Feb 13 17:32:00 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Tue, 13 Feb 2001 17:32:00 +0100 Subject: [Python-Dev] Unit testing (again) References: 
                              
                              <200102122137.QAA09818@cj20424-a.reston1.va.home.com><20010212171800.D3900@thrak.cnri.reston.va.us><200102122221.RAA11205@cj20424-a.reston1.va.home.com><14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net><20010212181010.A4267@thrak.cnri.reston.va.us><14984.28355.75830.330790@w221.z064000254.bwi-md.dsl.cnc.net><01c201c09595$7bc09be0$e46940d5@hagrid><14984.42706.688272.22773@w221.z064000254.bwi-md.dsl.cnc.net> <14985.19459.571737.979488@anthem.wooz.org> Message-ID: <014801c095da$80577bc0$e46940d5@hagrid> barry wrote: > Especially because requiring the snippets to be in strings means > editing them with a Python-aware editor becomes harder. well, you don't have to put *all* your test code inside the test calls... try using them as asserts instead: do something do some calculations do some more calculations self.test_bool("result == 10") > And at least there, I can for the most part just cut-and-paste the > output of my interpreter session into the docstrings. cutting and pasting from the interpreter into the test assertion works just fine... Cheers /F From fredrik at pythonware.com Tue Feb 13 17:58:14 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Tue, 13 Feb 2001 17:58:14 +0100 Subject: [Python-Dev] Unit testing (again) References: 
                              
                              <200102122137.QAA09818@cj20424-a.reston1.va.home.com><20010212171800.D3900@thrak.cnri.reston.va.us><200102122221.RAA11205@cj20424-a.reston1.va.home.com><14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net><20010212181010.A4267@thrak.cnri.reston.va.us><14984.28355.75830.330790@w221.z064000254.bwi-md.dsl.cnc.net><01c201c09595$7bc09be0$e46940d5@hagrid> <14984.42706.688272.22773@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <016401c095de$28dca100$e46940d5@hagrid> jeremy wrote: > FL> like whitespace indentation, it's done that way for a reason. > > Whitespace indentation is natural and makes code easier to read. > Putting little snippets of Python code in string literals passed to > exec has the opposite effect. Only if you're using large snippets. ...just like whitespace indentation makes things harder it you're mixing tabs and spaces, or prints a file with the wrong tab setting, or cuts and pastes code between editors with different tab settings. In both cases, the solution is simply "don't do that" > doctest is a nice middle ground, because the code snippets are in a > natural setting -- an interactive interpreter setting. They're still inside a string... > Python's compiler only reports one syntax error for a source file, > regardless of how many it finds <0.5 wink>. Sure, but is that because user testing has shown that Python programmers (unlike e.g. C programmers) prefer to see only one bug at a time, or because it's convenient to use exceptions also for syntax errors? Would a syntax-checking editor be better if it only showed one syntax error, even if it found them all? > > After the first exception, something is broken and needs to be > > fixed, regardless of whether subsequent lines of code work. > > FL> jeremy, that's the kind of comment I would have expected from a > FL> manager, not from a programmer who has done lots of testing. > > I don't think there's any reason to be snide. Well, I first wrote "taken out of context, that's the kind of comment" but then I noticed that it wasn't really taken out of context. And in full context, it still looks a bit arrogant: why would Andrew raise this issue if *he* didn't want more granularity? ::: But having looked everything over one more time, and having ported a small test suite to doctest.py, I'm now -0 on adding more test frame- works to 2.1. If it's good enough for tim... (and -1 if adding more frameworks means that I have to use them ;-). Cheers /F From jeremy at alum.mit.edu Tue Feb 13 06:29:35 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 13 Feb 2001 00:29:35 -0500 (EST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <016401c095de$28dca100$e46940d5@hagrid> References: 
                              
                              <200102122137.QAA09818@cj20424-a.reston1.va.home.com> <20010212171800.D3900@thrak.cnri.reston.va.us> <200102122221.RAA11205@cj20424-a.reston1.va.home.com> <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net> <20010212181010.A4267@thrak.cnri.reston.va.us> <14984.28355.75830.330790@w221.z064000254.bwi-md.dsl.cnc.net> <01c201c09595$7bc09be0$e46940d5@hagrid> <14984.42706.688272.22773@w221.z064000254.bwi-md.dsl.cnc.net> <016401c095de$28dca100$e46940d5@hagrid> Message-ID: <14984.50751.27663.64349@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "FL" == Fredrik Lundh 
                              
                              writes: >> > After the first exception, something is broken and needs to be >> > fixed, regardless of whether subsequent lines of code work. >> FL> jeremy, that's the kind of comment I would have expected from a FL> manager, not from a programmer who has done lots of testing. >> >> I don't think there's any reason to be snide. FL> Well, I first wrote "taken out of context, that's the kind of FL> comment" but then I noticed that it wasn't really taken out of FL> context. FL> And in full context, it still looks a bit arrogant: why would FL> Andrew raise this issue if *he* didn't want more granularity? I hope it's simple disagreement and not arrogance. I do not agree with him (or you) on a particular technical issue -- whether particular expressions should be stuffed into string literals in order to recover from exceptions they raise. There's a tradeoff between readability and granularity and I prefer readability. I am surprised that you are impugning my technical abilities (manager, not programmer) or calling me arrogant because I don't agree. I think I am should be entitled to my wrong opinion. Jeremy From akuchlin at cnri.reston.va.us Tue Feb 13 18:29:35 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Tue, 13 Feb 2001 12:29:35 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <14984.50751.27663.64349@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Tue, Feb 13, 2001 at 12:29:35AM -0500 References: <200102122137.QAA09818@cj20424-a.reston1.va.home.com> <20010212171800.D3900@thrak.cnri.reston.va.us> <200102122221.RAA11205@cj20424-a.reston1.va.home.com> <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net> <20010212181010.A4267@thrak.cnri.reston.va.us> <14984.28355.75830.330790@w221.z064000254.bwi-md.dsl.cnc.net> <01c201c09595$7bc09be0$e46940d5@hagrid> <14984.42706.688272.22773@w221.z064000254.bwi-md.dsl.cnc.net> <016401c095de$28dca100$e46940d5@hagrid> <14984.50751.27663.64349@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010213122935.G4334@thrak.cnri.reston.va.us> On Tue, Feb 13, 2001 at 12:29:35AM -0500, Jeremy Hylton wrote: >I hope it's simple disagreement and not arrogance. I do not agree I trust not. :) My primary concern is that the tests are quickly readable, because they're also a form of documentation (hopefully not the only one though). I have enough problems debugging the actual code without having to debug a test suite. Consider the example Chris posted, which features the snippet: def testGetItemFails(self): self.assertRaises(KeyError, self._getitemfail) def _getitemfail(self): return self.t[1] I don't think this form, requiring an additional small helper method, is any clearer than self.test_exc('self.t[1]', KeyError); two extra lines and the loss of locality. Put tests for 3 or 4 different exceptions into testGetItemFails and you'd have several helper functions to trace through. For simple value tests, this is less important; the difference between test_val( 'self.db.get_user("FOO")', None ) and test_val(None, self.db.get_user, "foo") is less. /F's observation that doctest seems suitable for his work is interesting and surprising; I'll spend some more time looking at it. 
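For comparison, PyUnit's assertRaises() also accepts the callable plus its
arguments directly, which avoids the small helper method without resorting to
a string (the mapping self.t below is only a stand-in for whatever object is
under test):

import unittest

class GetItemTest(unittest.TestCase):
    def setUp(self):
        self.t = {}   # hypothetical object under test; any mapping will do

    def testGetItemFails(self):
        # No helper needed: pass the callable and its argument to
        # assertRaises and let it do the calling.
        self.assertRaises(KeyError, self.t.__getitem__, 1)

if __name__ == "__main__":
    unittest.main()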
--amk

From tommy at ilm.com Tue Feb 13 18:59:32 2001
From: tommy at ilm.com (Flying Cougar Burnette)
Date: Tue, 13 Feb 2001 09:59:32 -0800 (PST)
Subject: [Python-Dev] troubling math bug under IRIX 6.5
Message-ID: <14985.29880.710719.533126@mace.lucasdigital.com>

Hey Folks,

One of these days I'll figure that SourceForge stuff out so I can submit a
real bug report, but this one is freaky enough that I thought I'd just send
it right out... from the latest CVS (as of 9:30am pacific) I run 'make test'
and get:

...
PYTHONPATH= ./python -tt ./Lib/test/regrtest.py -l
make: *** [test] Bus error (core dumped)

a quick search around shows that just importing regrtest.py seg faults, and
further simply importing random.py seg faults (which regrtest.py does).
it all boils down to this line in random.py:

NV_MAGICCONST = 4 * _exp(-0.5)/_sqrt(2.0)

and the problem can be further reduced thusly:

>>> import math
>>> 4 * math.exp(-0.5)
Bus error (core dumped)

but it isn't the math.exp that's the problem, it's multiplying the result
by 4!

(tommy at mace)/u0/tommy/pycvs/python/dist/src$ ./python
Python 2.1a2 (#2, Feb 13 2001, 09:49:17) [C] on irix6
Type "copyright", "credits" or "license" for more information.
>>> import math
>>> math.exp(1)
2.7182818284590451
>>> math.exp(2)
7.3890560989306504
>>> math.exp(-1)
0.36787944117144233
>>> math.exp(-.5)
0.60653065971263342
>>> math.exp(-0.5)
0.60653065971263342
>>> 4 * math.exp(-0.5)
Bus error (core dumped)

is it just me? any guesses what might be the cause of this?

From tim.one at home.com Tue Feb 13 20:47:54 2001
From: tim.one at home.com (Tim Peters)
Date: Tue, 13 Feb 2001 14:47:54 -0500
Subject: [Python-Dev] troubling math bug under IRIX 6.5
In-Reply-To: <14985.29880.710719.533126@mace.lucasdigital.com>
Message-ID:
                              
                              [Flying Cougar Burnette] > ... > >>> 4 * math.exp(-0.5) > Bus error (core dumped) Now let's look at the important 
                              
                              part: > (tommy at mace)/u0/tommy/pycvs/python/dist/src$ ./python > Python 2.1a2 (#2, Feb 13 2001, 09:49:17) [C] on irix6 ^^^^^ The first thing to try on any SGI box is to recompile Python with optimization turned off. After that confirms it's the compiler's fault, we can try to figure out where it's screwing up. Do either of these blow up too? >>> 4 * 0.60653065971263342 >>> 4.0 * math.exp(-0.5) Reason for asking: "NV_MAGICCONST = 4 * _exp(-0.5)/_sqrt(2.0)" is the first time random.py does either of a floating multiply or an int-to-float conversion. > is it just me? Yup. So long as you use SGI software, it always will be 
                              
                              . and-i-say-that-as-an-sgi-shareholder-ly y'rs - tim From tommy at ilm.com Tue Feb 13 21:04:28 2001 From: tommy at ilm.com (Flying Cougar Burnette) Date: Tue, 13 Feb 2001 12:04:28 -0800 (PST) Subject: [Python-Dev] troubling math bug under IRIX 6.5 In-Reply-To: 
                              
                              References: <14985.29880.710719.533126@mace.lucasdigital.com> 
                              
                              Message-ID: <14985.37461.962243.777743@mace.lucasdigital.com> Tim Peters writes: | [Flying Cougar Burnette] | > ... | > >>> 4 * math.exp(-0.5) | > Bus error (core dumped) | | Now let's look at the important 
                              
                              part: | | > (tommy at mace)/u0/tommy/pycvs/python/dist/src$ ./python | > Python 2.1a2 (#2, Feb 13 2001, 09:49:17) [C] on irix6 | ^^^^^ figgered as much... | | The first thing to try on any SGI box is to recompile Python with | optimization turned off. After that confirms it's the compiler's fault, we | can try to figure out where it's screwing up. Do either of these blow up | too? | | >>> 4 * 0.60653065971263342 | >>> 4.0 * math.exp(-0.5) yup. | | > is it just me? | | Yup. So long as you use SGI software, it always will be 
                              
. | | and-i-say-that-as-an-sgi-shareholder-ly y'rs - tim

one of these days sgi... Pow! Right to the Moon! ;)

Okay, I recompiled after blanking OPT= in Makefile and things now work.
This is where I start swearing "But, this has never happened to me before!"
and the kind, gentle response is "Don't worry, it happens to lots of
guys..." ;)

And the next step is... ?

From tim.one at home.com Tue Feb 13 21:51:35 2001
From: tim.one at home.com (Tim Peters)
Date: Tue, 13 Feb 2001 15:51:35 -0500
Subject: [Python-Dev] Unit testing (again)
In-Reply-To: <016401c095de$28dca100$e46940d5@hagrid>
Message-ID:
                              
[/F]
> But having looked everything over one more time, and having ported
> a small test suite to doctest.py, I'm now -0 on adding more test
> frameworks to 2.1. If it's good enough for tim...

I'm not sure that it is, but I have yet to make time to look at the others.
It's no secret that I love doctest, and, indeed, in 20+ years of industry
pain, it's the only testing approach I didn't drop ASAP.  I still use it for
all my stuff, and very happily.  But!  I don't do anything with the web or
GUIs etc -- I'm an algorithms guy.  Most of the stuff I work with has
clearly defined input->output relationships, and capturing an interactive
session is simply perfect both for documenting and testing such stuff.

It's also the case that I weight the "doc" part of "doctest" more heavily
than the "test" part, and when Peter or Guido say that, e.g., the reliance
on exact output match is "a problem", I couldn't disagree more strongly.
It's obvious to Guido that dict output may come in any order, but a doc
*reader* in a hurry is at best uneasy when documented output doesn't match
actual output exactly.  That's not something I'll yield on.

[Andrew]
> def testGetItemFails(self):
>     self.assertRaises(KeyError, self._getitemfail)
>
> def _getitemfail(self):
>     return self.t[1]
>
> [vs]
>
> self.test_exc('self.t[1]', KeyError)

My brain doesn't grasp either of those at first glance.  But everyone who
has used Python a week grasps this:

class C:
    def __getitem__(self, i):
        """Return the i'th item.  i==1 raises KeyError.  For example,

        >>> c = C()
        >>> c[0]
        0
        >>> c[1]
        Traceback (most recent call last):
          File "x.py", line 20, in ?
            c[1]
          File "x.py", line 14, in __getitem__
            raise KeyError("bad i: " + `i`)
        KeyError: bad i: 1
        >>> c[-1]
        -1
        """
        if i != 1:
            return i
        else:
            raise KeyError("bad i: " + `i`)

Cute: Python changed the first line of its traceback output (used to say
"Traceback (innermost last):"), and current doctest wasn't expecting that.
For *doc* purposes, it's important that the examples match what Python
actually does, so that's a bug in doctest.

From tim.one at home.com Tue Feb 13 22:04:29 2001
From: tim.one at home.com (Tim Peters)
Date: Tue, 13 Feb 2001 16:04:29 -0500
Subject: [Python-Dev] troubling math bug under IRIX 6.5
In-Reply-To: <14985.37461.962243.777743@mace.lucasdigital.com>
Message-ID:
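For readers following along, a minimal self-contained sketch of how such
docstring examples get checked (doctest.testmod() is the real, long-standing
entry point; the double() function here is invented for illustration):

import doctest

def double(x):
    """Return twice x.

    >>> double(2)
    4
    >>> double(-3)
    -6
    """
    return 2 * x

if __name__ == "__main__":
    # With no arguments, testmod() re-runs every interactive example found
    # in this module's docstrings and reports any whose output has drifted.
    doctest.testmod()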
                              
                              [Tommy turns off optimization, and all is well] >> Do either of these blow up too? >> >> >>> 4 * 0.60653065971263342 >> >>> 4.0 * math.exp(-0.5) > yup. OK. Does the first one blow up? Does the second one blow up? Or do both blow up? Fourth question: does >> 4.0 * 0.60653065971263342 blow up? > ... > And the next step is... ? Stop making me pull your teeth 
                              
                              . I'm trying to narrow down where it's screwing up. At worst, then, you can disable optimization only for that particular file, and create a tiny bug case to send off to SGI World Headquarters so they fix this someday. At best, perhaps a tiny bit of code rearrangement will unstick your compiler (I'm good at guessing what might work in that respect, but need to narrow it down to a single function within Python first), and I can check that in for 2.1. From fredrik at effbot.org Tue Feb 13 22:33:20 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Tue, 13 Feb 2001 22:33:20 +0100 Subject: [Python-Dev] Unit testing (again) References: 
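The "tiny bug case" Tim mentions could be as small as the following script,
distilled from the expressions already reported in this thread (nothing in
it is IRIX-specific; on a healthy build it simply prints the products):

import math

# The two expressions reported to crash the optimized IRIX 6.5 build.
for expr in ("4 * math.exp(-0.5)",
             "4 * 0.60653065971263342"):
    print("%s = %r" % (expr, eval(expr)))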
                              
Message-ID: <003d01c09604$a0f15520$e46940d5@hagrid>

> Cute: Python changed the first line of its traceback output (used to say
> "Traceback (innermost last):"), and current doctest wasn't expecting that.

which reminds me... are there any chance of getting a doctest
that can survives its own test suite under 1.5.2, 2.0, and 2.1?

the latest version blows up under 1.5.2 and 2.0:

*****************************************************************
Failure in example: 1/0
from line #155 of doctest
Expected: ZeroDivisionError: integer division or modulo by zero
Got: ZeroDivisionError: integer division or modulo
1 items had failures:
1 of 8 in doctest
***Test Failed*** 1 failures.

Cheers /F

From mal at lemburg.com Tue Feb 13 22:33:21 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 13 Feb 2001 22:33:21 +0100
Subject: [Python-Dev] Unit testing (again)
References:
                              
                              <003d01c09604$a0f15520$e46940d5@hagrid> Message-ID: <3A89A821.6EFC6AC9@lemburg.com> Fredrik Lundh wrote: > > > Cute: Python changed the first line of its traceback output (used to say > > "Traceback (innermost last):"), and current doctest wasn't expecting that. > > which reminds me... are there any chance of getting a doctest > that can survives its own test suite under 1.5.2, 2.0, and 2.1? > > the latest version blows up under 1.5.2 and 2.0: > > ***************************************************************** > Failure in example: 1/0 > from line #155 of doctest > Expected: ZeroDivisionError: integer division or modulo by zero > Got: ZeroDivisionError: integer division or modulo > 1 items had failures: > 1 of 8 in doctest > ***Test Failed*** 1 failures. Since exception message are not defined anywhere I'd suggest to simply ignore them in the output. About the traceback output format: how about adding some re support instead of using string.find() ?! -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From michel at digicool.com Tue Feb 13 23:39:52 2001 From: michel at digicool.com (Michel Pelletier) Date: Tue, 13 Feb 2001 14:39:52 -0800 (PST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <20010213122935.G4334@thrak.cnri.reston.va.us> Message-ID: 
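One way to keep an exception example stable across releases whose messages
differ -- a user-side workaround in the spirit of the suggestion above, not
a doctest feature (the helper name exc_name is invented here):

import sys

def exc_name(func, *args):
    """Call func(*args) and return only the name of the exception it raises.

    The example below depends on the exception class alone, never on the
    message text, so it reads the same under 1.5.2, 2.0 and 2.1:

    >>> exc_name(lambda: 1/0)
    'ZeroDivisionError'
    """
    try:
        func(*args)
    except:
        return sys.exc_info()[0].__name__
    return None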
                              
                              On Tue, 13 Feb 2001, Andrew Kuchling wrote: > Consider the example Chris posted, which features the snippet: > > def testGetItemFails(self): > self.assertRaises(KeyError, self._getitemfail) > > def _getitemfail(self): > return self.t[1] > > I don't think this form, requiring an additional small helper method, > is any clearer than self.test_exc('self.t[1]', KeyError); two extra > lines and the loss of locality. Put tests for 3 or 4 different > exceptions into testGetItemFails and you'd have several helper > functions to trace through. I'm not sure what the purpose of using a unit test to test a different unit in the same suite is. I've never used assertRaises in this way, and the small helper method seems just to illustrate your point, not an often used feature of asserting an Exception condition. More often the method you are checking for an exception comes from the thing you are testing, not the test. Maybe you have different usage patterns than I... -Michel From tim.one at home.com Tue Feb 13 22:39:08 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 13 Feb 2001 16:39:08 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <003d01c09604$a0f15520$e46940d5@hagrid> Message-ID: 
                              
                              [/F] > which reminds me... are there any chance of getting a doctest > that can survives its own test suite under 1.5.2, 2.0, and 2.1? > > the latest version blows up under 1.5.2 and 2.0: > > ***************************************************************** > Failure in example: 1/0 > from line #155 of doctest > Expected: ZeroDivisionError: integer division or modulo by zero > Got: ZeroDivisionError: integer division or modulo > 1 items had failures: > 1 of 8 in doctest > ***Test Failed*** 1 failures. Not to my mind. doctest is intentionally picky about exact matches, for reasons explained earlier. If the docs for a thing say "integer division or modulo by zero" is expected, but running it says something else, the docs are wrong and doctest's primary *purpose* is to point that out loudly. I could change the exception example to something where Python didn't gratuitously change what it prints, though 
                              
                              . OK, I'll do that. From tim.one at home.com Tue Feb 13 22:42:19 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 13 Feb 2001 16:42:19 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <3A89A821.6EFC6AC9@lemburg.com> Message-ID: 
                              
                              [MAL] > Since exception message are not defined anywhere I'd suggest > to simply ignore them in the output. Virtually nothing about Python's output is clearly defined, and for doc purposes I want to capture what Python actually does. > About the traceback output format: how about adding some > re support instead of using string.find() ?! Why? I never use regexps where simple string matches work, and neither should you 
                              
                              . From guido at digicool.com Tue Feb 13 22:45:56 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 13 Feb 2001 16:45:56 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: Your message of "Tue, 13 Feb 2001 16:39:08 EST." 
                              
                              References: 
                              
                              Message-ID: <200102132145.QAA18076@cj20424-a.reston1.va.home.com> > Not to my mind. doctest is intentionally picky about exact matches, for > reasons explained earlier. If the docs for a thing say "integer division or > modulo by zero" is expected, but running it says something else, the docs > are wrong and doctest's primary *purpose* is to point that out loudly. Of course, this is means that *if* you use doctest, all authoritative docs should be in the docstring, and not elsewhere. Which brings us back to the eternal question of how to indicate mark-up in docstrings. Is everything connected to everything? --Guido van Rossum (home page: http://www.python.org/~guido/) From michel at digicool.com Tue Feb 13 23:54:58 2001 From: michel at digicool.com (Michel Pelletier) Date: Tue, 13 Feb 2001 14:54:58 -0800 (PST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <002301c095a0$4fe5cc60$e46940d5@hagrid> Message-ID: 
                              
On Tue, 13 Feb 2001, Fredrik Lundh wrote:

> tim wrote:
> > I will immodestly claim that if doctest is sufficient for your testing
> > purposes, you're never going to find anything easier or faster or more
> > natural to use
>
> you know, just having taken another look at doctest
> and the unit test options, I'm tempted to agree.

I also agree that doctest is the bee's knees, but I don't think it is quite
as useful for us as PyUnit (for other people, I'm sure it's very useful).
One of the goals of our interface work is to associate unit tests with
interfaces.  I don't see how doctest can work well with that.  I probably
need to look at it more, but one of our end goals is to walk up to a
component, push a button, and have that component's interfaces test the
component while the system is live.  I imagine this involving a bit of
external framework at the interface level that would be pretty easy with
PyUnit.  I've only seen one example of doctest, and it looks like you run it
against an imported module.  I don't see how this helps us with our (DC's)
definition of components.

A personal issue for me is that it overloads the docstring; no biggy, but
it's just a personal nit I don't particularly like about doctest.

Another issue is documentation.  I don't know how much documentation doctest
has, but PyUnit's documentation is *superb* and there are no surprises,
which is absolutely +1.  Quixote's documentation seems very thin (please
correct me if I'm wrong).  PyUnit's documentation goes beyond just
explaining the software into explaining common patterns and unit testing
philosophies.

-Michel

From tim.one at home.com Tue Feb 13 23:13:24 2001
From: tim.one at home.com (Tim Peters)
Date: Tue, 13 Feb 2001 17:13:24 -0500
Subject: [Python-Dev] Unit testing (again)
In-Reply-To:
                              
                              Message-ID: 
                              
                              [Michel Pelletier] > ... > A personal issue for me is that it overloads the docstring, no > biggy, but it's just a personal nit I don't particularly like about > doctest. No. The docstring remains documentation. But documentation without examples generally sucks, due to the limitations of English in being precise. A sharp example can be worth 1,000 words. doctest is being used as *intended* to the extent that the embedded examples are helpful for documentation purposes. doctest then guarantees the examples continue to work exactly as advertised over time (and they don't! examples *always* get out of date, but without (something like) doctest they never get repaired). As I suggested at the start, read the docstrings for difflib.py: the examples are an integral part of the docs, and you shouldn't get any sense that they're there "just for testing" (if you do, the examples are poorly chosen, or poorly motivated in the surrounding commentary). Beyond that, doctest will also execute any code it finds in the module.__test__ dict, which maps arbitrary names to arbitrary strings. Anyone using doctest primarily as a testing framework should stuff their test strings into __test__ and leave the docstrings alone. > Another issue is documentation. I don't know how much documentation > doctest has, Look at its docstrings -- they not only explain it in detail, but contain examples of use that doctest can check 
                              
                              . From fredrik at effbot.org Tue Feb 13 23:22:50 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Tue, 13 Feb 2001 23:22:50 +0100 Subject: [Python-Dev] Unit testing (again) References: 
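Tim's __test__ hook, concretely: a small self-contained module using it
might look like the sketch below (the names add and add_basics are invented
for the example; the __test__ dict itself is a real doctest feature).

import doctest

def add(a, b):
    return a + b

# Examples that would clutter the docstrings can live in __test__ instead;
# doctest runs the interactive examples found in each string value.
__test__ = {
    "add_basics": """
>>> add(2, 2)
4
>>> add('a', 'b')
'ab'
""",
}

if __name__ == "__main__":
    doctest.testmod()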
                              
Message-ID: <008101c0960b$818e09b0$e46940d5@hagrid>

michel wrote:
> One of the goals of our interface work is to associate unit tests with
> interfaces. I don't see how doctest can work well with that. I probably
> need to look at it more, but one of our end goals is to walk up to a
> component, push a button, and have that component's interfaces test the
> component while the system is live.

My revised approach to unit testing is to use doctest to test the test
harness, not the module itself.  To handle your case, design the test to
access the component via a module global, let the "onclick" code set up that
global, and run the test script under doctest.

(I did that earlier today, and it sure worked just fine)

> Another issue is documentation. I don't know how much documentation
> doctest has, but PyUnit's documentation is *superb* and there are no
> surprises, which is absolutely +1.

No surprises?  I don't know -- my brain kind of switched off when I came to
the "passing method names as strings to the constructor" part.  Now, how
Pythonic is that on a scale?

On the other hand, I also suffer massive confusion whenever I try to read
Zope docs, so it's probably just different mind-sets ;-)

Cheers /F

From tommy at ilm.com Tue Feb 13 23:25:13 2001
From: tommy at ilm.com (Flying Cougar Burnette)
Date: Tue, 13 Feb 2001 14:25:13 -0800 (PST)
Subject: [Python-Dev] troubling math bug under IRIX 6.5
In-Reply-To:
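A short sketch of the module-global pattern /F describes (every name here --
run_live_tests, the_component, testmodule -- is invented for illustration;
only doctest.testmod() is a real API):

import doctest

def run_live_tests(component, testmodule):
    # Hand the live object to the test module through a module global,
    # then let doctest check the interactive examples written against it.
    testmodule.the_component = component
    return doctest.testmod(testmodule)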
                              
                              References: <14985.37461.962243.777743@mace.lucasdigital.com> 
                              
                              Message-ID: <14985.46047.226447.573927@mace.lucasdigital.com> sorry- BOTH blew up until I turned off optimization. now neither does. shall I turn opts back on and try a few more cases? Tim Peters writes: | [Tommy turns off optimization, and all is well] | | >> Do either of these blow up too? | >> | >> >>> 4 * 0.60653065971263342 | >> >>> 4.0 * math.exp(-0.5) | | > yup. | | OK. Does the first one blow up? Does the second one blow up? Or do both | blow up? | | Fourth question: does | | >> 4.0 * 0.60653065971263342 | | blow up? | | > ... | > And the next step is... ? | | Stop making me pull your teeth 
                              
                              . I'm trying to narrow down where it's | screwing up. At worst, then, you can disable optimization only for that | particular file, and create a tiny bug case to send off to SGI World | Headquarters so they fix this someday. At best, perhaps a tiny bit of code | rearrangement will unstick your compiler (I'm good at guessing what might | work in that respect, but need to narrow it down to a single function within | Python first), and I can check that in for 2.1. From sdm7g at virginia.edu Tue Feb 13 23:35:24 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Tue, 13 Feb 2001 17:35:24 -0500 (EST) Subject: [Python-Dev] Case sensitive import. In-Reply-To: <200102122223.RAA11224@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              On Mon, 12 Feb 2001, Guido van Rossum wrote: > Tim has convinced me that the proper rules are simple: > > - If PYTHONCASEOK is set, use the first file found with a > case-insensitive match. > > - If PYTHONCASEOK is not set, and the file system is case-preserving, > ignore (rather than bail out at) files that don't have the proper > case. > > Tim is in charge of cleaning up the code, but he'll need help for the > Cygwin and MacOSX parts. > Thanks Tim (for convincing Guido). Now I don't have to post (and you don't have to read!) my 2K word essay on why Guido's old rules were inconsistent and might have caused TEOTWAWKI if fed into the master computer. Just let me know if you need me to test anything on macosx. -- Steve M. From mal at lemburg.com Tue Feb 13 23:37:13 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Tue, 13 Feb 2001 23:37:13 +0100 Subject: [Python-Dev] Unit testing (again) References: 
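A rough Python rendering of the two rules Guido states above, purely for
illustration (the real logic lives in C in the import machinery;
find_module_file and its arguments are invented names):

import os

def find_module_file(name, directory, caseok):
    # caseok mirrors the PYTHONCASEOK environment variable.
    wanted = name + ".py"
    for fn in os.listdir(directory):
        if caseok:
            # Rule 1: accept the first case-insensitive match.
            if fn.lower() == wanted.lower():
                return fn
        else:
            # Rule 2: on a case-preserving filesystem, silently skip files
            # whose case doesn't match exactly.
            if fn == wanted:
                return fn
    return None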
                              
                              Message-ID: <3A89B719.9CDB68B@lemburg.com> Tim Peters wrote: > > [MAL] > > Since exception message are not defined anywhere I'd suggest > > to simply ignore them in the output. > > Virtually nothing about Python's output is clearly defined, and for doc > purposes I want to capture what Python actually does. But what it does write to the console changes with every release (e.g. just take the repr() changes for strings with non-ASCII data)... this simply breaks you test suite every time Writing Python programs which work on Python 1.5-2.1 which at the same time pass the doctest unit tests becomes impossible. The regression suite (and most other Python software) catches exceptions based on the exception class -- why isn't this enough for your doctest.py checks ? nit-pickling-ly, -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From jeremy at alum.mit.edu Tue Feb 13 11:47:01 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 13 Feb 2001 05:47:01 -0500 (EST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <008101c0960b$818e09b0$e46940d5@hagrid> References: 
                              
                              <008101c0960b$818e09b0$e46940d5@hagrid> Message-ID: <14985.4261.562851.935532@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "FL" == Fredrik Lundh 
                              
writes:

>> Another issue is documentation. I don't know how much
>> documentation doctest has, but PyUnit's documentation is *superb*
>> and there are no surprises, which is absolutely +1.

FL> No surprises? I don't know -- my brain kind of switched off
FL> when I came to the "passing method names as strings to the
FL> constructor" part. Now, how Pythonic is that on a scale?

I think this is one of the issues where there is widespread agreement that a
feature is needed.  The constructor should assume, in the absence of some
other instruction, that any method name that starts with 'test' should be
considered a test method.  That's about as Pythonic as it gets.

Jeremy

From guido at digicool.com Wed Feb 14 00:13:48 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 13 Feb 2001 18:13:48 -0500
Subject: [Python-Dev] Unit testing (again)
In-Reply-To: Your message of "Tue, 13 Feb 2001 17:13:24 EST."
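For context, the behaviour Jeremy asks for is essentially the convention
unittest ended up with: the loader collects every method whose name starts
with "test", so nothing is passed to the constructor as a string.  A minimal
sketch (the MathTest class is invented for illustration):

import unittest

class MathTest(unittest.TestCase):
    # Any method whose name starts with "test" is discovered automatically.
    def testAddition(self):
        self.assertEqual(1 + 1, 2)

    def testAbsoluteValue(self):
        self.assertEqual(abs(-3), 3)

if __name__ == "__main__":
    unittest.main()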
                              
                              References: 
                              
                              Message-ID: <200102132313.SAA18504@cj20424-a.reston1.va.home.com> > No. The docstring remains documentation. But documentation without > examples generally sucks, due to the limitations of English in being > precise. A sharp example can be worth 1,000 words. doctest is being used > as *intended* to the extent that the embedded examples are helpful for > documentation purposes. doctest then guarantees the examples continue to > work exactly as advertised over time (and they don't! examples *always* get > out of date, but without (something like) doctest they never get repaired). You're lucky that doctest doesn't return dictionaries! For functions that return dictionaries, it's much more natural *for documentation purposes* to write >>> book() {'Fred': 'mom', 'Ron': 'Snape'} than the necessary work-around. You may deny that's a problem, but once we've explained dictionaries to our users, we can expect them to understand that if they see instead >>> book() {'Ron': 'Snape', 'Fred': 'mom'} they will understand that that's the same thing. Writing it this way is easier to read than >>> book() == {'Ron': 'Snape', 'Fred': 'mom'} 1 I always have to look twice when I see something like that. > As I suggested at the start, read the docstrings for difflib.py: the > examples are an integral part of the docs, and you shouldn't get any sense > that they're there "just for testing" (if you do, the examples are poorly > chosen, or poorly motivated in the surrounding commentary). They are also more voluminous than I'd like the docs for difflib to be... --Guido van Rossum (home page: http://www.python.org/~guido/) From ping at lfw.org Wed Feb 14 00:11:10 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Tue, 13 Feb 2001 15:11:10 -0800 (PST) Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: 
                              
                              Message-ID: 
                              
                              In support of the argument that bad literals should raise ValueError (or a derived exception) rather than SyntaxError, Guido once said: > "Problems with literal interpretations > traditionally raise 'runtime' exceptions rather than syntax errors." This is currently true of overflowing integers and string literals, and hence it has also been so implemented for Unicode literals. But i want to propose a break with tradition, because some more recent thinking on this has led me to become firmly convinced that SyntaxError is really the right thing to do in all of these cases. The strongest reason is that a long file with a typo in a string literal somewhere in hundreds of lines of code generates only ValueError: invalid \x escape with no indication to where the error is -- not even which file! I realize this could be hacked upon and fixed, but i think it points to a general inconsistency that ought to be considered and addressed. 1. SyntaxErrors are for compile-time errors. A problem with a literal happens before the program starts running, and it is useful for me, as the programmer, to know whether the error occurred because of some computational process, possibly depending on inputs, or whether it's a permanent mistake that's literally in my source code. In other words, will a debugger do me any good? 2. SyntaxErrors pinpoint the exact location of the problem. In principle, an error is a SyntaxError if and only if you can point to an exact character position as being the cause of the problem. 3. A ValueError means "i got a value that wasn't allowed or expected here". That is not at all what is happening. There is *no* defined value at all. It's not that there was a value and it was wrong -- the value was never even brought into existence. 4. The current implementation of ValueErrors is very unhelpful about what to do about an invalid literal, as explained in the example above. A SyntaxError would be much more useful. I hope you will agree with me that solving only #4 by changing ValueErrors so they behave a little more like SyntaxErrors in certain particular situations isn't the best solution. Also, switching to SyntaxError is likely to break very few things. You can't depend on catching a SyntaxError, precisely because it's a compile-time error. No one could possibly be using "except ValueError" to try to catch invalid literals in their code; that usage, just like "except SyntaxError:", makes sense only when someone is using "eval" or "exec" to interpret code that was generated or read from input. In fact, i bet switching to SyntaxError would actually make some code of the form "try: eval ... except SyntaxError" work better, since the single except clause would catch all possible compilation problems with the input to eval. -- ?!ng Happiness comes more from loving than being loved; and often when our affection seems wounded it is is only our vanity bleeding. To love, and to be hurt often, and to love again--this is the brave and happy life. -- J. E. Buchrose From guido at digicool.com Wed Feb 14 00:32:15 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 13 Feb 2001 18:32:15 -0500 Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: Your message of "Tue, 13 Feb 2001 15:11:10 PST." 
                              
                              References: 
                              
                              Message-ID: <200102132332.SAA18696@cj20424-a.reston1.va.home.com> > In support of the argument that bad literals should raise ValueError > (or a derived exception) rather than SyntaxError, Guido once said: > > > "Problems with literal interpretations > > traditionally raise 'runtime' exceptions rather than syntax errors." > > This is currently true of overflowing integers and string literals, > and hence it has also been so implemented for Unicode literals. > > But i want to propose a break with tradition, because some more recent > thinking on this has led me to become firmly convinced that SyntaxError > is really the right thing to do in all of these cases. > > The strongest reason is that a long file with a typo in a string > literal somewhere in hundreds of lines of code generates only > > ValueError: invalid \x escape > > with no indication to where the error is -- not even which file! > I realize this could be hacked upon and fixed, but i think it points > to a general inconsistency that ought to be considered and addressed. > > 1. SyntaxErrors are for compile-time errors. A problem with > a literal happens before the program starts running, and > it is useful for me, as the programmer, to know whether > the error occurred because of some computational process, > possibly depending on inputs, or whether it's a permanent > mistake that's literally in my source code. In other words, > will a debugger do me any good? > > 2. SyntaxErrors pinpoint the exact location of the problem. > In principle, an error is a SyntaxError if and only if you > can point to an exact character position as being the cause > of the problem. > > 3. A ValueError means "i got a value that wasn't allowed or > expected here". That is not at all what is happening. > There is *no* defined value at all. It's not that there > was a value and it was wrong -- the value was never even > brought into existence. > > 4. The current implementation of ValueErrors is very unhelpful > about what to do about an invalid literal, as explained > in the example above. A SyntaxError would be much more useful. > > I hope you will agree with me that solving only #4 by changing > ValueErrors so they behave a little more like SyntaxErrors in > certain particular situations isn't the best solution. > > Also, switching to SyntaxError is likely to break very few things. > You can't depend on catching a SyntaxError, precisely because it's > a compile-time error. No one could possibly be using "except ValueError" > to try to catch invalid literals in their code; that usage, just like > "except SyntaxError:", makes sense only when someone is using "eval" or > "exec" to interpret code that was generated or read from input. > > In fact, i bet switching to SyntaxError would actually make some code > of the form "try: eval ... except SyntaxError" work better, since the > single except clause would catch all possible compilation problems > with the input to eval. All good points, except that I still find it hard to flag overflow errors as syntax errors, especially since overflow is platform defined. On one platform, 1000000000000 is fine; on another it's a SyntaxError. That could be confusing. But you're absolutely right about string literals, and maybe it's OK if 1000000000000000000000000000000000000000000000000000000000000000000 is flagged as a syntax error. (After all it's missing a trailing 'L'.) Another solution (borrowing from C): automatically promote int literals to long if they can't be evaluated as ints. 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From greg at cosc.canterbury.ac.nz Wed Feb 14 00:43:16 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 14 Feb 2001 12:43:16 +1300 (NZDT)
Subject: [Python-Dev] SyntaxError for illegal literals
In-Reply-To: <200102132332.SAA18696@cj20424-a.reston1.va.home.com>
Message-ID: <200102132343.MAA05559@s454.cosc.canterbury.ac.nz>

Guido:

> I still find it hard to flag overflow
> errors as syntax errors, especially since overflow is platform
> defined.

How about introducing the following hierarchy:

    CompileTimeError
        SyntaxError
        LiteralRangeError

LiteralRangeError could inherit from ValueError as well
if you want.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,          | A citizen of NewZealandCorp, a        |
Christchurch, New Zealand          | wholly-owned subsidiary of USA Inc.   |
greg at cosc.canterbury.ac.nz      +--------------------------------------+

From tim.one at home.com Wed Feb 14 00:54:43 2001
From: tim.one at home.com (Tim Peters)
Date: Tue, 13 Feb 2001 18:54:43 -0500
Subject: [Python-Dev] Unit testing (again)
In-Reply-To: <3A89B719.9CDB68B@lemburg.com>
Message-ID:
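Written out in Python, the proposed hierarchy would amount to something like
the sketch below (illustrative only -- nothing of this shape was adopted,
and the SyntaxError here would shadow the builtin):

class CompileTimeError(Exception):
    """Base class for errors detected while compiling source code."""

class SyntaxError(CompileTimeError):
    """Malformed source text (sketch only; shadows the builtin name)."""

class LiteralRangeError(CompileTimeError, ValueError):
    """A literal whose value cannot be represented on this platform."""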
                              
                              [MAL] > Since exception message are not defined anywhere I'd suggest > to simply ignore them in the output. [Tim] > Virtually nothing about Python's output is clearly defined, and for doc > purposes I want to capture what Python actually does. [MAL] > But what it does write to the console changes with every > release (e.g. just take the repr() changes for strings with > non-ASCII data)... So now you don't want to test exception messages *or* non-exceptional output either. That's fine, but you're never going to like doctest if so. > this simply breaks you test suite every time I think you're missing the point: it breaks your *docs*, if they contain any examples that rely on such stuff. doctest then very helpfully points out that your docs-- no matter how good they were before --now suck, because they're now *wrong*. It's not interested in assigning blame for that, it's enough to point out that they're now broken (else they'll never get fixed!). > Writing Python programs which work on Python 1.5-2.1 which at > the same time pass the doctest unit tests becomes impossible. Not true. You may need to rewrite your examples, though, so that your *docs* are accurate under all the releases you care about. I don't care if that drives you mad, so long as it prevents you from screwing your users with inaccurate docs. > The regression suite (and most other Python software) catches > exceptions based on the exception class -- why isn't this enough > for your doctest.py checks ? Because doctest is primarily interested in ensuring that non-exceptional cases continue to work exactly as advertised. Checking that, e.g., >>> fac(5) 120 still works is at least 10x easier to live with than writing crap like if fac(5) != 120: raise TestFailed("Something about fac failed but it's too " "painful to build up some string here " "explaining exactly what -- try single-" "stepping through the test by hand until " "this raise triggers.") That's regrtest.py-style testing, and if you think that's pleasant to work with you must never have seen a std test break <0.7 wink>. When a doctest'ed module breaks, the doctest output pinpoints the failure precisely, without any work on your part beyond simply capturing an interactive session that shows the results you expected. > nit-pickling-ly, Na, you're trying to force doctest into a mold it was designed to get as far away from as possible. Use it for its intended purpose, then gripe. Right now you're complaining that the elephant's eyes are the wrong color while missing that it's actually a leopard 
                              
                              . From thomas at xs4all.net Wed Feb 14 00:57:16 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Wed, 14 Feb 2001 00:57:16 +0100 Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: 
                              
                              ; from ping@lfw.org on Tue, Feb 13, 2001 at 03:11:10PM -0800 References: 
                              
                              
                              Message-ID: <20010214005716.D4924@xs4all.nl> On Tue, Feb 13, 2001 at 03:11:10PM -0800, Ka-Ping Yee wrote: > The strongest reason is that a long file with a typo in a string > literal somewhere in hundreds of lines of code generates only > ValueError: invalid \x escape > with no indication to where the error is -- not even which file! > I realize this could be hacked upon and fixed, but i think it points > to a general inconsistency that ought to be considered and addressed. This has nothing to do with the error being a ValueError, but with some (compile-time) errors not being promoted to 'full' errors. See https://sourceforge.net/patch/?func=detailpatch&patch_id=101782&group_id=5470 The same issue came up when importing modules that did 'from foo import *' in a function scope. > 1. SyntaxErrors are for compile-time errors. A problem with > a literal happens before the program starts running, and > it is useful for me, as the programmer, to know whether > the error occurred because of some computational process, > possibly depending on inputs, or whether it's a permanent > mistake that's literally in my source code. In other words, > will a debugger do me any good? Agreed. That could possibly be solved by a better description of the valueerrors in question, though. (The 'invalid \x escape' message seems pretty obvious a compiletime-error to me, but others might not.) > 2. SyntaxErrors pinpoint the exact location of the problem. > In principle, an error is a SyntaxError if and only if you > can point to an exact character position as being the cause > of the problem. See above. > 3. A ValueError means "i got a value that wasn't allowed or > expected here". That is not at all what is happening. > There is *no* defined value at all. It's not that there > was a value and it was wrong -- the value was never even > brought into existence. Not quite true. It wasn't *compiled*, but it's a literal, so it does exist. The problem is not the value of a compiled \x escape, but the value after the \x. > 4. The current implementation of ValueErrors is very unhelpful > about what to do about an invalid literal, as explained > in the example above. A SyntaxError would be much more useful. See #1 :) > I hope you will agree with me that solving only #4 by changing > ValueErrors so they behave a little more like SyntaxErrors in > certain particular situations isn't the best solution. I don't, really. The name 'ValueError' is exactly right: what is wrong (in the \x escape example) is the *value* of something (of the \x escape in question.) If a syntax error was raised, I would think something was wrong with the syntax. But the \x is placed in the right spot, inside a string literal. The string literal itself is placed right. Why would it be a syntax error ? > In fact, i bet switching to SyntaxError would actually make some code > of the form "try: eval ... except SyntaxError" work better, since the > single except clause would catch all possible compilation problems > with the input to eval. I'd say you want a 'CompilerError' superclass instead. -- Thomas Wouters 
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From tim.one at home.com Wed Feb 14 01:13:47 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 13 Feb 2001 19:13:47 -0500 Subject: [Python-Dev] troubling math bug under IRIX 6.5 In-Reply-To: <14985.46047.226447.573927@mace.lucasdigital.com> Message-ID: 
                              
                              [Tommy] > sorry- BOTH blew up until I turned off optimization. OK, that rules out int->float conversion as the cause (one of the examples didn't do any conversions). That multiplication by 4 triggered it rules out that any IEEE exceptions are to blame either (mult by 4 doesn't even trigger the IEEE "inexact" exception). > now neither does. shall I turn opts back on and try a few more > cases? Yes, please, one more: 4.0 * 3.1 Or, if that works, go back to the failing 4.0 * math.exp(-0.5) In any failing case, can you jump into a debubber and get a stack trace? Do you happen to have WANT_SIGFPE_HANDLER #define'd when you compile Python on this platform? If so, it complicates the code a lot. I wonder about that because you got a "bus error", and when WANT_SIGFPE_HANDLER is #defined we get a whole pile of ugly setjmp/longjmp code that doesn't show up on my box. Another tack, as a temporary workaround: try disabling optimization only for Objects/floatobject.c. That will probably fix the problem, and if so that's enough of a workaround to get you unstuck while pursuing these other irritations. From cgw at alum.mit.edu Wed Feb 14 01:34:11 2001 From: cgw at alum.mit.edu (Charles G Waldman) Date: Tue, 13 Feb 2001 18:34:11 -0600 (CST) Subject: [Python-Dev] failure: 2.1a2 on HP-UX with native compiler Message-ID: <14985.53891.987696.686572@sirius.net.home> Allow me to start off with a personal note. I am no longer @fnal.gov, I have a new job which is very interesting and challenging but not particularly Python-related - [my new employer is geodesic.com] I will have much less time to devote to Python from now on, but I'm still interested, and since I have access to a lot of unusual hardware at my new job (Linux/360 anybody?) I am going to try to download and test alphas and betas as much as time permits. Along these lines, I tried building the 2.1a2 version on an SMP HP box: otto:Python-2.1a2$ uname -a HP-UX otto B.11.00 U 9000/800 137901547 unlimited-user license this box has both gcc and the native compiler installed, but not g++. I tried to configure with the command: otto:Python-2.1a2$ ./configure --without-gcc creating cache ./config.cache checking MACHDEP... hp-uxB checking for --without-gcc... yes checking for --with-cxx=
                              
                              ... no checking for c++... no checking for g++... no checking for gcc... gcc checking whether the C++ compiler (gcc ) works... no configure: error: installation or configuration problem: C++ compiler cannot create executables. Seems like the "--without-gcc" flag is being completely ignored! I'll try to track this down as time permits. From tim.one at home.com Wed Feb 14 02:24:00 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 13 Feb 2001 20:24:00 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <200102132313.SAA18504@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              [Guido] > You're lucky that doctest doesn't return dictionaries! For functions > that return dictionaries, it's much more natural *for documentation > purposes* to write > > >>> book() > {'Fred': 'mom', 'Ron': 'Snape'} > > than the necessary work-around. You may deny that's a problem, but > once we've explained dictionaries to our users, we can expect them to > understand that if they see instead > > >>> book() > {'Ron': 'Snape', 'Fred': 'mom'} > > they will understand that that's the same thing. Writing it this way > is easier to read than > > >>> book() == {'Ron': 'Snape', 'Fred': 'mom'} > 1 > > I always have to look twice when I see something like that. >>> sortdict(book()) {'Fred': 'mom', 'Ron': 'Snape'} Explicit is better etc. If I have a module that's going to be showing a lot of dict output, I'll write a little "sortdict" function at the top of the docs and explain why it's there. It's clear from c.l.py postings over the years that lots of people *don't* grasp that dicts are "unordered". Introducing a sortdict() function serves a useful pedagogical purpose for them too. More subtle than dicts for most people is examples showing floating-point output. This isn't reliable across platforms (and, e.g., it's no coincidence that most of the .ratio() etc examples in difflib.py are contrived to return values exactly representable in binary floating-point; but simple fractions like 3/4 are also easiest for people to visualize, so that also makes for good examples). > They [difflib.py docstring docs] are also more voluminous than I'd > like the docs for difflib to be... Not me -- there's nothing in them that I as a potential user don't need to know. But then I think the Library docs are too terse in general. Indeed, Fredrick makes part of his living selling a 300-page book supplying desperately needed Library examples <0.5 wink>. WRT difflib.py, it's OK by me if Fred throws out the examples when LaTeXing the module docstring, because a user can still get the info *from* the docstrings. For that matter, he may as well throw out everything except the first line or two of each method description, if you want bare-bones minimal docs for the manual. no-denying-that-examples-take-space-but-what's-painful-to-include- in-the-latex-docs-is-trivial-to-maintain-in-the-code-ly y'rs - tim From tommy at ilm.com Wed Feb 14 02:57:03 2001 From: tommy at ilm.com (Flying Cougar Burnette) Date: Tue, 13 Feb 2001 17:57:03 -0800 (PST) Subject: [Python-Dev] troubling math bug under IRIX 6.5 In-Reply-To: 
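Tim doesn't spell sortdict() out in this message; a plausible version of
such a helper (this particular implementation is ours, not quoted from
anywhere) is short enough to show in full:

def sortdict(d):
    """repr() of a dict with its keys in sorted order, for stable doctest
    output.

    >>> print(sortdict({'Ron': 'Snape', 'Fred': 'mom'}))
    {'Fred': 'mom', 'Ron': 'Snape'}
    """
    items = sorted(d.items())
    return "{" + ", ".join(["%r: %r" % item for item in items]) + "}"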
                              
                              References: <14985.46047.226447.573927@mace.lucasdigital.com> 
                              
                              Message-ID: <14985.58539.114838.36680@mace.lucasdigital.com> Tim Peters writes: | | > now neither does. shall I turn opts back on and try a few more | > cases? | | Yes, please, one more: | | 4.0 * 3.1 | | Or, if that works, go back to the failing | | 4.0 * math.exp(-0.5) both of these work, but changing the 4.0 to an integer 4 produces the bus error. so it is definitely a conversion to double/float thats the problem. | | In any failing case, can you jump into a debubber and get a stack trace? sure. I've included an entire dbx session at the end of this mail. | | Do you happen to have | | WANT_SIGFPE_HANDLER | | #define'd when you compile Python on this platform? If so, it complicates | the code a lot. I wonder about that because you got a "bus error", and when | WANT_SIGFPE_HANDLER is #defined we get a whole pile of ugly setjmp/longjmp | code that doesn't show up on my box. a peek at config.h shows the WANT_SIGFPE_HANDLER define commented out. should I turn it on and see what happens? | | Another tack, as a temporary workaround: try disabling optimization only | for Objects/floatobject.c. That will probably fix the problem, and if so | that's enough of a workaround to get you unstuck while pursuing these other | irritations. this one works just fine. workarounds aren't a problem for me right now since I'm in no hurry to get this version in use here. I'm just trying to help debug this version for irix users in general. ------------%< snip %<----------------------%< snip %<------------ (tommy at mace)/u0/tommy/pycvs/python/dist/src$ dbx python dbx version 7.3 65959_Jul11 patchSG0003841 Jul 11 2000 02:29:30 Executable /usr/u0/tommy/pycvs/python/dist/src/python (dbx) run Process 563746 (python) started Python 2.1a2 (#6, Feb 13 2001, 17:43:32) [C] on irix6 Type "copyright", "credits" or "license" for more information. 
>>> 3 * 4.0 12.0 >>> import math >>> 4 * math.exp(-.5) Process 563746 (python) stopped on signal SIGBUS: Bus error (default) at [float_mul:383 +0x4,0x1004c158] 383 CONVERT_TO_DOUBLE(v, a); (dbx) l >* 383 CONVERT_TO_DOUBLE(v, a); 384 CONVERT_TO_DOUBLE(w, b); 385 PyFPE_START_PROTECT("multiply", return 0) 386 a = a * b; 387 PyFPE_END_PROTECT(a) 388 return PyFloat_FromDouble(a); 389 } 390 391 static PyObject * 392 float_div(PyObject *v, PyObject *w) (dbx) t > 0 float_mul(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/floatobject.c":383, 0x1004c158] 1 binary_op1(0x100b69fc, 0x10116788, 0x0, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/abstract.c":337, 0x1003ac5c] 2 binary_op(0x100b69fc, 0x10116788, 0x8, 0x0, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/abstract.c":373, 0x1003ae70] 3 PyNumber_Multiply(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/abstract.c":544, 0x1003b5a4] 4 eval_code2(0x1012c688, 0x0, 0xffffffec, 0x0, 0x0, 0x0, 0x0, 0x0) ["/usr/u0/tommy/pycvs/python/dist/src/Python/ceval.c":896, 0x10034a54] 5 PyEval_EvalCode(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/ceval.c":336, 0x10031768] 6 run_node(0x100f88c0, 0x10116788, 0x0, 0x0, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":931, 0x10040444] 7 PyRun_InteractiveOne(0x0, 0x100b1878, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":540, 0x1003f1f0] 8 PyRun_InteractiveLoop(0xfb4a398, 0x100b1878, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":486, 0x1003ef84] 9 PyRun_AnyFileEx(0xfb4a398, 0x100b1878, 0x0, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":461, 0x1003eeac] 10 Py_Main(0x1, 0x0, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Modules/main.c":292, 0x1000bba4] 11 main(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Modules/python.c":10, 0x1000b7bc] More (n if no)?y 12 __start() ["/xlv55/kudzu-apr12/work/irix/lib/libc/libc_n32_M4/csu/crt1text.s":177, 0x1000b558] (dbx) From fdrake at acm.org Wed Feb 14 04:10:20 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Tue, 13 Feb 2001 22:10:20 -0500 (EST) Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: <200102132343.MAA05559@s454.cosc.canterbury.ac.nz> References: <200102132332.SAA18696@cj20424-a.reston1.va.home.com> <200102132343.MAA05559@s454.cosc.canterbury.ac.nz> Message-ID: <14985.63260.81788.746125@cj42289-a.reston1.va.home.com> Greg Ewing writes: > How about introducing the following hierarchy: > > CompileTimeError > SyntaxError > LiteralRangeError > > LiteralRangeError could inherit from ValueError as well > if you want. I like this! -Fred -- Fred L. Drake, Jr. 
                              
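A minimal sketch of the hierarchy Greg proposes, spelled out as ordinary Python classes. The names CompileTimeError and LiteralRangeError exist only in the proposal -- nothing by those names is defined today -- and SyntaxError is shown as a subclass purely to mark where it would sit:

    # Proposal sketch only -- these are not existing builtins.
    class CompileTimeError(StandardError):
        """Any error detected while compiling source."""

    class SyntaxError(CompileTimeError):        # the existing exception, re-parented
        """Malformed source text."""

    class LiteralRangeError(CompileTimeError, ValueError):
        """A well-formed literal whose value is out of range on this build."""

Code that catches ValueError for bad literals today would keep working, while new code could catch CompileTimeError to get everything the compiler rejects.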
                              PythonLabs at Digital Creations From tim.one at home.com Wed Feb 14 05:13:00 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 13 Feb 2001 23:13:00 -0500 Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: 
                              
                              Message-ID: 
                              
                              [Thomas Wouters] > ... what is wrong (in the \x escape example) is the *value* of > something (of the \x escape in question.) If a syntax error was > raised, I would think something was wrong with the syntax. But > the \x is placed in the right spot, inside a string literal. The > string literal itself is placed right. Why would it be a syntax > error ? Oh, why not 
                              
                              . The syntax of an \x escape is "\\" "x" hexdigit hexdigit and to call something that doesn't match that syntax a SyntaxError isn't much of a stretch. Neither is calling it a ValueError. [Guido] > Another solution (borrowing from C): automatically promote int > literals to long if they can't be evaluated as ints. Yes! The user-visible distinction between ints and longs causes more problems than it solves. Would also get us one step closer to punting the incomprehensible "because the grammar implies it" answer to the FAQlet: Yo, Phyton d00dz! What's up with this? >>> x = "-2147483648" >>> int(x) -2147483648 >>> eval(x) Traceback (most recent call last): File "
                              
                              ", line 1, in ? OverflowError: integer literal too large >>> From skip at mojam.com Wed Feb 14 04:56:11 2001 From: skip at mojam.com (Skip Montanaro) Date: Tue, 13 Feb 2001 21:56:11 -0600 (CST) Subject: [Python-Dev] random.jumpback? Message-ID: <14986.475.685764.347334@beluga.mojam.com> I was adding __all__ to the random module and I noticed this very unpythonic example in the module docstring: >>> g = Random(42) # arbitrary >>> g.random() 0.25420336316883324 >>> g.jumpahead(6953607871644L - 1) # move *back* one >>> g.random() 0.25420336316883324 Presuming backing up the seed is a reasonable thing to do (I haven't got much experience with random numbers), why doesn't the Random class have a jumpback method instead of forcing the user to know the magic number to use with jumpahead? def jumpback(self, n): return self.jumpahead(6953607871644L - n) Skip From skip at mojam.com Wed Feb 14 03:43:21 2001 From: skip at mojam.com (Skip Montanaro) Date: Tue, 13 Feb 2001 20:43:21 -0600 (CST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: 
                              
                              References: 
                              
                              
                              Message-ID: <14985.61641.213866.206076@beluga.mojam.com> I must admit to being unfamiliar with all the options available. How well does doctest work if the output of an example or test doesn't lend itself to execution at an interactive prompt? Skip From tim.one at home.com Wed Feb 14 06:34:35 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 14 Feb 2001 00:34:35 -0500 Subject: [Python-Dev] random.jumpback? In-Reply-To: <14986.475.685764.347334@beluga.mojam.com> Message-ID: 
                              
                              [Skip Montanaro] > I was adding __all__ to the random module and I noticed this very > unpythonic example in the module docstring: > > >>> g = Random(42) # arbitrary > >>> g.random() > 0.25420336316883324 > >>> g.jumpahead(6953607871644L - 1) # move *back* one > >>> g.random() > 0.25420336316883324 Did you miss the sentence preceding the example, starting "Just for fun"? > Presuming backing up the seed is a reasonable thing to do > ... It isn't -- it's just for fun. > (I haven't got much experience with random numbers), If you did, you would have been howling with joy at how much fun you were having 
                              
                              . From tim.one at home.com Wed Feb 14 07:45:15 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 14 Feb 2001 01:45:15 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <200102132145.QAA18076@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              [Tim] > Not to my mind. doctest is intentionally picky about exact matches, > for reasons explained earlier. If the docs for a thing say "integer > division or modulo by zero" is expected, but running it says something > else, the docs are wrong and doctest's primary *purpose* is to point > that out loudly. [Guido] > Of course, this is means that *if* you use doctest, all authoritative > docs should be in the docstring, and not elsewhere. I don't know why you would reach that conclusion. My own Python work in years past had overwhelmingly little to do with anything in the Python distribution, and I surely did put all my docs in my modules. It was my only realistic choice, and doctest grew in part out of that "gotta put everything in one file, cuz one file is all I got" way of working. By allowing to put the docs for a thing right next to the tests for a thing right next to the code for a thing, doctest changed the *nature* of that compromise from a burden to a relative joy. Doesn't mean the docs couldn't or shouldn't be elsewhere, though, unless you assume that only the "authoritative docs" need to be accurate (I prefer that all docs tell the truth 
                              
                              ). I know some people have adapted the guts of doctest to ensuring that their LaTeX and/or HTML Python examples work as advertised too. Cool! The Python Tutorial is eternally out of synch in little ways with what the matching release actually does. > Which brings us back to the eternal question of how to indicate > mark-up in docstrings. I announced a few years ago I was done waiting for mark-up to reach consensus, and was going to just go ahead and write useful docstrings regardless. Never had cause to regret that -- mark-up is the tail wagging the dog, and I don't know why people tolerate it (well, yes I do: "but there's no mark-up defined!" is an excuse to put off writing decent docs! but you really don't need six levels of nested lists-- or even one --to get 99% of the info across). > Is everything connected to everything? when-it's-convenient-to-believe-it-and-a-few-times-even-when-not-ly y'rs - tim From tim.one at home.com Wed Feb 14 07:52:37 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 14 Feb 2001 01:52:37 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <14985.61641.213866.206076@beluga.mojam.com> Message-ID: 
                              
                              [Skip] > I must admit to being unfamiliar with all the options available. How > well does doctest work if the output of an example or test doesn't > lend itself to execution at an interactive prompt? If an indication of success/failure can't be produced on stdout, doctest is useless. OTOH, if you have any automatable way whatsoever to test a thing, I'm betting you could dream up a way to print yes or no to stdout accordingly. If not, you probably need to work on something other than your testing strategy first 
                              
                              . From tim.one at home.com Wed Feb 14 10:14:11 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 14 Feb 2001 04:14:11 -0500 Subject: [Python-Dev] failure: 2.1a2 on HP-UX with native compiler In-Reply-To: <14985.53891.987696.686572@sirius.net.home> Message-ID: 
                              
                              [Charles G Waldman] > Allow me to start off with a personal note. OK, but only once per decade (my turn: I found a mole with an unusual color 
                              
                              ). > I am no longer @fnal.gov, I have a new job which is very interesting > and challenging but not particularly Python-related - [my new employer > is geodesic.com] Cool! So give us a copy of Great Circle for free, and in turn we'll let you upgrade their website to Zope for free <0.9 wink>. > ... > Along these lines, I tried building the 2.1a2 version on an SMP HP > box: You are toooo brave, Charles! If you ever manage to get Python to compile on that box, do Guido a huge favor and figure out the right way to close the unending stream of "threads don't work on HP-UX" bugs. Few HP-UX users appear to be systems software developers, and that means we never get a clear picture about what the thread story is there -- except that they don't work (== won't even compile) for many users, and no contributed patch ever applied has managed to stop the complaints. After that, Linux/360 should be a vacation. if-geodesic-can-speed-cold-fusion-by-1200%-just-imagine-what- they-could-for-python-ly y'rs - tim From thomas at xs4all.net Wed Feb 14 10:32:58 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Wed, 14 Feb 2001 10:32:58 +0100 Subject: [Python-Dev] failure: 2.1a2 on HP-UX with native compiler In-Reply-To: <14985.53891.987696.686572@sirius.net.home>; from cgw@alum.mit.edu on Tue, Feb 13, 2001 at 06:34:11PM -0600 References: <14985.53891.987696.686572@sirius.net.home> Message-ID: <20010214103257.F4924@xs4all.nl> On Tue, Feb 13, 2001 at 06:34:11PM -0600, Charles G Waldman wrote: > this box has both gcc and the native compiler installed, but not g++. > I tried to configure with the command: > otto:Python-2.1a2$ ./configure --without-gcc > configure: error: installation or configuration problem: C++ compiler cannot create executables. > Seems like the "--without-gcc" flag is being completely ignored! Yes. --without-gcc is only used for the C compiler, not the C++ one. For the C++ compiler, if you do not specify '--with-cxx=...', configure uses the first existing program out of this list: $CCC c++ g++ gcc CC cxx cc++ cl The check to determine whether the chosen compiler actually works is made later, and if it doesn't work, it won't try the next one in the list. The solution is thus to provide a working CXX compiler using --with-cxx=
                              
. Two questions for python-dev (in particular autoconf-god Eric -- time to earn your pay! ;-) Is there a reason '$CXX' is not in the list of tested C++ compilers, even before $CCC ? That would allow CXX=c++-compiler ./configure to work. As for the other question: The --without-gcc usage message seems wrong: AC_ARG_WITH(gcc, [ --without-gcc never use gcc], [ Aside from '--without-gcc', you can also use '--with-gcc' and '--with-gcc='
                              
                              '. Is there a specific reason not to document that ? -- Thomas Wouters 
                              
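A rough Python model of the selection behaviour Thomas describes, with the candidate list taken verbatim from his message; this only illustrates the logic, it is not what configure actually runs:

    import os

    # $CCC is consulted first if set, then the fixed candidate list.
    candidates = filter(None, [os.environ.get("CCC")]) + \
                 ["c++", "g++", "gcc", "CC", "cxx", "cc++", "cl"]

    def pick_cxx(path=os.environ.get("PATH", "")):
        # Return the first candidate that exists on PATH.  Nothing here
        # checks whether that compiler actually works; configure does that
        # later and, as noted above, never falls back to the next name.
        for name in candidates:
            for d in path.split(os.pathsep):
                if os.path.isfile(os.path.join(d, name)):
                    return name
        return None

Which is why passing an explicit, known-good --with-cxx is the only reliable escape hatch.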
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From andy at reportlab.com Wed Feb 14 10:49:29 2001 From: andy at reportlab.com (Andy Robinson) Date: Wed, 14 Feb 2001 09:49:29 -0000 Subject: [Python-Dev] Unit Testing in San Diego Message-ID: 
                              
                              The O'Reilly Conference Committee needs proposals about a week ago for the conference in San Diego on July 24th-27th. I think there should be a short talk on unit testing, showing how to use PyUnit, Doctest, and Quixote if they have not all merged into one glorious unified whole by then. I can do this - we've used PyUnit a lot lately - but have other talks I'd rather concentrate on. Is there anyone here who will be there and would like to give such a talk? I'm sure the committee would welcome a submission. Andy Robinson CEO and Chief Architect, ReportLab Inc. From mal at lemburg.com Wed Feb 14 11:19:48 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Wed, 14 Feb 2001 11:19:48 +0100 Subject: [Python-Dev] SyntaxError for illegal literals References: 
                              
                              
                              <20010214005716.D4924@xs4all.nl> Message-ID: <3A8A5BC4.64298EA6@lemburg.com> Thomas Wouters wrote: > > On Tue, Feb 13, 2001 at 03:11:10PM -0800, Ka-Ping Yee wrote: > > > The strongest reason is that a long file with a typo in a string > > literal somewhere in hundreds of lines of code generates only > > > ValueError: invalid \x escape > > > with no indication to where the error is -- not even which file! > > I realize this could be hacked upon and fixed, but i think it points > > to a general inconsistency that ought to be considered and addressed. > > This has nothing to do with the error being a ValueError, but with some > (compile-time) errors not being promoted to 'full' errors. See > > https://sourceforge.net/patch/?func=detailpatch&patch_id=101782&group_id=5470 > > The same issue came up when importing modules that did 'from foo import *' > in a function scope. Right and I think this touches the core of the problem. SyntaxErrors produce a proper traceback while ValueErrors (and others) just print a single line which doesn't even have the filename or line number. I wonder why the PyErr_PrintEx() (pythonrun.c) error handler only tries to parse SyntaxErrors for .filename and .lineno parameters. Looking at compile.c these should be settable on all exception object (since these are now proper instances). Perhaps lifting the restriction in PyErr_PrintEx() and making the parse_syntax_error() API a little more robust might do the trick. Then the various direct PyErr_SetString() calls in compile.c should be converted to use com_error() instead (if possible). -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From ping at lfw.org Wed Feb 14 12:08:29 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Wed, 14 Feb 2001 03:08:29 -0800 (PST) Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: <3A8A5BC4.64298EA6@lemburg.com> Message-ID: 
                              
                              I wrote: > The strongest reason is that a long file with a typo in a string > literal somewhere in hundreds of lines of code generates only > > ValueError: invalid \x escape > > with no indication to where the error is -- not even which file! Thomas Wouters wrote: > This has nothing to do with the error being a ValueError, but with some > (compile-time) errors not being promoted to 'full' errors. See I think they are entirely related. All ValueErrors should be run-time errors; a ValueError should never occur during compilation. The key issue is communicating clearly with the user, and that's just not what ValueError *means*. M.-A. Lemburg wrote: > Right and I think this touches the core of the problem. SyntaxErrors > produce a proper traceback while ValueErrors (and others) just print > a single line which doesn't even have the filename or line number. This follows sensibly from the fact that SyntaxErrors are always compile-time errors (and therefore have no traceback or frame at the level where the error occurred). ValueErrors are usually run-time errors, so .filename and .lineno attributes would be redundant; this information is already available in the associated frame object. > Perhaps lifting the restriction in PyErr_PrintEx() and making the > parse_syntax_error() API a little more robust might do the trick. > Then the various direct PyErr_SetString() calls in compile.c > should be converted to use com_error() instead (if possible). That sounds like a significant amount of work, and i'm not sure it's the right answer. If we just clarify the boundary by making sure make sure that all, and only, compile-time errors are SyntaxErrors, everything would work properly and the meaning of the various exception classes would be clearer. The only exceptions that don't currently conform, as far as i know, have to do with invalid literals. -- ?!ng From ping at lfw.org Wed Feb 14 12:21:51 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Wed, 14 Feb 2001 03:21:51 -0800 (PST) Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: <20010214005716.D4924@xs4all.nl> Message-ID: 
                              
                              On Wed, 14 Feb 2001, Thomas Wouters wrote: > > 3. A ValueError means "i got a value that wasn't allowed or > > expected here". That is not at all what is happening. > > There is *no* defined value at all. It's not that there > > was a value and it was wrong -- the value was never even > > brought into existence. > > Not quite true. It wasn't *compiled*, but it's a literal, so it does exist. > The problem is not the value of a compiled \x escape, but the value after > the \x. No, it doesn't exist -- not in the Python world, anyway. There is no Python object corresponding to the literal. That's what i meant by not existing. I think this is an okay choice of meaning for "exist", since, after all, the point of the language is to abstract away lower levels so programmers can think in that higher-level "Python world". > > I hope you will agree with me that solving only #4 by changing > > ValueErrors so they behave a little more like SyntaxErrors in > > certain particular situations isn't the best solution. > > I don't, really. The name 'ValueError' is exactly right: what is wrong (in > the \x escape example) is the *value* of something (of the \x escape in > question.) The previous paragraph pretty much answers this, but i'll clarify. My understanding of ValueError, as it holds in all other situations but this one, is that a Python value of the right type was supplied but it was otherwise wrong -- illegal, or unexpected, or something of that sort. The documentation on the exceptions module says: ValueError Raised when a built-in operation or function receives an argument that has the right type but an inappropriate value, and the situation is not described by a more precise exception such as IndexError. That doesn't apply to "\xgh" or 1982391879487124. > If a syntax error was raised, I would think something was wrong > with the syntax. But there is. "\x45" is syntax for the letter E. It generates the semantics "the character object with ordinal 69 (corresponding to the uppercase letter E in ASCII)". "\xgh" doesn't generate any semantics -- we stop before we get there, because the syntax is wrong. -- ?!ng From ping at lfw.org Wed Feb 14 12:31:34 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Wed, 14 Feb 2001 03:31:34 -0800 (PST) Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: <200102132332.SAA18696@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              On Tue, 13 Feb 2001, Guido van Rossum wrote: > All good points, except that I still find it hard to flag overflow > errors as syntax errors, especially since overflow is platform > defined. I know it may seem weird. I tend to see it as a consequence of the language definition, though, not as the wrong choice of error. If you had to write a truly platform-independent Python language definition (a worthwhile endeavour, by the way, especially given that there are already at least CPython, JPython, and stackless), the decision about this would have to be made there. > On one platform, 1000000000000 is fine; on another it's a > SyntaxError. That could be confusing. So far, Python is effectively defined in such a way that 100000000000 has a meaning on one platform and has no meaning on another. 
                              
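The platform dependence is easy to see from a running interpreter. A two-line probe of the current behaviour (this illustrates nothing beyond what is already reported in this thread; on a build where the literal fits, the except clause is simply never reached):

    import sys
    print sys.maxint            # 2147483647 on 32-bit builds, larger on 64-bit
    try:
        compile("100000000000", "<probe>", "eval")
        print "literal accepted"
    except OverflowError, msg:  # "integer literal too large" where it does not fit
        print "literal rejected:", msg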
                              So, yeah, that's the way it is. > Another solution (borrowing from C): automatically promote int > literals to long if they can't be evaluated as ints. Quite reasonable, yes. But i'd go further than that. I think everyone so far has been in agreement that the division between ints and long ints should eventually be abolished, and we're just waiting for someone brave enough to come along and make it happen. I know i've got my fingers crossed. :) (And maybe after we deprecate 'L', we can deprecate capital 'J' on numbers and 'R', 'U' on strings too...) toowtdi-ly yours, -- ?!ng From ping at lfw.org Wed Feb 14 12:36:54 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Wed, 14 Feb 2001 03:36:54 -0800 (PST) Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: <200102132343.MAA05559@s454.cosc.canterbury.ac.nz> Message-ID: 
                              
                              On Wed, 14 Feb 2001, Greg Ewing wrote: > How about introducing the following hierarchy: > > CompileTimeError > SyntaxError > LiteralRangeError > > LiteralRangeError could inherit from ValueError as well > if you want. I suppose that's all right, and i wouldn't complain, but i don't think it's all that necessary either. Compile-time errors *are* syntax errors. What else could they be? (Aside from fatal errors or limitations of the compiler implementation, that is, but again that's outside of the abstraction we're presenting to the Python user.) Think of it this way: if there's a problem with your Python program, it's either a problem with *how* it expresses something (syntax), or with *what* it expresses (semantics). The syntactic errors occur at compile-time and the semantic errors occur at run-time. -- ?!ng From mal at lemburg.com Wed Feb 14 13:00:42 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Wed, 14 Feb 2001 13:00:42 +0100 Subject: [Python-Dev] SyntaxError for illegal literals References: 
                              
                              Message-ID: <3A8A736A.917F7D41@lemburg.com> Ka-Ping Yee wrote: > > I wrote: > > The strongest reason is that a long file with a typo in a string > > literal somewhere in hundreds of lines of code generates only > > > > ValueError: invalid \x escape > > > > with no indication to where the error is -- not even which file! > > Thomas Wouters wrote: > > This has nothing to do with the error being a ValueError, but with some > > (compile-time) errors not being promoted to 'full' errors. See > > I think they are entirely related. All ValueErrors should be run-time > errors; a ValueError should never occur during compilation. The key > issue is communicating clearly with the user, and that's just not what > ValueError *means*. > > M.-A. Lemburg wrote: > > Right and I think this touches the core of the problem. SyntaxErrors > > produce a proper traceback while ValueErrors (and others) just print > > a single line which doesn't even have the filename or line number. > > This follows sensibly from the fact that SyntaxErrors are always > compile-time errors (and therefore have no traceback or frame at the > level where the error occurred). ValueErrors are usually run-time > errors, so .filename and .lineno attributes would be redundant; > this information is already available in the associated frame object. Those attributes are added to the error object by set_error_location() in compile.c. Since the error objects are Python instances, the function will set those attribute on any error which the compiler raises and IMHO, this would be a good thing. > > Perhaps lifting the restriction in PyErr_PrintEx() and making the > > parse_syntax_error() API a little more robust might do the trick. > > Then the various direct PyErr_SetString() calls in compile.c > > should be converted to use com_error() instead (if possible). > > That sounds like a significant amount of work, and i'm not sure it's > the right answer. Changing all compile time errors to SyntaxError requires much the same amount of work... you'd have to either modify the code to use com_error() or check for errors and then redirect them to com_error() (e.g. for codec errors). > If we just clarify the boundary by making sure > make sure that all, and only, compile-time errors are SyntaxErrors, > everything would work properly and the meaning of the various > exception classes would be clearer. The only exceptions that don't > currently conform, as far as i know, have to do with invalid literals. Well, there are also system and memory errors and the codecs are free to raise any other kind of error as well. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From guido at digicool.com Wed Feb 14 14:52:27 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 14 Feb 2001 08:52:27 -0500 Subject: [Python-Dev] random.jumpback? In-Reply-To: Your message of "Wed, 14 Feb 2001 00:34:35 EST." 
                              
                              References: 
                              
                              Message-ID: <200102141352.IAA22006@cj20424-a.reston1.va.home.com> > [Skip Montanaro] > > I was adding __all__ to the random module and I noticed this very > > unpythonic example in the module docstring: > > > > >>> g = Random(42) # arbitrary > > >>> g.random() > > 0.25420336316883324 > > >>> g.jumpahead(6953607871644L - 1) # move *back* one > > >>> g.random() > > 0.25420336316883324 [Tim] > Did you miss the sentence preceding the example, starting "Just for fun"? In that vein, the example isn't compatible with doctest, is it? --Guido van Rossum (home page: http://www.python.org/~guido/) From sjoerd at oratrix.nl Wed Feb 14 14:56:16 2001 From: sjoerd at oratrix.nl (Sjoerd Mullender) Date: Wed, 14 Feb 2001 14:56:16 +0100 Subject: [Python-Dev] troubling math bug under IRIX 6.5 In-Reply-To: Your message of Tue, 13 Feb 2001 17:57:03 -0800. <14985.58539.114838.36680@mace.lucasdigital.com> References: <14985.46047.226447.573927@mace.lucasdigital.com> 
                              
                              <14985.58539.114838.36680@mace.lucasdigital.com> Message-ID: <20010214135617.A99853021C2@bireme.oratrix.nl> As an extra datapoint: I just tried this (4 * math.exp(-0.5)) on my SGI O2 and on our SGI file server with the current CVS version of Python, compiled with -O. I don't get a crash. I am running IRIX 6.5.10m on the O2 and 6.5.2m on the server. What version are you running? On Tue, Feb 13 2001 Flying Cougar Burnette wrote: > Tim Peters writes: > | > | > now neither does. shall I turn opts back on and try a few more > | > cases? > | > | Yes, please, one more: > | > | 4.0 * 3.1 > | > | Or, if that works, go back to the failing > | > | 4.0 * math.exp(-0.5) > > both of these work, but changing the 4.0 to an integer 4 produces the > bus error. so it is definitely a conversion to double/float thats > the problem. > > | > | In any failing case, can you jump into a debubber and get a stack trace? > > sure. I've included an entire dbx session at the end of this mail. > > | > | Do you happen to have > | > | WANT_SIGFPE_HANDLER > | > | #define'd when you compile Python on this platform? If so, it complicates > | the code a lot. I wonder about that because you got a "bus error", and when > | WANT_SIGFPE_HANDLER is #defined we get a whole pile of ugly setjmp/longjmp > | code that doesn't show up on my box. > > a peek at config.h shows the WANT_SIGFPE_HANDLER define commented > out. should I turn it on and see what happens? > > > | > | Another tack, as a temporary workaround: try disabling optimization only > | for Objects/floatobject.c. That will probably fix the problem, and if so > | that's enough of a workaround to get you unstuck while pursuing these other > | irritations. > > this one works just fine. workarounds aren't a problem for me right > now since I'm in no hurry to get this version in use here. I'm just > trying to help debug this version for irix users in general. > > > ------------%< snip %<----------------------%< snip %<------------ > > (tommy at mace)/u0/tommy/pycvs/python/dist/src$ dbx python > dbx version 7.3 65959_Jul11 patchSG0003841 Jul 11 2000 02:29:30 > Executable /usr/u0/tommy/pycvs/python/dist/src/python > (dbx) run > Process 563746 (python) started > Python 2.1a2 (#6, Feb 13 2001, 17:43:32) [C] on irix6 > Type "copyright", "credits" or "license" for more information. 
-- Sjoerd Mullender
                              
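For convenience, the expressions exercised in this thread collected into one throwaway probe script; nothing new, just the cases already reported above, handy to re-run after switching compiler or optimization flags:

    import math

    # Only the int * float case crashed on the affected IRIX build;
    # the other three were reported to work.
    cases = [
        ("3 * 4.0",              lambda: 3 * 4.0),
        ("4.0 * 3.1",            lambda: 4.0 * 3.1),
        ("4.0 * math.exp(-0.5)", lambda: 4.0 * math.exp(-0.5)),
        ("4 * math.exp(-0.5)",   lambda: 4 * math.exp(-0.5)),
    ]
    for label, thunk in cases:
        print label, "=", thunk()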
                              From moshez at zadka.site.co.il Wed Feb 14 17:47:17 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Wed, 14 Feb 2001 18:47:17 +0200 (IST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <200102132145.QAA18076@cj20424-a.reston1.va.home.com> References: <200102132145.QAA18076@cj20424-a.reston1.va.home.com>, 
                              
                              Message-ID: <20010214164717.24AA1A840@darjeeling.zadka.site.co.il> On Tue, 13 Feb 2001 16:45:56 -0500, Guido van Rossum 
                              
                              wrote: > Of course, this is means that *if* you use doctest, all authoritative > docs should be in the docstring, and not elsewhere. Which brings us > back to the eternal question of how to indicate mark-up in > docstrings. Is everything connected to everything? No, but a lot of things are connected to documentation. As someone who works primarily in Perl nowadays, and hates it, I must say that as horrible and unaesthetic pod is, having perldoc package::module Just work is worth everything -- I've marked everything I wrote that way, and I can't begin to explain how much it helps. I'm slowly starting to think that the big problem with Python documentation is that you didn't pronounce. So, if some publisher needs to work harder to make dead-trees copies, it's fine by me, and even if the output looks a bit less "professional" it's also fine by me, as long as documentation is always in the same format, and always accessible by the same command. Consider this an offer to help to port (manually, if needs be) Python's current documentation. We had a DevDay, we have a sig, we have a PEP. None of this seems to help -- what we need is a BDFL's pronouncement, even if it's on the worst solution possibly imaginable. -- For public key: finger moshez at debian.org | gpg --import "Debian -- What your mother would use if it was 20 times easier" LUKE: Is Perl better than Python? YODA: No... no... no. Quicker, easier, more seductive. From moshez at zadka.site.co.il Wed Feb 14 17:57:35 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Wed, 14 Feb 2001 18:57:35 +0200 (IST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: 
                              
                              References: 
                              
                              Message-ID: <20010214165735.C6AB4A840@darjeeling.zadka.site.co.il> On Tue, 13 Feb 2001 20:24:00 -0500, "Tim Peters" 
                              
                              wrote: > Not me -- there's nothing in them that I as a potential user don't need to > know. But then I think the Library docs are too terse in general. Indeed, > Fredrick makes part of his living selling a 300-page book supplying > desperately needed Library examples <0.5 wink>. I'm sorry, Tim, that's just too true. I want to explain my view about how it happened (I wrote some of them, and if you find a particularily terse one, just assume it's me) -- I write tersely. My boss yelled at me when doing this at work, and I redid all my internal documentation -- doubled the line count, beefed up with examples, etc. He actually submitted a bug in the internal bug tracking system to get me to do that ;-) So, I suggest you do the same -- there's no excuse for terseness, other then not-having-time, so it's really important that bugs like that are files. Something like "documentation for xxxlib is too terse". I can't promise I'll fix all these bugs, but I can try ;-) -- For public key: finger moshez at debian.org | gpg --import "Debian -- What your mother would use if it was 20 times easier" LUKE: Is Perl better than Python? YODA: No... no... no. Quicker, easier, more seductive. From fdrake at acm.org Wed Feb 14 18:40:47 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Wed, 14 Feb 2001 12:40:47 -0500 (EST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <20010214165735.C6AB4A840@darjeeling.zadka.site.co.il> References: 
                              
                              <20010214165735.C6AB4A840@darjeeling.zadka.site.co.il> Message-ID: <14986.49951.471539.196962@cj42289-a.reston1.va.home.com> Moshe Zadka writes: > so it's really important that bugs like that are files. Something like > "documentation for xxxlib is too terse". I can't promise I'll fix all these > bugs, but I can try ;-) It would also be useful to tell what additional information you were looking for. We can probably find additional stuff to write on a lot of these, but that doesn't mean we'll interpret "too terse" in the most useful way. ;-) -Fred -- Fred L. Drake, Jr. 
                              
                              PythonLabs at Digital Creations From tommy at ilm.com Wed Feb 14 18:57:24 2001 From: tommy at ilm.com (Flying Cougar Burnette) Date: Wed, 14 Feb 2001 09:57:24 -0800 (PST) Subject: [Python-Dev] troubling math bug under IRIX 6.5 In-Reply-To: <20010214135617.A99853021C2@bireme.oratrix.nl> References: <14985.46047.226447.573927@mace.lucasdigital.com> 
                              
                              <14985.58539.114838.36680@mace.lucasdigital.com> <20010214135617.A99853021C2@bireme.oratrix.nl> Message-ID: <14986.49383.668942.359843@mace.lucasdigital.com> 'uname -a' tells me I'm running plain old 6.5 on my R10k O2 with version 7.3.1.1m of the sgi compiler. Which version of the compiler do you have? That might be the real culprit here. in fact... I just hopped onto a co-worker's machine that has version 7.3.1.2m of the compiler, remade everything, and the problem is gone. I think we can chalk this up to a compiler bug and take no further action. Thanks for listening... Sjoerd Mullender writes: | As an extra datapoint: | | I just tried this (4 * math.exp(-0.5)) on my SGI O2 and on our SGI | file server with the current CVS version of Python, compiled with -O. | I don't get a crash. | | I am running IRIX 6.5.10m on the O2 and 6.5.2m on the server. What | version are you running? | | On Tue, Feb 13 2001 Flying Cougar Burnette wrote: | | > Tim Peters writes: | > | | > | > now neither does. shall I turn opts back on and try a few more | > | > cases? | > | | > | Yes, please, one more: | > | | > | 4.0 * 3.1 | > | | > | Or, if that works, go back to the failing | > | | > | 4.0 * math.exp(-0.5) | > | > both of these work, but changing the 4.0 to an integer 4 produces the | > bus error. so it is definitely a conversion to double/float thats | > the problem. | > | > | | > | In any failing case, can you jump into a debubber and get a stack trace? | > | > sure. I've included an entire dbx session at the end of this mail. | > | > | | > | Do you happen to have | > | | > | WANT_SIGFPE_HANDLER | > | | > | #define'd when you compile Python on this platform? If so, it complicates | > | the code a lot. I wonder about that because you got a "bus error", and when | > | WANT_SIGFPE_HANDLER is #defined we get a whole pile of ugly setjmp/longjmp | > | code that doesn't show up on my box. | > | > a peek at config.h shows the WANT_SIGFPE_HANDLER define commented | > out. should I turn it on and see what happens? | > | > | > | | > | Another tack, as a temporary workaround: try disabling optimization only | > | for Objects/floatobject.c. That will probably fix the problem, and if so | > | that's enough of a workaround to get you unstuck while pursuing these other | > | irritations. | > | > this one works just fine. workarounds aren't a problem for me right | > now since I'm in no hurry to get this version in use here. I'm just | > trying to help debug this version for irix users in general. | > | > | > ------------%< snip %<----------------------%< snip %<------------ | > | > (tommy at mace)/u0/tommy/pycvs/python/dist/src$ dbx python | > dbx version 7.3 65959_Jul11 patchSG0003841 Jul 11 2000 02:29:30 | > Executable /usr/u0/tommy/pycvs/python/dist/src/python | > (dbx) run | > Process 563746 (python) started | > Python 2.1a2 (#6, Feb 13 2001, 17:43:32) [C] on irix6 | > Type "copyright", "credits" or "license" for more information. 
                              
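Since the culprit turned out to be a compiler point release, reports like this are easiest to pin down with the interpreter's own build banner plus the OS release, both reachable from Python (the compiler's exact version still has to come from the compiler itself):

    import sys, os
    print sys.version    # build number, build date and compiler tag, e.g. "[C]"
    print os.uname()     # OS name, release and hardware, roughly 'uname -a'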
                              From tim.one at home.com Wed Feb 14 21:02:44 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 14 Feb 2001 15:02:44 -0500 Subject: [Python-Dev] random.jumpback? In-Reply-To: <200102141352.IAA22006@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              [Skip Montanaro] >>> I was adding __all__ to the random module and I noticed this very >>> unpythonic example in the module docstring: >>> >>> >>> g = Random(42) # arbitrary >>> >>> g.random() >>> 0.25420336316883324 >>> >>> g.jumpahead(6953607871644L - 1) # move *back* one >>> >>> g.random() >>> 0.25420336316883324 [Tim] >> Did you miss the sentence preceding the example, starting "Just >> for fun"? [Guido] > In that vein, the example isn't compatible with doctest, is it? I'm not sure what you're asking. The example *works* under doctest, although random.py is not a doctest'ed module (it has an "eyeball test" at the end, and you have to be an expert to guess whether or not "it worked" from staring at the output -- not my doing, and way non-trivial to automate). So it's compatible in the "it works" sense, although it's vulnerable to x-platform fp output vagaries in the last few bits. If random.py ever gets doctest'ed, I'll fix that. Or maybe you're saying that a "just for fun" example doesn't need to be accurate? I'd disagree with that, but am not sure that's what you're saying, so won't disagree just yet 
                              
                              . From fdrake at users.sourceforge.net Wed Feb 14 22:04:29 2001 From: fdrake at users.sourceforge.net (Fred L. Drake) Date: Wed, 14 Feb 2001 13:04:29 -0800 Subject: [Python-Dev] [development doc updates] Message-ID: 
                              
                              The development version of the documentation has been updated: http://python.sourceforge.net/devel-docs/ From fredrik at effbot.org Wed Feb 14 22:14:27 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Wed, 14 Feb 2001 22:14:27 +0100 Subject: [Python-Dev] threads and gethostbyname Message-ID: <041201c096cb$1f46e040$e46940d5@hagrid> We have a Tkinter-based application that does DNS lookups (using socket.gethostbyname) in a background thread. Under 1.5.2, this worked without a hitch. However, under 2.0, the same program tends to lock up on some computers. I'm not 100% sure (it's a bit hard to debug), but it looks like a global lock problem... Any ideas? Is this supposed to work at all? Cheers /F From skip at mojam.com Wed Feb 14 22:24:50 2001 From: skip at mojam.com (Skip Montanaro) Date: Wed, 14 Feb 2001 15:24:50 -0600 (CST) Subject: [Python-Dev] random.jumpback? In-Reply-To: 
                              
                              References: <200102141352.IAA22006@cj20424-a.reston1.va.home.com> 
                              
                              Message-ID: <14986.63394.543321.783056@beluga.mojam.com> [Skip] I was adding __all__ to the random module and I noticed this very unpythonic example in the module docstring: [Tim] Did you miss the sentence preceding the example, starting "Just for fun"? I did, yes. [Guido] In that vein, the example isn't compatible with doctest, is it? [Tim] I'm not sure what you're asking. I interpreted Guido's comment to mean, "why include a useless example in documentation?" I guess that was my implicit assumption as well (again, ignoring the missed "just for fun" quote). Either it's a useful example embedded in the documentation or it's a test case that is perhaps not likely to be useful to an end user in which case it should be accessed via the module's __test__ dictionary. guido-did-i-channel-you-properly-ly? yr's, Skip From mwh21 at cam.ac.uk Wed Feb 14 23:36:18 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 14 Feb 2001 22:36:18 +0000 Subject: [Python-Dev] python-dev summaries? Message-ID: 
                              
                              I notice that it's nearly a fortnight since AMK's last summary. I've started to put together a sumamry of the last two weeks, but I thought I'd ask first if anyone else was planning to do the same. I'd gladly concede the tediu^Wbragging rights to someone else, although I would like the chance get something out if the evening I spent writing code to do things like this: Number of articles in summary: 495 80 | ]|[ | ]|[ | ]|[ | ]|[ | ]|[ ]|[ 60 | ]|[ ]|[ | ]|[ ]|[ | ]|[ ]|[ | ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ 40 | ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ 20 | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ 0 +-029-067-039-037-080-048-020-009-040-021-008-030-043-024 Thu 01| Sat 03| Mon 05| Wed 07| Fri 09| Sun 11| Tue 13| Fri 02 Sun 04 Tue 06 Thu 08 Sat 10 Mon 12 Wed 14 If noone else is planning on doing a sumamry, I'll post a draft for comments sometime tomorrow. Cheers, M. -- I'm sorry, was my bias showing again? :-) -- William Tanksley, 13 May 2000 From tim.one at home.com Thu Feb 15 00:26:14 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 14 Feb 2001 18:26:14 -0500 Subject: [Python-Dev] random.jumpback? In-Reply-To: <14986.63394.543321.783056@beluga.mojam.com> Message-ID: 
                              
                              [Skip] > I interpreted Guido's comment to mean, "why include a useless example in > documentation?" I guess that was my implicit assumption as well (again, > ignoring the missed "just for fun" quote). Either it's a useful example > embedded in the documentation or it's a test case that is perhaps not > likely to be useful to an end user in which case it should be accessed > via the module's __test__ dictionary. The example is not useful in practice, but is useful pedagogically, for someone who reads the example *in context*: + It makes concrete that .jumpahead() is fast for even monstrously large arguments (try it! it didn't even make you curious?). + It makes concrete that the period of the RNG definitely can be exhausted (something which earlier docstring text warned about in the context of threads, but abstractly). + It concretely demonstrates that the true period is at worst a factor of the documented period, something paranoid users want assurance about because they know from bitter experience that documented periods are often wrong (indeed, Wichmann and Hill made a bogus claim about the period of *this* generator when they first introduced it). A knowledgable user can build on that example to prove to themself quickly that the period is exactly as documented. + If anyone is under the illusion (and many are) that this kind of RNG is good for crypto work, the demonstrated trivial ease with which .jumpahead can move to any point in the sequence-- even trillions of elements ahead --should give them strong cause for healthy doubt. Cranking out cookies is useful, but teaching the interested reader something about the nature of the cookie machine is also useful, albeit in a different sense. unrepentantly y'rs - tim From jeremy at alum.mit.edu Wed Feb 14 22:32:10 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 14 Feb 2001 16:32:10 -0500 (EST) Subject: [Python-Dev] random.jumpback? In-Reply-To: 
                              
                              References: <14986.63394.543321.783056@beluga.mojam.com> 
                              
                              Message-ID: <14986.63834.23401.827764@w221.z064000254.bwi-md.dsl.cnc.net> I thought it was an excellent example for exactly the reasons Tim mentioned. I didn't try it, but I did wonder how long it would take :-). Jeremy From tim.one at home.com Thu Feb 15 09:00:49 2001 From: tim.one at home.com (Tim Peters) Date: Thu, 15 Feb 2001 03:00:49 -0500 Subject: [Python-Dev] python-dev summaries? In-Reply-To: 
                              
                              Message-ID: 
                              
                              [Michael Hudson, graduates from bytecodes to ASCII art] > ... > If noone else is planning on doing a sumamry, I'll post a draft for > comments sometime tomorrow. 1. If you solicit comments, it will be 3 months of debate before you get to post the thing <0.8 wink>. Just Do It. 2. Bless you! to-be-safe-simply-concatenate-all-the-msgs-and-post-the-whole- blob-without-comment-ly y'rs - tim From thomas at xs4all.net Thu Feb 15 09:05:51 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Thu, 15 Feb 2001 09:05:51 +0100 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <20010214165735.C6AB4A840@darjeeling.zadka.site.co.il>; from moshez@zadka.site.co.il on Wed, Feb 14, 2001 at 06:57:35PM +0200 References: 
                              
                              <20010214165735.C6AB4A840@darjeeling.zadka.site.co.il> Message-ID: <20010215090551.J4924@xs4all.nl> On Wed, Feb 14, 2001 at 06:57:35PM +0200, Moshe Zadka wrote: > On Tue, 13 Feb 2001 20:24:00 -0500, "Tim Peters" 
                              
                              wrote: > > Not me -- there's nothing in them that I as a potential user don't need to > > know. But then I think the Library docs are too terse in general. Indeed, > > Fredrick makes part of his living selling a 300-page book supplying > > desperately needed Library examples <0.5 wink>. > I'm sorry, Tim, that's just too true. You should be appologizing to Fred, not Tim :) While I agree with the both of you, I'm not sure if expanding the library reference is going to help the problem. I think what's missing is a library *tutorial*. The reference is exactly that, a reference, and if we expand the reference we'll end up cursing it ourself, should we ever need it. (okay, so noone here needs the reference anymore 
                              
                              except me, but when looking at the reference, I like the terse descriptions of the modules. They're just reminders anyway.) I remember when I'd finished the Python tutorial and wondered where to go next. I tried reading the library reference, but it was boring and most of it not interesting (since it isn't built up to go from useful/common -> rare, but just a list of all modules ordered by 'service'.) I ended up doing the slow and cheap version of Fredrik's book: reading python-list ;) I'll write the library tutorial once I finish the 'from-foo-import-* considered harmful' chapter ;-) -- Thomas Wouters 
                              
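Since doctest keeps coming up in this thread, here is the pattern under discussion in miniature: a docstring example that doubles as a test, plus the reduce-it-to-a-printed-verdict trick Tim mentions for checks whose natural output is not prompt-friendly. Module and function names are invented for the illustration (assume the file is frobmod.py):

    def frob(n):
        """Return n doubled.

        >>> frob(3)
        6
        >>> all_frobs_sane()        # messy check reduced to a yes/no on stdout
        1
        """
        return n * 2

    def all_frobs_sane():
        # Any automatable check will do, as long as it returns or prints
        # something doctest can compare against.
        return len(filter(lambda x: x % 2 == 0, map(frob, range(100)))) == 100

    def _test():
        import doctest, frobmod
        return doctest.testmod(frobmod)

    if __name__ == "__main__":
        _test()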
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From tim.one at home.com Thu Feb 15 09:35:00 2001 From: tim.one at home.com (Tim Peters) Date: Thu, 15 Feb 2001 03:35:00 -0500 Subject: [Python-Dev] troubling math bug under IRIX 6.5 In-Reply-To: <14986.49383.668942.359843@mace.lucasdigital.com> Message-ID: 
                              
                              [Flying Cougar Burnette] > 'uname -a' tells me I'm running plain old 6.5 on my R10k O2 with > version 7.3.1.1m of the sgi compiler. > ... > I just hopped onto a co-worker's machine that has version 7.3.1.2m of > the compiler, remade everything, and the problem is gone. Oh, of course. Why didn't you say so? Micro-micro version 7.3.1.2m of the SGI compiler fixed a bus error when doing int->float conversion. What? You don't believe me? Harrumph -- you just proved it 
                              
                              . thanks-for-playing-and-pick-up-a-fabulous-prize-at-the-door-ly y'rs - tim From sjoerd at oratrix.nl Thu Feb 15 09:42:35 2001 From: sjoerd at oratrix.nl (Sjoerd Mullender) Date: Thu, 15 Feb 2001 09:42:35 +0100 Subject: [Python-Dev] troubling math bug under IRIX 6.5 In-Reply-To: Your message of Wed, 14 Feb 2001 09:57:24 -0800. <14986.49383.668942.359843@mace.lucasdigital.com> References: <14985.46047.226447.573927@mace.lucasdigital.com> 
                              
                              <14985.58539.114838.36680@mace.lucasdigital.com> <20010214135617.A99853021C2@bireme.oratrix.nl> <14986.49383.668942.359843@mace.lucasdigital.com> Message-ID: <20010215084236.B1D823021C2@bireme.oratrix.nl> I have compiler version 7.2.1.3m om my O2 and 7.2.1 on the server. It does indeed sound like a compiler problem, so maybe it's time to do an upgrade... On Wed, Feb 14 2001 Flying Cougar Burnette wrote: > > 'uname -a' tells me I'm running plain old 6.5 on my R10k O2 with > version 7.3.1.1m of the sgi compiler. Which version of the compiler > do you have? That might be the real culprit here. in fact... > > I just hopped onto a co-worker's machine that has version 7.3.1.2m of > the compiler, remade everything, and the problem is gone. > > I think we can chalk this up to a compiler bug and take no further > action. Thanks for listening... > > > Sjoerd Mullender writes: > | As an extra datapoint: > | > | I just tried this (4 * math.exp(-0.5)) on my SGI O2 and on our SGI > | file server with the current CVS version of Python, compiled with -O. > | I don't get a crash. > | > | I am running IRIX 6.5.10m on the O2 and 6.5.2m on the server. What > | version are you running? > | > | On Tue, Feb 13 2001 Flying Cougar Burnette wrote: > | > | > Tim Peters writes: > | > | > | > | > now neither does. shall I turn opts back on and try a few more > | > | > cases? > | > | > | > | Yes, please, one more: > | > | > | > | 4.0 * 3.1 > | > | > | > | Or, if that works, go back to the failing > | > | > | > | 4.0 * math.exp(-0.5) > | > > | > both of these work, but changing the 4.0 to an integer 4 produces the > | > bus error. so it is definitely a conversion to double/float thats > | > the problem. > | > > | > | > | > | In any failing case, can you jump into a debubber and get a stack trace? > | > > | > sure. I've included an entire dbx session at the end of this mail. > | > > | > | > | > | Do you happen to have > | > | > | > | WANT_SIGFPE_HANDLER > | > | > | > | #define'd when you compile Python on this platform? If so, it complicates > | > | the code a lot. I wonder about that because you got a "bus error", and when > | > | WANT_SIGFPE_HANDLER is #defined we get a whole pile of ugly setjmp/longjmp > | > | code that doesn't show up on my box. > | > > | > a peek at config.h shows the WANT_SIGFPE_HANDLER define commented > | > out. should I turn it on and see what happens? > | > > | > > | > | > | > | Another tack, as a temporary workaround: try disabling optimization only > | > | for Objects/floatobject.c. That will probably fix the problem, and if so > | > | that's enough of a workaround to get you unstuck while pursuing these other > | > | irritations. > | > > | > this one works just fine. workarounds aren't a problem for me right > | > now since I'm in no hurry to get this version in use here. I'm just > | > trying to help debug this version for irix users in general. > | > > | > > | > ------------%< snip %<----------------------%< snip %<------------ > | > > | > (tommy at mace)/u0/tommy/pycvs/python/dist/src$ dbx python > | > dbx version 7.3 65959_Jul11 patchSG0003841 Jul 11 2000 02:29:30 > | > Executable /usr/u0/tommy/pycvs/python/dist/src/python > | > (dbx) run > | > Process 563746 (python) started > | > Python 2.1a2 (#6, Feb 13 2001, 17:43:32) [C] on irix6 > | > Type "copyright", "credits" or "license" for more information. 
> | > >>> 3 * 4.0 > | > 12.0 > | > >>> import math > | > >>> 4 * math.exp(-.5) > | > Process 563746 (python) stopped on signal SIGBUS: Bus error (default) at [float_mul:383 +0x4,0x1004c158] > | > 383 CONVERT_TO_DOUBLE(v, a); > | > (dbx) l > | > >* 383 CONVERT_TO_DOUBLE(v, a); > | > 384 CONVERT_TO_DOUBLE(w, b); > | > 385 PyFPE_START_PROTECT("multiply", return 0) > | > 386 a = a * b; > | > 387 PyFPE_END_PROTECT(a) > | > 388 return PyFloat_FromDouble(a); > | > 389 } > | > 390 > | > 391 static PyObject * > | > 392 float_div(PyObject *v, PyObject *w) > | > (dbx) t > | > > 0 float_mul(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/floatobject.c":383, 0x1004c158] > | > 1 binary_op1(0x100b69fc, 0x10116788, 0x0, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/abstract.c":337, 0x1003ac5c] > | > 2 binary_op(0x100b69fc, 0x10116788, 0x8, 0x0, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/abstract.c":373, 0x1003ae70] > | > 3 PyNumber_Multiply(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/abstract.c":544, 0x1003b5a4] > | > 4 eval_code2(0x1012c688, 0x0, 0xffffffec, 0x0, 0x0, 0x0, 0x0, 0x0) ["/usr/u0/tommy/pycvs/python/dist/src/Python/ceval.c":896, 0x10034a54] > | > 5 PyEval_EvalCode(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/ceval.c":336, 0x10031768] > | > 6 run_node(0x100f88c0, 0x10116788, 0x0, 0x0, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":931, 0x10040444] > | > 7 PyRun_InteractiveOne(0x0, 0x100b1878, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":540, 0x1003f1f0] > | > 8 PyRun_InteractiveLoop(0xfb4a398, 0x100b1878, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":486, 0x1003ef84] > | > 9 PyRun_AnyFileEx(0xfb4a398, 0x100b1878, 0x0, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":461, 0x1003eeac] > | > 10 Py_Main(0x1, 0x0, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Modules/main.c":292, 0x1000bba4] > | > 11 main(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Modules/python.c":10, 0x1000b7bc] > | > More (n if no)?y > | > 12 __start() ["/xlv55/kudzu-apr12/work/irix/lib/libc/libc_n32_M4/csu/crt1text.s":177, 0x1000b558] > | > (dbx) > | > > | > _______________________________________________ > | > Python-Dev mailing list > | > Python-Dev at python.org > | > http://mail.python.org/mailman/listinfo/python-dev > | > > | > | -- Sjoerd Mullender 
                              
                              > -- Sjoerd Mullender 
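For reference, the failing and working cases in the thread above boil down to three expressions. A hedged reproduction sketch (Python 2.x-era syntax to match the thread; the little eval() harness is invented -- the original reports simply typed the expressions at the interactive prompt):

    # On the affected IRIX compiler the int * float case was the one that
    # crashed when Objects/floatobject.c was compiled with -O; on a healthy
    # build all three just print their values.
    import math

    for expr in ("4.0 * 3.1", "4.0 * math.exp(-0.5)", "4 * math.exp(-0.5)"):
        print expr, "=", eval(expr)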
                              
                              From tim.one at home.com Thu Feb 15 10:07:38 2001 From: tim.one at home.com (Tim Peters) Date: Thu, 15 Feb 2001 04:07:38 -0500 Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: 
                              
                              Message-ID: 
                              
                              [Ka-Ping Yee] > ... > The only exceptions that don't currently conform, as far as i > know, have to do with invalid literals. Pretty much, but nothing's *that* easy. Other examples: + If there are too many nested blocks, it raises SystemError(!). + MemoryError is raised if a dotted name is too long. + OverflowError is raised if a string is too long. Note that those don't have to do with syntax, they're arbitrary implementation limits. So that's the rule: raise SystemError if something is bigger than 20 MemoryError if it's bigger than 1000 OverflowError if it's bigger than an int Couldn't be clearer 
                              
                              . + SystemErrors are raised in many other places in the role of internal assertions failing. Those needn't be changed. From andy at reportlab.com Thu Feb 15 11:07:11 2001 From: andy at reportlab.com (Andy Robinson) Date: Thu, 15 Feb 2001 10:07:11 -0000 Subject: [Python-Dev] Documentation Tools (was Unit Testing) In-Reply-To: 
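To make the class of errors Tim lists concrete, here is a hedged illustration (Python 2.x era; the sample sources are invented, and exactly which exception each one raises varies by version -- that inconsistency is the subject of the thread):

    # Each of these was rejected at compile time by the interpreters of the
    # day; depending on the version you may see SyntaxError or one of the
    # implementation-limit errors discussed above.
    sources = (
        "x = 09",                            # bad digit in an octal literal
        "x = 123456789123456789123456789",   # int literal too large (pre-2.2)
        's = "\\x1"',                        # malformed \x escape
    )
    for src in sources:
        try:
            compile(src, "<test>", "exec")
        except (SyntaxError, ValueError, OverflowError, MemoryError), e:
            print "%r -> %s: %s" % (src, e.__class__.__name__, e)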
                              
                              Message-ID: 
                              
                              Moshe Zadka 
                              
                              write: > As someone who works primarily in Perl nowadays, and hates > it, I must say > that as horrible and unaesthetic pod is, having > > perldoc package::module > > Just work is worth everything -- [snip] > We had a DevDay, we have a sig, we have a PEP. None of this > seems to help -- > what we need is a BDFL's pronouncement, even if it's on the > worst solution > possibly imaginable. ReportLab have just hired Dinu Gherman to work on this. We have crude running solutions of our own that do both HTML+Bitmap and PDF on any package, and are devoting considerable resources to an automatic documentation tool. In fact, it's part of a deliverable for a customer project this spring. We need both these PEPs or something like them for this to really fly. Dinu will be at IPC9 and happy to discuss this, and we have the resources to do trial implementations for the BDFL to consider. I suggest anyone interested contacts Dinu at the address above. And Dinu, why don't you contact the doc-sig administrator and find out why your membership is blocked :-) - Andy Robinson From mwh21 at cam.ac.uk Thu Feb 15 15:45:18 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 15 Feb 2001 14:45:18 +0000 Subject: [Python-Dev] python-dev summaries? In-Reply-To: "Tim Peters"'s message of "Thu, 15 Feb 2001 03:00:49 -0500" References: 
                              
                              Message-ID: 
                              
                              "Tim Peters" 
                              
                              writes: > [Michael Hudson, graduates from bytecodes to ASCII art] > > ... > > If noone else is planning on doing a sumamry, I'll post a draft for > > comments sometime tomorrow. > > 1. If you solicit comments, it will be 3 months of debate before > you get to post the thing <0.8 wink>. Just Do It. Well, I'm not quite brave enough for that. Here's what I've written; spelling & grammar flames appreciated! You've got a couple of hours before I post it to all the other places... It is with some trepidation that I post: This is a summary of traffic on the python-dev mailing list between Feb 1 and Feb 14 2001. It is intended to inform the wider Python community of ongoing developments. To comment, just post to python-list at python.org or comp.lang.python in the usual way. Give your posting a meaningful subject line, and if it's about a PEP, include the PEP number (e.g. Subject: PEP 201 - Lockstep iteration) All python-dev members are interested in seeing ideas discussed by the community, so don't hesitate to take a stance on a PEP if you have an opinion. This is the first python-dev summary written by Michael Hudson. Previous summaries were written by Andrew Kuchling and can be found at: 
                               New summaries will probably appear at: 
                               When I get round to it. Posting distribution (with apologies to mbm) Number of articles in summary: 498 80 | ]|[ | ]|[ | ]|[ | ]|[ | ]|[ ]|[ 60 | ]|[ ]|[ | ]|[ ]|[ | ]|[ ]|[ | ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ 40 | ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ 20 | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ 0 +-029-067-039-037-080-048-020-009-040-021-008-030-043-027 Thu 01| Sat 03| Mon 05| Wed 07| Fri 09| Sun 11| Tue 13| Fri 02 Sun 04 Tue 06 Thu 08 Sat 10 Mon 12 Wed 14 A fairly busy fortnight on python-dev, falling just short of five hundred articles. Much of this is making ready for the Python 2.1 release, but people's horizons are beginning to rise above the present. * Python 2.1a2 * Python 2.1a2 was released on Feb. 2. One of the more controversial changes was the disallowing of "from module import *" at anything other than module level; this restriction was weakened after some slightly heated discussion on comp.lang.python. 
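For readers who missed the thread, this is the construct at issue -- a hedged sketch (the function is invented; 2.1a2 briefly made it a hard error, and what the final 2.1 does about it was still open at this point):

    def load_string_names():
        from string import *    # "import *" somewhere other than module level
        return uppercase        # resolvable only because of the wildcard import

    print load_string_names()[:5]    # 'ABCDE'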
                              
                              It is possible that non-module-level "from module import *" will produce some kind of warning in Python 2.1 but this code has not yet been written. * Performance * Almost two weeks ago, we were talking about performance. Michael Hudson posted the results of an extended benchmarking session using Marc-Andre Lemburg's pybench suite: 
                              
                              to which the conclusion was that python 2.1 will be marginally slower than python 2.0, but it's not worth shouting about. The use of Vladimir Marangoz's obmalloc patch in some of the benchmarks sparked a discussion about whether this patch should be incorporated into Python 2.1. There was support from many for adding it on an opt-in basis, since when nothing has happened... * Imports on case-insensitive file systems * There was quite some discussion about how to handle imports on a case-insensitive file system (eg. on Windows). I didn't follow the details, but Tim Peters is on the case (sorry), so I'm confident it will get sorted out. * Sets & iterators * The Sets discussion rumbled on, moving into areas of syntax. The syntax: for key:value in dict: was proposed. Discussion went round and round for a while and moved on to more general iteration constructs, prompting Ka-Ping Yee to write a PEP entitled "iterators": 
                              
                              Please comment! Greg Wilson announced that BOFs for both sets and iterators have been arranged at the python9 conference in March: 
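Going back to the sets-and-iterators item: the proposed spelling next to what already worked, as a hedged sketch (the dictionary is invented):

    d = {"spam": 1, "eggs": 2}

    # Proposed syntax under discussion (did not and does not exist):
    #     for key:value in d:
    #         print key, value

    # What worked then (and still works): iterate over explicit pairs.
    for key, value in d.items():
        print key, value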
                              
                              * Stackless Python in Korea * Christian Tismer gave a presentation on stackless python to over 700 Korean pythonistas: 
                              
                              I think almost everyone was amazed and delighted to find that Python has such a fan base. Next stop, the world! * string methodizing the standard library * Eric Raymond clearly got bored one evening and marched through the standard library, converting almost all uses of the string module to use to equivalent string method. * Python's release schedule * Skip Montanero raised some concerns about Python's accelerated release schedule, and it was pointed out that the default Python for both debian unstable and Redhat 7.1 beta was still 1.5.2. Have *you* upgraded to Python 2.0? If not, why not? * Unit testing (again) * The question of replacing Python's hoary old regrtest-driven test suite with something more modern came up again. Andrew Kuchling enquired whether the issue was to be decided by voting or BDFL fiat: 
                              
                              Guido obliged: 
                              
                              There was then some discussion of what changes people would like to see made in the standard-Python-unit-testing-framework-elect (PyUnit) before they would be happy with it. Cheers, M. -- Or here's an even simpler indicator of how much C++ sucks: Print out the C++ Public Review Document. Have someone hold it about three feet above your head and then drop it. Thus you will be enlightened. -- Thant Tessman From akuchlin at cnri.reston.va.us Thu Feb 15 15:52:49 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Thu, 15 Feb 2001 09:52:49 -0500 Subject: [Python-Dev] python-dev summaries? In-Reply-To: 
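Referring back to the string-methodizing item in the summary above, the conversion in question looks like this (a hedged, invented example, not one of the actual checkins):

    import string

    s = "  Monty Python  "

    # Old style: functions from the string module.
    print string.upper(string.strip(s))

    # New style: the equivalent string methods, as used after the sweep.
    print s.strip().upper()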
                              
                              ; from mwh21@cam.ac.uk on Thu, Feb 15, 2001 at 02:45:18PM +0000 References: 
                              
                              
                              Message-ID: <20010215095248.A5827@thrak.cnri.reston.va.us> On Thu, Feb 15, 2001 at 02:45:18PM +0000, Michael Hudson wrote: > use to equivalent string method. > > * Python's release schedule * I think an extra blank line before the section headings would separate the sections more clearly. > Skip Montanero raised some concerns about Python's accelerated ^^^^^^^^^ Montanaro Beyond those two things, great work! I say post it. (Don't forget to send copies to lwn at lwn.net and editors at linuxtoday.com.) Also, is it OK with you if I begin adding these summaries to the archive at www.amk.ca/python/dev/, suitably credited? --amk From guido at digicool.com Thu Feb 15 15:51:53 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 15 Feb 2001 09:51:53 -0500 Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: Your message of "Thu, 15 Feb 2001 04:07:38 EST." 
                              
                              References: 
                              
                              Message-ID: <200102151451.JAA29642@cj20424-a.reston1.va.home.com> > [Ka-Ping Yee] > > ... > > The only exceptions that don't currently conform, as far as i > > know, have to do with invalid literals. [Tim] > Pretty much, but nothing's *that* easy. > > Other examples: > > + If there are too many nested blocks, it raises SystemError(!). > > + MemoryError is raised if a dotted name is too long. > > + OverflowError is raised if a string is too long. > > Note that those don't have to do with syntax, they're arbitrary > implementation limits. So that's the rule: raise > > SystemError if something is bigger than 20 > MemoryError if it's bigger than 1000 > OverflowError if it's bigger than an int > > Couldn't be clearer 
                              
                              . > > + SystemErrors are raised in many other places in the role of internal > assertions failing. Those needn't be changed. Note that MemoryErrors are also raised whenever new objects are created, which happens all the time during the course of compilation (both Jeremy's symbol table code and of course code objects). These needn't be changed either. --Guido van Rossum (home page: http://www.python.org/~guido/) From mwh21 at cam.ac.uk Thu Feb 15 17:20:48 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 15 Feb 2001 16:20:48 +0000 Subject: [Python-Dev] python-dev summaries? In-Reply-To: Andrew Kuchling's message of "Thu, 15 Feb 2001 09:52:49 -0500" References: 
                              
                              
                              <20010215095248.A5827@thrak.cnri.reston.va.us> Message-ID: 
                              
                              Andrew Kuchling 
                              
                              writes: > On Thu, Feb 15, 2001 at 02:45:18PM +0000, Michael Hudson wrote: > > use to equivalent string method. > > > > * Python's release schedule * > > I think an extra blank line before the section headings would separate > the sections more clearly. > > > Skip Montanero raised some concerns about Python's accelerated > ^^^^^^^^^ Montanaro > > Beyond those two things, great work! I say post it. (Don't forget to > send copies to lwn at lwn.net and editors at linuxtoday.com.) Thanks! I meant to check Skip's name (duh! sorry!). Changes made. > Also, is it OK with you if I begin adding these summaries to the > archive at www.amk.ca/python/dev/, suitably credited? Yeah, sure. I was going to stick them on my pages, but it probably makes more sense to keep them where people already look for them. Do you want me to send you the html-ized version I've cobbled together? (and got to validate as xhtml 1.0 strict...). Cheers, M. -- 48. The best book on programming for the layman is "Alice in Wonderland"; but that's because it's the best book on anything for the layman. -- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html From mwh21 at cam.ac.uk Thu Feb 15 17:55:35 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: Thu, 15 Feb 2001 16:55:35 +0000 (GMT) Subject: [Python-Dev] python-dev summary, 2001-02-01 - 2001-02-15 Message-ID: 
                              
                              It is with some trepidation that I post: This is a summary of traffic on the python-dev mailing list between Feb 1 and Feb 14 2001. It is intended to inform the wider Python community of ongoing developments. To comment, just post to python-list at python.org or comp.lang.python in the usual way. Give your posting a meaningful subject line, and if it's about a PEP, include the PEP number (e.g. Subject: PEP 201 - Lockstep iteration) All python-dev members are interested in seeing ideas discussed by the community, so don't hesitate to take a stance on a PEP if you have an opinion. This is the first python-dev summary written by Michael Hudson. Previous summaries were written by Andrew Kuchling and can be found at: 
                               New summaries will probably appear at: 
                               When I get round to it. Posting distribution (with apologies to mbm) Number of articles in summary: 498 80 | ]|[ | ]|[ | ]|[ | ]|[ | ]|[ ]|[ 60 | ]|[ ]|[ | ]|[ ]|[ | ]|[ ]|[ | ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ 40 | ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ 20 | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ 0 +-029-067-039-037-080-048-020-009-040-021-008-030-043-027 Thu 01| Sat 03| Mon 05| Wed 07| Fri 09| Sun 11| Tue 13| Fri 02 Sun 04 Tue 06 Thu 08 Sat 10 Mon 12 Wed 14 A fairly busy fortnight on python-dev, falling just short of five hundred articles. Much of this is making ready for the Python 2.1 release, but people's horizons are beginning to rise above the present. * Python 2.1a2 * Python 2.1a2 was released on Feb. 2. One of the more controversial changes was the disallowing of "from module import *" at anything other than module level; this restriction was weakened after some slightly heated discussion on comp.lang.python. 
                              
                              It is possible that non-module-level "from module import *" will produce some kind of warning in Python 2.1 but this code has not yet been written. * Performance * Almost two weeks ago, we were talking about performance. Michael Hudson posted the results of an extended benchmarking session using Marc-Andre Lemburg's pybench suite: 
                              
                              to which the conclusion was that python 2.1 will be marginally slower than python 2.0, but it's not worth shouting about. The use of Vladimir Marangoz's obmalloc patch in some of the benchmarks sparked a discussion about whether this patch should be incorporated into Python 2.1. There was support from many for adding it on an opt-in basis, since when nothing has happened... * Imports on case-insensitive file systems * There was quite some discussion about how to handle imports on a case-insensitive file system (eg. on Windows). I didn't follow the details, but Tim Peters is on the case (sorry), so I'm confident it will get sorted out. * Sets & iterators * The Sets discussion rumbled on, moving into areas of syntax. The syntax: for key:value in dict: was proposed. Discussion went round and round for a while and moved on to more general iteration constructs, prompting Ka-Ping Yee to write a PEP entitled "iterators": 
                              
                              Please comment! Greg Wilson announced that BOFs for both sets and iterators have been arranged at the python9 conference in March: 
                              
                              * Stackless Python in Korea * Christian Tismer gave a presentation on stackless python to over 700 Korean pythonistas: 
                              
                              I think almost everyone was amazed and delighted to find that Python has such a fan base. Next stop, the world! * string methodizing the standard library * Eric Raymond clearly got bored one evening and marched through the standard library, converting almost all uses of the string module to use to equivalent string method. * Python's release schedule * Skip Montanaro raised some concerns about Python's accelerated release schedule, and it was pointed out that the default Python for both debian unstable and Redhat 7.1 beta was still 1.5.2. Have *you* upgraded to Python 2.0? If not, why not? * Unit testing (again) * The question of replacing Python's hoary old regrtest-driven test suite with something more modern came up again. Andrew Kuchling enquired whether the issue was to be decided by voting or BDFL fiat: 
                              
                              Guido obliged: 
                              
                              There was then some discussion of what changes people would like to see made in the standard-Python-unit-testing-framework-elect (PyUnit) before they would be happy with it. Cheers, M. From moshez at zadka.site.co.il Thu Feb 15 19:15:32 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Thu, 15 Feb 2001 20:15:32 +0200 (IST) Subject: [Python-Dev] Documentation Tools (was Unit Testing) In-Reply-To: 
                              
                              References: 
                              
                              Message-ID: <20010215181532.C7D2AA840@darjeeling.zadka.site.co.il> On Thu, 15 Feb 2001 10:07:11 -0000, "Andy Robinson" 
                              
                              wrote: > We need both these PEPs or something like them for this > to really fly. If Dinu wants to take over the PEP, it's fine by me. If Dinu wants me to keep the PEP, I'll be happy to work with him. > Dinu will be at IPC9 and happy to discuss > this Happy to talk to him, but *please* don't make it into a DevDay/BoF/something formal. We had one at IPC8, which merely served to waste time. Again, I reiterate my opinion: there will never be a consensus in doc-sig. It doesn't matter -- a horrible standard format is better then what we have today. -- "I'll be ex-DPL soon anyway so I'm |LUKE: Is Perl better than Python? looking for someplace else to grab power."|YODA: No...no... no. Quicker, -- Wichert Akkerman (on debian-private)| easier, more seductive. For public key, finger moshez at debian.org |http://www.{python,debian,gnu}.org From ping at lfw.org Thu Feb 15 20:36:10 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Thu, 15 Feb 2001 11:36:10 -0800 (PST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <20010214164717.24AA1A840@darjeeling.zadka.site.co.il> Message-ID: 
                              
                              On Wed, 14 Feb 2001, Moshe Zadka wrote: > As someone who works primarily in Perl nowadays, and hates it, I must say > that as horrible and unaesthetic pod is, having > > perldoc package::module > > Just work is worth everything -- I've marked everything I wrote that way, > and I can't begin to explain how much it helps. I agree that this is important. > We had a DevDay, we have a sig, we have a PEP. None of this seems to help -- What are you talking about? There is an implementation and it works. I demonstrated the HTML one back at Python 8, and now there is a text-generating one in the CVS tree. -- ?!ng From mal at lemburg.com Thu Feb 15 23:20:45 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Thu, 15 Feb 2001 23:20:45 +0100 Subject: [Python-Dev] Adding pymalloc to 2.1b1 ?! (python-dev summary, 2001-02-01 - 2001-02-15) References: 
                              
                              Message-ID: <3A8C563D.D9BB6E3E@lemburg.com> Michael Hudson wrote: > > The use > of Vladimir Marangoz's obmalloc patch in some of the benchmarks > sparked a discussion about whether this patch should be incorporated > into Python 2.1. There was support from many for adding it on an > opt-in basis, since when nothing has happened... ... I'm still waiting on BDFL pronouncement on this one. The plan was to check it in for beta1 on an opt-in basis (Vladimir has written the patch this way). -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From fredrik at effbot.org Thu Feb 15 23:40:03 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Thu, 15 Feb 2001 23:40:03 +0100 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) References: 
                              
Message-ID: <000801c097a0$41397520$e46940d5@hagrid> can anyone explain why it's a good idea to have totally incomprehensible stuff like

    __all__ = locals().keys()
    for _i in range(len(__all__)-1,-1,-1):
        if __all__[_i][0] == "_":
            del __all__[_i]
    del _i

in my code? Annoyed /F From skip at mojam.com Fri Feb 16 00:13:09 2001 From: skip at mojam.com (Skip Montanaro) Date: Thu, 15 Feb 2001 17:13:09 -0600 (CST) Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) In-Reply-To: <000801c097a0$41397520$e46940d5@hagrid> References:
                              
                              <000801c097a0$41397520$e46940d5@hagrid> Message-ID: <14988.25221.294028.413733@beluga.mojam.com> Fredrik> can anyone explain why it's a good idea to have totally Fredrik> incomprehensible stuff like Fredrik> __all__ = locals().keys() Fredrik> for _i in range(len(__all__)-1,-1,-1): Fredrik> if __all__[_i][0] == "_": Fredrik> del __all__[_i] Fredrik> del _i Fredrik> in my code? Please don't shoot the messenger... ;-) In modules that looked to me to contain nothing by constants, I used the above technique to simply load all the modules symbols into __all__, then delete any that began with an underscore. If there is no reason to have an __all__ list for such modules, feel free to remove the code, just remember to also delete the check_all() call in Lib/test/test___all__.py. Skip From guido at digicool.com Fri Feb 16 00:28:03 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 15 Feb 2001 18:28:03 -0500 Subject: [Python-Dev] Adding pymalloc to 2.1b1 ?! (python-dev summary, 2001-02-01 - 2001-02-15) In-Reply-To: Your message of "Thu, 15 Feb 2001 23:20:45 +0100." <3A8C563D.D9BB6E3E@lemburg.com> References: 
                              
                              <3A8C563D.D9BB6E3E@lemburg.com> Message-ID: <200102152328.SAA32032@cj20424-a.reston1.va.home.com> > Michael Hudson wrote: > > > > The use > > of Vladimir Marangoz's obmalloc patch in some of the benchmarks > > sparked a discussion about whether this patch should be incorporated > > into Python 2.1. There was support from many for adding it on an > > opt-in basis, since when nothing has happened... > > ... I'm still waiting on BDFL pronouncement on this one. The plan > was to check it in for beta1 on an opt-in basis (Vladimir has written > the patch this way). > > -- > Marc-Andre Lemburg If it is truly opt-in (supposedly a configure option?), I'm all for it. I recall vaguely though that Jeremy or Tim thought that the patch touches lots of code even when one doesn't opt in. That was a no-no so close before the a2 release. Anybody who actually looked at the code got an opinion on that now? The b1 release is planned for March 1st, or exactly two weeks! --Guido van Rossum (home page: http://www.python.org/~guido/) From jeremy at alum.mit.edu Fri Feb 16 00:34:31 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Thu, 15 Feb 2001 18:34:31 -0500 (EST) Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) In-Reply-To: <14988.25221.294028.413733@beluga.mojam.com> References: 
                              
                              <000801c097a0$41397520$e46940d5@hagrid> <14988.25221.294028.413733@beluga.mojam.com> Message-ID: <14988.26503.13571.878316@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "SM" == Skip Montanaro 
                              
                              writes: Fredrik> can anyone explain why it's a good idea to have totally Fredrik> incomprehensible stuff like Fredrik> __all__ = locals().keys() for _i in Fredrik> range(len(__all__)-1,-1,-1): if __all__[_i][0] == "_": del Fredrik> __all__[_i] del _i Fredrik> in my code? SM> Please don't shoot the messenger... ;-) SM> In modules that looked to me to contain nothing by constants, I SM> used the above technique to simply load all the modules symbols SM> into __all__, then delete any that began with an underscore. If SM> there is no reason to have an __all__ list for such modules, SM> feel free to remove the code, just remember to also delete the SM> check_all() call in Lib/test/test___all__.py. If __all__ is needed (still not sure what it's for :-), wouldn't the following one-liner be clearer: __all__ = [name for name in locals.keys() if not name.startswith('_')] Jeremy From guido at digicool.com Fri Feb 16 00:38:04 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 15 Feb 2001 18:38:04 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) In-Reply-To: Your message of "Thu, 15 Feb 2001 23:40:03 +0100." <000801c097a0$41397520$e46940d5@hagrid> References: 
                              
                              <000801c097a0$41397520$e46940d5@hagrid> Message-ID: <200102152338.SAA32099@cj20424-a.reston1.va.home.com> > can anyone explain why it's a good idea to have totally > incomprehensible stuff like > > __all__ = locals().keys() > for _i in range(len(__all__)-1,-1,-1): > if __all__[_i][0] == "_": > del __all__[_i] > del _i > > in my code? Ask Skip. :-) This doesn't exclude anything that would be included in import* by default, so I'm not sure I see the point either. As for clarity, it would've been nice if there was a comment. If it is decided that it's a good idea to have __all__ even when it doesn't add any new information (I'm not so sure), here's a cleaner way to spell it, which also gets the names in alphabetical order: # Set __all__ to the list of global names not starting with underscore: __all__ = filter(lambda s: s[0]!='_', dir()) --Guido van Rossum (home page: http://www.python.org/~guido/) From mwh21 at cam.ac.uk Fri Feb 16 00:40:49 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 15 Feb 2001 23:40:49 +0000 Subject: [Python-Dev] Adding pymalloc to 2.1b1 ?! (python-dev summary, 2001-02-01 - 2001-02-15) In-Reply-To: Guido van Rossum's message of "Thu, 15 Feb 2001 18:28:03 -0500" References: 
                              
                              <3A8C563D.D9BB6E3E@lemburg.com> <200102152328.SAA32032@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              Guido van Rossum 
                              
                              writes: > > Michael Hudson wrote: > > > > > > The use > > > of Vladimir Marangoz's obmalloc patch in some of the benchmarks > > > sparked a discussion about whether this patch should be incorporated > > > into Python 2.1. There was support from many for adding it on an > > > opt-in basis, since when nothing has happened... > > > > ... I'm still waiting on BDFL pronouncement on this one. The plan > > was to check it in for beta1 on an opt-in basis (Vladimir has written > > the patch this way). > > > > -- > > Marc-Andre Lemburg > > If it is truly opt-in (supposedly a configure option?), I'm all for > it. It is very much opt-in. > I recall vaguely though that Jeremy or Tim thought that the patch > touches lots of code even when one doesn't opt in. That was a no-no > so close before the a2 release. Anybody who actually looked at the > code got an opinion on that now? I suggest looking at the patch. Not at the code, but what it does as a diff: 1) Add a file Objects/obmalloc.c 2) Add stuff to configure.in & config.h to detect the --with-pymalloc argument to ./configure 3) Conditionally #include "obmalloc.h" in Objects/object.c if WITH_PYMALLOC is #defined 4) Conditionally #define the variables in Include/objimpl.h to #define the #defines needed to override the memory imiplementation if WITH_PYMALLOC is #defined And *that's it*. That's not my definition of "touches a lot of code". Cheers, M. -- Or here's an even simpler indicator of how much C++ sucks: Print out the C++ Public Review Document. Have someone hold it about three feet above your head and then drop it. Thus you will be enlightened. -- Thant Tessman From greg at cosc.canterbury.ac.nz Fri Feb 16 00:41:53 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Fri, 16 Feb 2001 12:41:53 +1300 (NZDT) Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) In-Reply-To: <000801c097a0$41397520$e46940d5@hagrid> Message-ID: <200102152341.MAA06568@s454.cosc.canterbury.ac.nz> Fredrik Lundh 
                              
                              : > for _i in range(len(__all__)-1,-1,-1): On a slightly wider topic, it might be nice to have a clearer way of iterating backwards over a range. How about a function such as revrange(n1, n2) which would produce the same sequence of numbers as range(n1, n2) but in the opposite order. (Plus corresponding xrevrange() of course.) Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg at cosc.canterbury.ac.nz +--------------------------------------+ From guido at digicool.com Fri Feb 16 00:45:54 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 15 Feb 2001 18:45:54 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) In-Reply-To: Your message of "Thu, 15 Feb 2001 17:13:09 CST." <14988.25221.294028.413733@beluga.mojam.com> References: 
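A hedged sketch of the revrange() idea floated above (a hypothetical helper, not an actual builtin then or now; the optional step argument is omitted for brevity):

    def revrange(n1, n2=None):
        """Same numbers as range(n1[, n2]), in the opposite order."""
        if n2 is None:
            n1, n2 = 0, n1
        seq = range(n1, n2)
        seq.reverse()
        return seq

    print revrange(5)       # [4, 3, 2, 1, 0]
    print revrange(2, 6)    # [5, 4, 3, 2]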
                              
                              <000801c097a0$41397520$e46940d5@hagrid> <14988.25221.294028.413733@beluga.mojam.com> Message-ID: <200102152345.SAA32204@cj20424-a.reston1.va.home.com> > Fredrik> can anyone explain why it's a good idea to have totally > Fredrik> incomprehensible stuff like > > Fredrik> __all__ = locals().keys() > Fredrik> for _i in range(len(__all__)-1,-1,-1): > Fredrik> if __all__[_i][0] == "_": > Fredrik> del __all__[_i] > Fredrik> del _i > > Fredrik> in my code? > > Please don't shoot the messenger... ;-) I'm not sure you qualify as the messenger, Skip. You seem to be taking this __all__ thing way beyond where I thought it needed to go. > In modules that looked to me to contain nothing by constants, I used the > above technique to simply load all the modules symbols into __all__, then > delete any that began with an underscore. If there is no reason to have an > __all__ list for such modules, feel free to remove the code, just remember > to also delete the check_all() call in Lib/test/test___all__.py. Rhetorical question: why do we have __all__? In my mind we have it so that "from M import *" doesn't import spurious stuff that happens to be a global in M but isn't really intended for export from M. Typical example: Tkinter is commonly used in "from Tkinter import *" mode, but accidentally exports a few standard modules like sys. Adding __all__ just for the sake of having __all__ defined doesn't seem to me a good use of anybody's time; since "from M import *" already skips names starting with '_', there's no reason to have __all__ defined in modules where it is computed to be exactly the globals that don't start with '_'... Also, it's not immediately clear what test___all__.py tests. It seems that it just checks that the __all__ attribute exists and then that "from M import *" imports exactly the names in __all__. Since that's how it's implemented, what does this really test? I guess it tests that the import mechanism doesn't screw up. It could screw up if it was replaced by a custom import hack that hasn't been taught to look for __all__ yet, for example, and it's useful if this is caught. But why do we need to import every module under the sun that happens to define __all__ to check that? --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Fri Feb 16 00:48:01 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 15 Feb 2001 18:48:01 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) In-Reply-To: Your message of "Thu, 15 Feb 2001 18:34:31 EST." <14988.26503.13571.878316@w221.z064000254.bwi-md.dsl.cnc.net> References: 
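To make the Tkinter point concrete, here is a hedged, self-contained demonstration (the module name demo_all and its contents are invented; it is built in memory rather than on disk) of what __all__ adds on top of the default underscore rule:

    # The throwaway module's globals include an imported module (sys), a
    # public function, and a private helper; see what "import *" pulls in.
    import imp, sys

    mod = imp.new_module("demo_all")
    exec ("import sys            # would leak into 'import *' without __all__\n"
          "__all__ = ['useful']\n"
          "def useful(): return 42\n"
          "def _helper(): pass\n") in mod.__dict__
    sys.modules["demo_all"] = mod

    ns = {}
    exec "from demo_all import *" in ns
    names = [k for k in ns.keys() if k[:2] != "__"]
    names.sort()
    print names   # ['useful'] -- sys is hidden by __all__, _helper by the '_' rule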
                              
                              <000801c097a0$41397520$e46940d5@hagrid> <14988.25221.294028.413733@beluga.mojam.com> <14988.26503.13571.878316@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <200102152348.SAA32223@cj20424-a.reston1.va.home.com> > If __all__ is needed (still not sure what it's for :-), wouldn't the > following one-liner be clearer: > > __all__ = [name for name in locals.keys() if not name.startswith('_')] But that shouldn't be used in /F's modules, because he wants them to be 1.5 compatible. Anyway, filter(lambda s: s[0]!='_', dir()) is shorter, and you prove that it isn't faster. :-) --Guido van Rossum (home page: http://www.python.org/~guido/) From jeremy at alum.mit.edu Fri Feb 16 00:53:46 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Thu, 15 Feb 2001 18:53:46 -0500 (EST) Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) In-Reply-To: <200102152348.SAA32223@cj20424-a.reston1.va.home.com> References: 
                              
                              <000801c097a0$41397520$e46940d5@hagrid> <14988.25221.294028.413733@beluga.mojam.com> <14988.26503.13571.878316@w221.z064000254.bwi-md.dsl.cnc.net> <200102152348.SAA32223@cj20424-a.reston1.va.home.com> Message-ID: <14988.27658.989073.771498@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "GvR" == Guido van Rossum 
                              
                              writes: >> If __all__ is needed (still not sure what it's for :-), wouldn't >> the following one-liner be clearer: >> >> __all__ = [name for name in locals.keys() if not >> name.startswith('_')] GvR> But that shouldn't be used in /F's modules, because he wants GvR> them to be 1.5 compatible. Anyway, filter(lambda s: s[0]!='_', GvR> dir()) is shorter, and you prove that it isn't faster. :-) Well, if he wants it to work with 1.5.2, that's one thing. But the list comprehensions is clear are short done your way: __all__ = [s for s in dir() if s[0] != '_'] Jeremy From guido at digicool.com Fri Feb 16 00:54:12 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 15 Feb 2001 18:54:12 -0500 Subject: [Python-Dev] Adding pymalloc to 2.1b1 ?! (python-dev summary, 2001-02-01 - 2001-02-15) In-Reply-To: Your message of "15 Feb 2001 23:40:49 GMT." 
                              
                              References: 
                              
                              <3A8C563D.D9BB6E3E@lemburg.com> <200102152328.SAA32032@cj20424-a.reston1.va.home.com> 
                              
                              Message-ID: <200102152354.SAA32281@cj20424-a.reston1.va.home.com> > > If it is truly opt-in (supposedly a configure option?), I'm all for > > it. > > It is very much opt-in. > > > I recall vaguely though that Jeremy or Tim thought that the patch > > touches lots of code even when one doesn't opt in. That was a no-no > > so close before the a2 release. Anybody who actually looked at the > > code got an opinion on that now? > > I suggest looking at the patch. Not at the code, but what it does as > a diff: > > 1) Add a file Objects/obmalloc.c > 2) Add stuff to configure.in & config.h to detect the --with-pymalloc > argument to ./configure > 3) Conditionally #include "obmalloc.h" in Objects/object.c if > WITH_PYMALLOC is #defined > 4) Conditionally #define the variables in Include/objimpl.h to #define > the #defines needed to override the memory imiplementation if > WITH_PYMALLOC is #defined > > And *that's it*. That's not my definition of "touches a lot of code". OK, I just looked, and I agree. BTW, for those who want to look, the URL is: http://sourceforge.net/patch/?func=detailpatch&patch_id=101104&group_id=5470 This is currently assigned to Barry. Barry, can you see if this is truly fit for inclusion? Or am I missing something? Note that there's a companion patch that adds a memory profiler: http://sourceforge.net/patch/?func=detailpatch&patch_id=101229&group_id=5470 Should this also be applied? Is there a reason why it shouldn't? --Guido van Rossum (home page: http://www.python.org/~guido/) From tim_one at email.msn.com Fri Feb 16 01:04:32 2001 From: tim_one at email.msn.com (Tim Peters) Date: Thu, 15 Feb 2001 19:04:32 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) In-Reply-To: <000801c097a0$41397520$e46940d5@hagrid> Message-ID: 
                              
                              [/F] > can anyone explain why it's a good idea to have totally > incomprehensible stuff like > > __all__ = locals().keys() > for _i in range(len(__all__)-1,-1,-1): > if __all__[_i][0] == "_": > del __all__[_i] > del _i > > in my code? I'm unclear on why __all__ was introduced, but if it's gonna be there I'd suggest: __all__ = [k for k in dir() if k[0] not in "_["] del k If anyone was exporting the name "k", they should be shot anyway 
                              
. Oh, ya, "[" has to be excluded because the listcomp itself temporarily creates an artificial name beginning with "[".

>>> [k for k in dir()]
['[1]', '__builtins__', '__doc__', '__name__']
  ^^^^^
>>> dir() # but now it's gone
['__builtins__', '__doc__', '__name__', 'k']
>>>

From guido at digicool.com Fri Feb 16 01:12:33 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 15 Feb 2001 19:12:33 -0500 Subject: [Python-Dev] Re: whitespace normalization In-Reply-To: Your message of "Thu, 15 Feb 2001 15:56:41 PST."
                              
                              References: 
                              
                              Message-ID: <200102160012.TAA32395@cj20424-a.reston1.va.home.com> Tim, I've seen a couple of checkins lately from you like this: > Modified Files: > random.py robotparser.py > Log Message: > Whitespace normalization. Apparently you watch checkins to the std library and run reindent on changed modules occasionally. Would it make sense to check in a test case into the test suite that verifies that all std modules are reindent fixpoints, so that whoever changes a module gets a chance to catch this before they check in? --Guido van Rossum (home page: http://www.python.org/~guido/) From tim_one at email.msn.com Fri Feb 16 01:25:26 2001 From: tim_one at email.msn.com (Tim Peters) Date: Thu, 15 Feb 2001 19:25:26 -0500 Subject: [Python-Dev] Adding pymalloc to 2.1b1 ?! (python-dev summary, 2001-02-01 - 2001-02-15) In-Reply-To: <200102152328.SAA32032@cj20424-a.reston1.va.home.com> Message-ID: 
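Going back to Guido's suggestion above, a hedged stand-in for such a check (this is not reindent.py's real logic -- just a rough scan for hard tabs and trailing whitespace that a test could run over Lib/*.py to flag files that are obviously not normalized):

    import glob, os

    suspects = []
    for path in glob.glob(os.path.join("Lib", "*.py")):
        for line in open(path).readlines():
            body = line
            if body[-1:] == "\n":
                body = body[:-1]
            if "\t" in body or body != body.rstrip():
                suspects.append(path)
                break

    for path in suspects:
        print "not whitespace-normalized:", path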
                              
                              [Tim] > If it is truly opt-in (supposedly a configure option?), I'm all for > it. I recall vaguely though that Jeremy or Tim thought that the patch > touches lots of code even when one doesn't opt in. Nope, not us. The patch is utterly harmless if not enabled, but dangerous if enabled (because it doesn't implement any critical sections -- see gobs of pre-release email about that). From tim_one at email.msn.com Fri Feb 16 01:38:00 2001 From: tim_one at email.msn.com (Tim Peters) Date: Thu, 15 Feb 2001 19:38:00 -0500 Subject: [Python-Dev] Re: whitespace normalization In-Reply-To: <200102160012.TAA32395@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              Your @Home email is working?! I'm back on MSN. @Home is up, but times out on almost everything for me. > I've seen a couple of checkins lately from you like this: > > > Modified Files: > > random.py robotparser.py > > Log Message: > > Whitespace normalization. > > Apparently you watch checkins to the std library and run reindent on > changed modules occasionally. I run reindent on *all* std Library modules once or twice a week: if a file is a reindent fixed-point, reindent leaves it entirely alone, so no spurious checkins are generated. That is, reindent saves "before" and "after" versions of the entire module in memory, and doesn't even write a new file if before == after. > Would it make sense to check in a test case into the test suite that > verifies that all std modules are reindent fixpoints, so that whoever > changes a module gets a chance to catch this before they check in? Don't think it's worth the bother: running reindent over everything in Lib/ takes well over 10 seconds on my 866MHz box, so it would end up getting skipped by people anway. More suitable for an infrequent cron job, yes? From tim_one at email.msn.com Fri Feb 16 01:44:53 2001 From: tim_one at email.msn.com (Tim Peters) Date: Thu, 15 Feb 2001 19:44:53 -0500 Subject: [Python-Dev] Re: whitespace normalization In-Reply-To: <200102160012.TAA32395@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              > I've seen a couple of checkins lately from you like this: > > > Modified Files: > > random.py robotparser.py > > Log Message: > > Whitespace normalization. > > Apparently you watch checkins to the std library and run reindent on > changed modules occasionally. I run reindent on *all* std Library modules once or twice a week: if a file is a reindent fixed-point, reindent leaves it entirely alone, so no spurious checkins are generated. That is, reindent saves "before" and "after" versions of the entire module in memory, and doesn't even write a new file if before == after. > Would it make sense to check in a test case into the test suite that > verifies that all std modules are reindent fixpoints, so that whoever > changes a module gets a chance to catch this before they check in? Don't think it's worth the bother: running reindent over everything in Lib/ takes well over 10 seconds on my 866MHz box, so it would end up getting skipped by people anway. More suitable for an infrequent cron job, yes? BTW, there are still many Python files in the std distribution that haven't been run thru reindent yet. For example, I'm uncomfortable doing anything in Lib/plat-irix6, etc: don't have the platform, and no test suite anyway. Put out a call for others to clean up directories they care about, but nobody bit. From skip at mojam.com Fri Feb 16 02:05:49 2001 From: skip at mojam.com (Skip Montanaro) Date: Thu, 15 Feb 2001 19:05:49 -0600 (CST) Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) In-Reply-To: <200102152345.SAA32204@cj20424-a.reston1.va.home.com> References: 
                              
                              <000801c097a0$41397520$e46940d5@hagrid> <14988.25221.294028.413733@beluga.mojam.com> <200102152345.SAA32204@cj20424-a.reston1.va.home.com> Message-ID: <14988.31981.365476.245762@beluga.mojam.com> Guido> Adding __all__ just for the sake of having __all__ defined Guido> doesn't seem to me a good use of anybody's time; since "from M Guido> import *" already skips names starting with '_', there's no Guido> reason to have __all__ defined in modules where it is computed to Guido> be exactly the globals that don't start with '_'... Sounds fine by me. I'll remove it from any modules like sre_constants that don't import anything else. Guido> Also, it's not immediately clear what test___all__.py tests. hmmm... There was a reason. If I think about it long enough I may actually remember what it was. I definitely needed it for the first few modules to make sure I was doing things right. I eventually got into this mechanical mode of adding __all__ lists, then adding a check_all call to the test___all__ module. In cases where I didn't construct __all__ correctly (say, somehow wound up with two copies of "xyz" in the list) it caught that. Okay, so I'm back to the drawing board on this. The rationale for defining __all__ is to prevent namespace pollution when someone executes an import *. I guess definition of __all__ should be restricted to modules that import other modules and don't explictly take other pains to clean up their namespace. I suspect test___all__.py could/should be removed as well. Skip From skip at mojam.com Fri Feb 16 02:10:37 2001 From: skip at mojam.com (Skip Montanaro) Date: Thu, 15 Feb 2001 19:10:37 -0600 (CST) Subject: [Python-Dev] Re: whitespace normalization In-Reply-To: 
                              
                              References: <200102160012.TAA32395@cj20424-a.reston1.va.home.com> 
                              
                              Message-ID: <14988.32269.199812.169538@beluga.mojam.com> Tim> Don't think it's worth the bother: running reindent over everything Tim> in Lib/ takes well over 10 seconds on my 866MHz box, so it would Tim> end up getting skipped by people anway. More suitable for an Tim> infrequent cron job, yes? On Unix at least, you could simply eliminate it from the quicktest target to speed up most test runs. Dunno how you'd avoid executing it on other platforms. S From barry at digicool.com Fri Feb 16 05:12:04 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Thu, 15 Feb 2001 23:12:04 -0500 Subject: [Python-Dev] Adding pymalloc to 2.1b1 ?! (python-dev summary, 2001-02-01 - 2001-02-15) References: 
                              
                              <3A8C563D.D9BB6E3E@lemburg.com> <200102152328.SAA32032@cj20424-a.reston1.va.home.com> 
                              
                              <200102152354.SAA32281@cj20424-a.reston1.va.home.com> Message-ID: <14988.43156.191949.342241@anthem.wooz.org> >>>>> "GvR" == Guido van Rossum 
                              
                              writes: GvR> This is currently assigned to Barry. Barry, can you see if GvR> this is truly fit for inclusion? Or am I missing something? I think I was wary of applying it without the chance to run it through Insure when it was enabled. I can put that on my list of things to do for beta1. -Barry From tim.one at home.com Fri Feb 16 06:59:42 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 16 Feb 2001 00:59:42 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: 
                              
                              Message-ID: 
                              
                              [Moshe Zadka] > We had a DevDay, we have a sig, we have a PEP. None of this > seems to help -- [Ka-Ping Yee] > What are you talking about? There is an implementation and it works. There are many implementations "that work". But we haven't picked one. What's the standard markup for Python docstrings? There isn't! That's what he's talking about. This is especially bizarre because it's been clear for *years* that some variant of structured text would win in the end, but nobody playing the game likes all the details of anyone else's set of (IMO, all overly elaborate) conventions, so the situation for users is no better now than it was the day docstrings were added. Tibs's latest (and ongoing) attempt to reach a consensus can be found here: http://www.tibsnjoan.demon.co.uk/docutils/STpy.html The status of its implementation here: http://www.tibsnjoan.demon.co.uk/docutils/status.html Not close yet. In the meantime, Perlers have been "suffering" with a POD spec about 3% the size of the proposed Python spec; I guess their only consolation is that POD docs have been in universal use for years 
                              
                              . while-ours-is-that-we'll-get-to-specify-non-breaking-spaces-someday- despite-that-not-1-doc-in-100-needs-them-ly y'rs - tim From tim.one at home.com Fri Feb 16 07:34:38 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 16 Feb 2001 01:34:38 -0500 Subject: [Python-Dev] Documentation Tools (was Unit Testing) In-Reply-To: 
                              
                              Message-ID: 
                              
                              [Andy Robinson] > ... > And Dinu, why don't you contact the doc-sig > administrator and find out why your membership is > blocked :-) That's Fred Drake, who I've copied on this. Dinu and Fred should talk directly if there's a problem. Membership in the doc-sig is open, and Fred couldn't block it even if he wanted to. http://mail.python.org/mailman/listinfo/doc-sig/ if-that-doesn't-work-there's-a-barry-bug-ly y'rs - tim PS: according to http://mail.python.org/mailman/roster/doc-sig Dinu is already a member. From ping at lfw.org Fri Feb 16 07:30:59 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Thu, 15 Feb 2001 22:30:59 -0800 (PST) Subject: [Python-Dev] Documentation Tools In-Reply-To: 
                              
                              Message-ID: 
                              
                              On Fri, 16 Feb 2001, Tim Peters wrote: > [Moshe Zadka] > > We had a DevDay, we have a sig, we have a PEP. None of this > > seems to help -- > > [Ka-Ping Yee] > > What are you talking about? There is an implementation and it works. > > There are many implementations "that work". But we haven't picked one. > What's the standard markup for Python docstrings? There isn't! That's what > he's talking about. That's exactly the point i'm trying to make. There isn't any markup format enforced by pydoc, precisely because it isn't worth the strife. Moshe seemed to imply that the set of deployable documentation tools was empty, and i take issue with that. His post also had an tone of hopelessness about the topic that i wanted to counter immediately. The fact that pydoc doesn't have a way to italicize doesn't make it a non-solution -- it's a perfectly acceptable solution! Fancy formatting features can come later. > This is especially bizarre because it's been clear for *years* that some > variant of structured text would win in the end, but nobody playing the game > likes all the details of anyone else's set of (IMO, all overly elaborate) > conventions, so the situation for users is no better now than it was the day > docstrings were added. > > Tibs's latest (and ongoing) attempt to reach a consensus can be found here: > > http://www.tibsnjoan.demon.co.uk/docutils/STpy.html > > The status of its implementation here: > > http://www.tibsnjoan.demon.co.uk/docutils/status.html > > Not close yet. The design and implementation of a standard structured text syntax is emphatically *not* a prerequisite for a useful documentation tool. I agree that it may be nice, and i certainly applaud Tony's efforts, but we should not wait for it. -- ?!ng From barry at digicool.com Fri Feb 16 07:40:34 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 16 Feb 2001 01:40:34 -0500 Subject: [Python-Dev] Documentation Tools (was Unit Testing) References: 
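To illustrate the point about pydoc accepting plain docstrings, a hedged sketch (the function is invented; the pydoc.getdoc() call reflects the tool as it later shipped in the 2.1 standard library):

    import pydoc

    def frobnicate(x):
        """Return x, frobnicated.  Ordinary prose -- no special markup."""
        return x

    print pydoc.getdoc(frobnicate)   # the cleaned-up docstring text
    # Run as a script on a module name, the same tool renders whole modules
    # as text or HTML pages built from nothing but their docstrings.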
                              
                              
                              Message-ID: <14988.52067.135016.782124@anthem.wooz.org> >>>>> "TP" == Tim Peters 
                              
                              writes: TP> if-that-doesn't-work-there's-a-barry-bug-ly y'rs - tim so-you-should-bug-barry-ly y'rs, -Barry From tim.one at home.com Fri Feb 16 09:05:10 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 16 Feb 2001 03:05:10 -0500 Subject: [Python-Dev] Windows/Cygwin/MacOSX import (was RE: python-dev summary, 2001-02-01 - 2001-02-15) In-Reply-To: 
                              
                              Message-ID: 
                              
                              [Michael Hudson] > ... > * Imports on case-insensitive file systems * > > There was quite some discussion about how to handle imports on a > case-insensitive file system (eg. on Windows). I didn't follow the > details, but Tim Peters is on the case (sorry), so I'm confident it > will get sorted out. You can be sure the whitespace will be consistent, anyway 
                              
                              . OK, this one sucks. It should really have gotten a PEP, but it cropped up too late in the release cycle and it can't be delayed (see below). Here's the scoop: file systems vary across platforms in whether or not they preserve the case of filenames, and in whether or not the platform C library file-opening functions do or don't insist on case-sensitive matches: case-preserving case-destroying +-------------------+------------------+ case-sensitive | most Unix flavors | brrrrrrrrrr | +-------------------+------------------+ case-insensitive | Windows | some unfortunate | | MacOSX HFS+ | network schemes | | Cygwin | | +-------------------+------------------+ In the upper left box, if you create "fiLe" it's stored as "fiLe", and only open("fiLe") will open it (open("file") will not, nor will the 14 other variations on that theme). In the lower right box, if you create "fiLe", there's no telling what it's stored as-- but most likely as "FILE" --and any of the 16 obvious variations on open("FilE") will open it. The lower left box is a mix: creating "fiLe" stores "fiLe" in the platform directory, but you don't have to match case when opening it; any of the 16 obvious variations on open("FILe") work. NONE OF THAT IS CHANGING! Python will continue to follow platform conventions wrt whether case is preserved when creating a file, and wrt whether open() requires a case-sensitive match. In practice, you should always code as if matches were case-sensitive, else your program won't be portable. But then you should also always open binary files with the "b" flag, and you don't do that either 
                              
What's proposed is to change the semantics of Python "import" statements, and there *only* in the lower left box. Support for MacOSX HFS+, and for Cygwin, is new in 2.1, so nothing is changing there. What's changing is Windows behavior. Here are the current rules for import on Windows:

1. Despite that the filesystem is case-insensitive, Python insists on a case-sensitive match. But not in the way the upper left box works: if you have two files, FiLe.py and file.py on sys.path, and do

       import file

   then if Python finds FiLe.py first, it raises a NameError. It does *not* go on to find file.py; indeed, it's impossible to import any but the first case-insensitive match on sys.path, and then only if case matches exactly in the first case-insensitive match.

2. An ugly exception: if the first case-insensitive match on sys.path is for a file whose name is entirely in upper case (FILE.PY or FILE.PYC or FILE.PYO), then the import silently grabs that, no matter what mixture of case was used in the import statement. This is apparently to cater to miserable old filesystems that really fit in the lower right box. But this exception is unique to Windows, for reasons that may or may not exist.
                              
3. And another exception: if the envar PYTHONCASEOK exists, Python silently grabs the first case-insensitive match of any kind.

So these Windows rules are pretty complicated, and neither match the Unix rules nor provide semantics natural for the native filesystem. That makes them hard to explain to Unix *or* Windows users. Nevertheless, they've worked fine for years, and in isolation there's no compelling reason to change them.

However, that was before the MacOSX HFS+ and Cygwin ports arrived. They also have case-preserving case-insensitive filesystems, but the people doing the ports despised the Windows rules. Indeed, a patch to make HFS+ act like Unix for imports got past a reviewer and into the code base, which incidentally made Cygwin also act like Unix (but this met the unbounded approval of the Cygwin folks, so they sure didn't complain -- they had patches of their own pending to do this, but the reviewer for those balked). At a higher level, we want to keep Python consistent, and I in particular want Python to do the same thing on *all* platforms with case-preserving case-insensitive filesystems. Guido too, but he's so sick of this argument don't ask him to confirm that <0.9 wink>.

The proposed new semantics for the lower left box:

A. If the PYTHONCASEOK envar exists, same as before: silently accept the first case-insensitive match of any kind; raise ImportError if none found.

B. Else search sys.path for the first case-sensitive match; raise ImportError if none found.

#B is the same rule as is used on Unix, so this will improve cross-platform portability. That's good. #B is also the rule the Mac and Cygwin folks want (and wanted enough to implement themselves, multiple times, which is a powerful argument in PythonLand). It can't cause any existing non-exceptional Windows import to fail, because any existing non-exceptional Windows import finds a case-sensitive match first in the path -- and it still will. An exceptional Windows import currently blows up with a NameError or ImportError, in which latter case it still will, or in which former case will continue searching, and either succeed or blow up with an ImportError.

#A is needed to cater to case-destroying filesystems mounted on Windows, and *may* also be used by people so enamored of "natural" Windows behavior that they're willing to set an envar to get it. That's their problem
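In Python terms, the combined effect of #A and #B is roughly the following sketch (illustrative only -- the real logic lives in the C import machinery, and the helper name here is made up):

    import os

    def find_module_file(name, path_dirs):
        # Illustrative helper (made-up name): locate name + ".py" along
        # path_dirs the way proposed rules A and B describe.
        want = name + ".py"
        caseok = os.environ.get("PYTHONCASEOK") is not None
        for d in path_dirs:
            try:
                entries = os.listdir(d)
            except os.error:
                continue
            for entry in entries:
                if entry == want:
                    # Rule B: insist the on-disk spelling matches exactly.
                    return os.path.join(d, entry)
                if caseok and entry.lower() == want.lower():
                    # Rule A: PYTHONCASEOK is set, so any case will do.
                    return os.path.join(d, entry)
        return None    # caller raises ImportError

With PYTHONCASEOK unset this is the same effective rule as the Unix box above; setting it buys back the old grab-anything behavior.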
                              
                              . I don't intend to implement #A for Unix too, but that's just because I'm not clear on how I *could* do so efficiently (I'm not going to slow imports under Unix just for theoretical purity). The potential damage is here: #2 (matching on ALLCAPS.PY) is proposed to be dropped. Case-destroying filesystems are a vanishing breed, and support for them is ugly. We're already supporting (and will continue to support) PYTHONCASEOK for their benefit, but they don't deserve multiple hacks in 2001. Flame at will. or-flame-at-tim-your-choice-ly y'rs - tim From martin at loewis.home.cs.tu-berlin.de Fri Feb 16 09:07:55 2001 From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis) Date: Fri, 16 Feb 2001 09:07:55 +0100 Subject: [Python-Dev] threads and gethostbyname Message-ID: <200102160807.f1G87tG01454@mira.informatik.hu-berlin.de> > Under 1.5.2, this worked without a hitch. However, under 2.0, the > same program tends to lock up on some computers. I'm not 100% sure > (it's a bit hard to debug), but it looks like a global lock > problem... > Any ideas? Is this supposed to work at all? Can you post a short snippet demonstrating how exactly you initiate the DNS lookup, and how exactly you get the result back? I think it ought to work, and I'm not aware of a change that could cause it to break in 2.0. So far, in all cases where people reported "Tkinter and threading deadlocks", it turned out that the deadlock was in the application. Regards, Martin From tim.one at home.com Fri Feb 16 09:16:12 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 16 Feb 2001 03:16:12 -0500 Subject: [Python-Dev] Documentation Tools In-Reply-To: 
                              
                              Message-ID: 
                              
                              [Ka-Ping Yee] > That's exactly the point i'm trying to make. There isn't any markup > format enforced by pydoc, precisely because it isn't worth the strife. > Moshe seemed to imply that the set of deployable documentation tools > was empty, and i take issue with that. His post also had an tone of > hopelessness about the topic that i wanted to counter immediately. Most programmers are followers in this matter, and I agree with Moshe on this point: until something is Officially Blessed, Python programmers will stay away from every gimmick in unbounded droves. I personally don't care whether markup is ever defined, because I already gave up on it. But I-- like you --won't wait forever for anything. We're not the norm. the-important-audience-isn't-pythondev-it's-pythonlist-ly y'rs - tim From mal at lemburg.com Fri Feb 16 09:56:15 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 16 Feb 2001 09:56:15 +0100 Subject: [Python-Dev] Adding pymalloc to 2.1b1 ?! (python-dev summary, 2001-02-01 - 2001-02-15) References: 
                              
                              <3A8C563D.D9BB6E3E@lemburg.com> <200102152328.SAA32032@cj20424-a.reston1.va.home.com> 
                              
                              <200102152354.SAA32281@cj20424-a.reston1.va.home.com> Message-ID: <3A8CEB2F.2C4B35A4@lemburg.com> Guido van Rossum wrote: > > > > If it is truly opt-in (supposedly a configure option?), I'm all for > > > it. > > > > It is very much opt-in. > > > > > I recall vaguely though that Jeremy or Tim thought that the patch > > > touches lots of code even when one doesn't opt in. That was a no-no > > > so close before the a2 release. Anybody who actually looked at the > > > code got an opinion on that now? > > > > I suggest looking at the patch. Not at the code, but what it does as > > a diff: > > > > 1) Add a file Objects/obmalloc.c > > 2) Add stuff to configure.in & config.h to detect the --with-pymalloc > > argument to ./configure > > 3) Conditionally #include "obmalloc.h" in Objects/object.c if > > WITH_PYMALLOC is #defined > > 4) Conditionally #define the variables in Include/objimpl.h to #define > > the #defines needed to override the memory imiplementation if > > WITH_PYMALLOC is #defined > > > > And *that's it*. That's not my definition of "touches a lot of code". > > OK, I just looked, and I agree. BTW, for those who want to look, the > URL is: > > http://sourceforge.net/patch/?func=detailpatch&patch_id=101104&group_id=5470 > > This is currently assigned to Barry. Barry, can you see if this is > truly fit for inclusion? Or am I missing something? > > Note that there's a companion patch that adds a memory profiler: > > http://sourceforge.net/patch/?func=detailpatch&patch_id=101229&group_id=5470 > > Should this also be applied? Is there a reason why it shouldn't? Since both patches must be explicitely enabled by a configure switch I'd suggest to apply both of them -- this will give them much more testing. In the long run, I think that using such an allocator is better than trying maintain free lists for each type seperatly. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From tim.one at home.com Fri Feb 16 10:24:41 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 16 Feb 2001 04:24:41 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <20010215090551.J4924@xs4all.nl> Message-ID: 
                              
                              [Thomas Wouters] > ... > I think what's missing is a library *tutorial*. How would that differ from the effbot guide (to the std library)? The Python (language) Tutorial can be pretty small, because the Python language is pretty small. But the libraries are massive, and growing, and are increasingly in the hands of people with no Unix experience, or even programming experience. So I suppose "tutorial" can mean many things. > The reference is exactly that, a reference, In part. In other parts (a good example is the profile docs) it's a lot of everything; in others it's so much "a reference" you can't figure out what it's saying unless you study the code (the pre-2.1 "random" docs sure come to mind). It's no more consistent in content level than anything else with umpteen authors. > and if we expand the reference we'll end up cursing it ourself, > should we ever need it. If the people who wanted "just a reference" were happy, I don't think David Beazley would have found an audience for his "Python Essential Reference". I can't argue about this, though, because nobody will ever agree. Guido doesn't want leisurely docs in the Reference Manual, nor does he like leisurely docs in docstrings. OTOH, those are what average and sub-average programmers *need*, and I write docs for them first, sneaking in examples when possible that I hope even experts will find pleasure in pondering. A good compromise by my lights-- and perhaps because I only care about the HTML docs, where "size" isn't apparent or a problem for navigation --would be to follow a terse but accurate reference with as many subsections as felt needed, with examples and rationale and tutorial material (has anyone ever figured how to use rexec or bastion from the docs? heh). But since nobody will agree with that either, I stick everything into docstrings and leave it to Fred to throw away the most useful parts for the "real docs" 
                              
> ...
> I remember when I'd finished the Python tutorial and wondered where to
> go next. I tried reading the library reference, but it was boring and
> most of it not interesting (since it isn't built up to go from
> useful/common -> rare, but just a list of all modules ordered by
> 'service'.)

Excellent point! I had the same question when I first learned Python, but at that time the libraries were maybe 10% of what's there now. I *still* didn't know where to go next. But I was pretty sure I didn't need the SGI multimedia libraries that occupied half the docs
                              
                              . > ... > I'll write the library tutorial once I finish the 'from-foo-import-* > considered harmful' chapter ;-) Hmm. Feel free to finish the listcomp PEP too 
                              
                              . From mal at lemburg.com Fri Feb 16 10:53:50 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 16 Feb 2001 10:53:50 +0100 Subject: [Python-Dev] Adding pymalloc to 2.1b1 ?! (python-dev summary, 2001-02-01 - 2001-02-15) References: 
                              
                              <3A8C563D.D9BB6E3E@lemburg.com> <200102152328.SAA32032@cj20424-a.reston1.va.home.com> 
                              
                              <200102152354.SAA32281@cj20424-a.reston1.va.home.com> <14988.43156.191949.342241@anthem.wooz.org> Message-ID: <3A8CF8AE.F819D17D@lemburg.com> "Barry A. Warsaw" wrote: > > >>>>> "GvR" == Guido van Rossum 
                              
                              writes: > > GvR> This is currently assigned to Barry. Barry, can you see if > GvR> this is truly fit for inclusion? Or am I missing something? > > I think I was wary of applying it without the chance to run it through > Insure when it was enabled. I can put that on my list of things to do > for beta1. That's a good idea, but why should it stop you from checking the patch in ? After all, it's opt-in, so people using it will know that they are building non-standard stuff. Perhaps we ought to add a note '(experimental)' to the configure flags ?! -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From thomas.heller at ion-tof.com Fri Feb 16 11:28:02 2001 From: thomas.heller at ion-tof.com (Thomas Heller) Date: Fri, 16 Feb 2001 11:28:02 +0100 Subject: [Python-Dev] Modulefinder? Message-ID: <02be01c09803$23fbc400$e000a8c0@thomasnotebook> Who is maintaining freeze/Modulefinder? I have some issues I would like to discuss... Thomas (Heller) From andy at reportlab.com Fri Feb 16 12:56:09 2001 From: andy at reportlab.com (Andy Robinson) Date: Fri, 16 Feb 2001 11:56:09 -0000 Subject: [Python-Dev] Documentation Tools (was Unit Testing) In-Reply-To: 
                              
                              Message-ID: 
                              
                              > That's Fred Drake, who I've copied on this. Dinu and Fred > should talk > directly if there's a problem. Membership in the doc-sig > is open, and Fred > couldn't block it even if he wanted to. Don't worry, it got resolved, and the problem was not of human origin :-) - Andy From thomas at xs4all.net Fri Feb 16 13:22:41 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Fri, 16 Feb 2001 13:22:41 +0100 Subject: [Python-Dev] Unit testing (again) In-Reply-To: 
                              
                              ; from tim.one@home.com on Fri, Feb 16, 2001 at 04:24:41AM -0500 References: <20010215090551.J4924@xs4all.nl> 
                              
                              Message-ID: <20010216132241.L4924@xs4all.nl> On Fri, Feb 16, 2001 at 04:24:41AM -0500, Tim Peters wrote: > [Thomas Wouters] > > ... > > I think what's missing is a library *tutorial*. > > How would that differ from the effbot guide (to the std library)? Not much, I bet, though I have to admit I haven't actually read the effbot guide ;-) It's just that going from the tutorial to the effbot guide (or any other book) is a fair-sized step, given that there are no pointers to them from the tutorial. I can't even *get* to the effbot guide from the documentation page (not with a decent number of clicks, anyway), not even through the PSA bookstore. > If the people who wanted "just a reference" were happy, I don't think David > Beazley would have found an audience for his "Python Essential Reference". Well, I never bought David's reference :) I only ever bought Programming Python, mostly because I saw it in a bookshop while I was in a post-tutorial, pre-usenet state ;) I'm also semi-permanently attached to the 'net, so the online docs at www.python.org are my best friend (next to docstrings, of course.) > A good compromise by my lights-- and perhaps because I only care about the > HTML docs, where "size" isn't apparent or a problem for navigation --would > be to follow a terse but accurate reference with as many subsections as felt > needed, with examples and rationale and tutorial material (has anyone ever > figured how to use rexec or bastion from the docs? heh). Definately +1 on that idea, well received or not it might be by others :) -- Thomas Wouters 
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From gregor at mediasupervision.de Fri Feb 16 13:34:16 2001 From: gregor at mediasupervision.de (Gregor Hoffleit) Date: Fri, 16 Feb 2001 13:34:16 +0100 Subject: Python 2.0 in Debian (was: Re: [Python-Dev] PEPS, version control, release intervals) In-Reply-To: <20010205164557.B990@thrak.cnri.reston.va.us>; from akuchlin@mems-exchange.org on Mon, Feb 05, 2001 at 04:45:57PM -0500 References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> Message-ID: <20010216133416.A19356@mediasupervision.de> On Mon, Feb 05, 2001 at 04:45:57PM -0500, Andrew Kuchling wrote: > A more critical issue might be why people haven't adopted 2.0 yet; > there seems little reason is there to continue using 1.5.2, yet I > still see questions on the XML-SIG, for example, from people who > haven't upgraded. Is it that Zope doesn't support it? Or that Red > Hat and Debian don't include it? This needs fixing, or else we'll > wind up with a community scattered among lots of different versions. Sorry, I only got aware of this discussion when I read the recent python-dev summary. Here's the official word from Debian about this: Debian's unstable tree currently includes both Python 1.5.2 as well as 2.0. Python 1.5.2 things are packaged as python-foo-bar, while Python 2.0 is available as python2-foo-bar. It's possible to install either 1.5.2 or 2.0 or both of them. I have described the reasons for this dual packaging in /usr/share/doc/python2/README.why-python2 (included below): it's mainly about (a) backwards compatibility and (b) the license issue (the questionable GPL compatibility of the new license). The current setup shows a preference for the Python 1.5.2 packages: python1.5.2 is linked to /usr/bin/python, while python2.0 is linked to /usr/bin/python2; a simple upgrade won't install Python 2.0, but will stick with Python 1.5.2. Furthermore, python-base is now a "standard" package in Debian woody (will be installed by default on most systems!), while python2-base is only "optional". I made this setup to enforce maintainers of other packages to check if their package was compatible with Python 2.0, and--important as well--if they thought that the license of their package was compatible with the new Python license. (a) is clearly only a temporary issue (with Zope being a big point currently) and will go away over the time. (b) is much more difficult, and won't simply vanish over time. I know that most of you guys are fed up with license discussions. Still, I dare to bring this back to your attentions: Most people seem to ignore the fact that the FSF considers the new Python license incompatible with the GPL--the FSF might be wrong in fact, but I think it's not a fair way of dealing with licenses to simply *ignore* their words. If somebody could give me a legal advice that the Python license is in fact compatible with the GPL, and if this was accepted by the guys at debian-legal at lists.debian.org, I would happily adopt this opinion and that would make (b) go away as well. Until this happens, I think the best way for Debian to handle this situation (clearly not perfect!) is to use a per-case judgement--if there's GPL code in a package, ask the author if it's okay to use it with Python2 code. If he says alright, go on with packaging. If he says nogo (as the FSF did for readline), do away with the package (therefore python2-base doesn't include readline support). 
Gregor README.why-python2: ------------------ Why python2 ? ------------- Why are the Debian packages of Python 2.x called python2-base etc. instead of simply replacing the old python-base packages of version 1.5.2 ? Debian provides two sets of Python packages: - python-base etc. provides Python 1.5.2 - python2-base etc. provides Python 2.x. There are two major reasons for this: 1.) The transition from Python 1.5.2 to 2.0 is not completely flawless. There are a few incompatible changes in 2.0 that tend to break applications. E.g. Zope 2.2.5 is not yet prepared to work with Python 2.0. By providing both packages for Python 1.5.2 (python-*) and Python 2.0 (python2-*), the transition is much easier. 2.) The license of Python 2.0 has been changed, and restricted in some ways. According to the FSF, the license of Python 2.x is incompatible with the conditions of the General Public License (GPL). According to the FSF, the license of Python 2.x doesn't grant the licensee enough freedoms to use such code in a derived work together with code licensed under the GPL--this would result in a violation of the GPL. Other parties deny that this is indeed a violation of the GPL. Debian uses a significant portion of GPL code for which the FSF owns the copyright. In order to avoid legal conflicts over this, the python2-* packages are set up in a way that no GPL code will be used by default. It's the duty of maintainers of other packages to check if their license if compatible with the Python 2.x license, and then to repackage it accordingly (cf. python2/README.maintainers for hints). Jan 11, 2001 Gregor Hoffleit 
                              
                              Last modified: 2000-01-11 From mal at lemburg.com Fri Feb 16 13:51:14 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 16 Feb 2001 13:51:14 +0100 Subject: Python 2.0 in Debian (was: Re: [Python-Dev] PEPS, version control, release intervals) References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010216133416.A19356@mediasupervision.de> Message-ID: <3A8D2242.49966DD4@lemburg.com> Gregor Hoffleit wrote: > > If somebody could give me a legal advice that the Python license is in fact > compatible with the GPL, and if this was accepted by the guys at > debian-legal at lists.debian.org, I would happily adopt this opinion and that > would make (b) go away as well. > > Until this happens, I think the best way for Debian to handle this situation > (clearly not perfect!) is to use a per-case judgement--if there's GPL code > in a package, ask the author if it's okay to use it with Python2 code. If he > says alright, go on with packaging. Say, what kind of clause is needed in licenses to make them explicitly GPL-compatible without harming the license conditions in all other cases where the GPL is not involved ? > If he says nogo (as the FSF did for > readline), do away with the package (therefore python2-base doesn't include > readline support). Oh boy... about time we switch to editline as the default line editing package. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From gregor at mediasupervision.de Fri Feb 16 14:27:37 2001 From: gregor at mediasupervision.de (Gregor Hoffleit) Date: Fri, 16 Feb 2001 14:27:37 +0100 Subject: Python 2.0 in Debian (was: Re: [Python-Dev] PEPS, version control, release intervals) In-Reply-To: <3A8D2242.49966DD4@lemburg.com>; from mal@lemburg.com on Fri, Feb 16, 2001 at 01:51:14PM +0100 References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010216133416.A19356@mediasupervision.de> <3A8D2242.49966DD4@lemburg.com> Message-ID: <20010216142737.D30936@mediasupervision.de> On Fri, Feb 16, 2001 at 01:51:14PM +0100, M.-A. Lemburg wrote: > Gregor Hoffleit wrote: > > > > If somebody could give me a legal advice that the Python license is in fact > > compatible with the GPL, and if this was accepted by the guys at > > debian-legal at lists.debian.org, I would happily adopt this opinion and that > > would make (b) go away as well. > > > > Until this happens, I think the best way for Debian to handle this situation > > (clearly not perfect!) is to use a per-case judgement--if there's GPL code > > in a package, ask the author if it's okay to use it with Python2 code. If he > > says alright, go on with packaging. > > Say, what kind of clause is needed in licenses to make them explicitly > GPL-compatible without harming the license conditions in all other > cases where the GPL is not involved ? Hmm, during the great KDE confusion (KDE was GPL, and Qt was not compatible with the GPL), it was suggested that the authors of the KDE code should add this clause to their license boiler plate (cf. http://www.debian.org/News/1998/19981008): `This program is distributed under the GNU GPL v2, with the additional permission that it may be linked against Troll Tech's Qt library, and distributed, without the GPL applying to Qt'' (By the way, even the FSF uses a similar clause in the glibc license. 
The glibc license is the usual pointer to the GPL plus this clause: "As a special exception, if you link this library with files compiled with a GNU compiler to produce an executable, this does not cause the resulting executable to be covered by the GNU General Public License. This exception does not however invalidate any other reasons why the executable file might be covered by the GNU General Public License.") If you add something similar to your GPL code, that should work for the Python license, too. Evidently (cf. the URL above for an elaboration), the problem is that only the copyright holder of the code can add this clause. Your code with be perfectly compatible with pure GPL code, and it would be compatible with Python2 code. It would not be possible, though, to mix in some other pure GPL code, and link that with Python2 code--since the pure GPL code's license doesn't permit that. Silly, not ?? ;-) Gregor From thomas at xs4all.net Fri Feb 16 15:14:17 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Fri, 16 Feb 2001 15:14:17 +0100 Subject: Python 2.0 in Debian (was: Re: [Python-Dev] PEPS, version control, release intervals) In-Reply-To: <20010216142737.D30936@mediasupervision.de>; from gregor@mediasupervision.de on Fri, Feb 16, 2001 at 02:27:37PM +0100 References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010216133416.A19356@mediasupervision.de> <3A8D2242.49966DD4@lemburg.com> <20010216142737.D30936@mediasupervision.de> Message-ID: <20010216151417.M4924@xs4all.nl> On Fri, Feb 16, 2001 at 02:27:37PM +0100, Gregor Hoffleit wrote: > (By the way, even the FSF uses a similar clause in the glibc license. The > glibc license is the usual pointer to the GPL plus this clause: > "As a special exception, if you link this library with files > compiled with a GNU compiler to produce an executable, this does > not cause the resulting executable to be covered by the GNU General > Public License. This exception does not however invalidate any > other reasons why the executable file might be covered by the GNU > General Public License.") So... if you link glibc with files compiled by a NON-GNU compiler, the resulting binary *has to be* glibc ? That's, well, fucked, if you pardon my french. But it's not my code, so all I can do is sigh 
                              
                              ;-P > Evidently (cf. the URL above for an elaboration), the problem is that only > the copyright holder of the code can add this clause. Exactly. In this case, it's CNRI that dictates the licence, and they apparently are/were not convinced the license *isn't* compatible with the GPL, so they see no need to further muddle (or reduce the strength of) their licence. > Silly, not ?? ;-) Definately. -- Thomas Wouters 
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From mal at lemburg.com Fri Feb 16 15:34:07 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 16 Feb 2001 15:34:07 +0100 Subject: [Python-Dev] Re: Python 2.0 in Debian References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010216133416.A19356@mediasupervision.de> <3A8D2242.49966DD4@lemburg.com> <20010216142737.D30936@mediasupervision.de> Message-ID: <3A8D3A5F.C9CD094C@lemburg.com> Gregor Hoffleit wrote: > > On Fri, Feb 16, 2001 at 01:51:14PM +0100, M.-A. Lemburg wrote: > > Gregor Hoffleit wrote: > > > > > > If somebody could give me a legal advice that the Python license is in fact > > > compatible with the GPL, and if this was accepted by the guys at > > > debian-legal at lists.debian.org, I would happily adopt this opinion and that > > > would make (b) go away as well. > > > > > > Until this happens, I think the best way for Debian to handle this situation > > > (clearly not perfect!) is to use a per-case judgement--if there's GPL code > > > in a package, ask the author if it's okay to use it with Python2 code. If he > > > says alright, go on with packaging. > > > > Say, what kind of clause is needed in licenses to make them explicitly > > GPL-compatible without harming the license conditions in all other > > cases where the GPL is not involved ? > > Hmm, during the great KDE confusion (KDE was GPL, and Qt was not compatible > with the GPL), it was suggested that the authors of the KDE code should add > this clause to their license boiler plate (cf. > http://www.debian.org/News/1998/19981008): > > `This program is distributed under the GNU GPL v2, with the > additional permission that it may be linked against Troll Tech's Qt > library, and distributed, without the GPL applying to Qt'' Uhm, that's backwards from what I had in mind with the question. Sorry for not being more to the point. Here's the "problem" I have: I want to put my code under a license similar to the Python 2 license (that is including the choice of law clause which caused all this trouble). Since some of my code is already being used by GPL-software out there,I would like to add some kind of extra-clause to the license which permits the GPL-code authors to the new versions as well. This is somewhat similar to the problem that Python2 has with the GPL; don't know how CNRI is going to change the license for 1.6.1, but I want to include something similar in my license. Anyway, since Debian is very sensitive to these issues, I thought I'd ask you for a possible way out. Thanks, -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From gregor at mediasupervision.de Fri Feb 16 15:51:26 2001 From: gregor at mediasupervision.de (Gregor Hoffleit) Date: Fri, 16 Feb 2001 15:51:26 +0100 Subject: [Python-Dev] Re: Python 2.0 in Debian In-Reply-To: <3A8D3A5F.C9CD094C@lemburg.com>; from mal@lemburg.com on Fri, Feb 16, 2001 at 03:34:07PM +0100 References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010216133416.A19356@mediasupervision.de> <3A8D2242.49966DD4@lemburg.com> <20010216142737.D30936@mediasupervision.de> <3A8D3A5F.C9CD094C@lemburg.com> Message-ID: <20010216155125.E30936@mediasupervision.de> On Fri, Feb 16, 2001 at 03:34:07PM +0100, M.-A. 
Lemburg wrote: > Here's the "problem" I have: I want to put my code under a license > similar to the Python 2 license (that is including the choice of > law clause which caused all this trouble). Why don't you simply remove the first sentence of this clause ("This License Agreement shall be governed by and interpreted in all respects by the law of the State of Virginia, excluding conflict of law provisions.") ? Is there any reason for you to include this choice of law clause anyway, if you don't live in Virginia ? Gregor > Since some of my code is already being used by GPL-software > out there,I would like to add some kind of extra-clause to > the license which permits the GPL-code authors to the new versions > as well. > > This is somewhat similar to the problem that Python2 has with the GPL; > don't know how CNRI is going to change the license for 1.6.1, but I > want to include something similar in my license. From mal at lemburg.com Fri Feb 16 16:24:03 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 16 Feb 2001 16:24:03 +0100 Subject: [Python-Dev] Re: Python 2.0 in Debian References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010216133416.A19356@mediasupervision.de> <3A8D2242.49966DD4@lemburg.com> <20010216142737.D30936@mediasupervision.de> <3A8D3A5F.C9CD094C@lemburg.com> <20010216155125.E30936@mediasupervision.de> Message-ID: <3A8D4613.551021EB@lemburg.com> Gregor Hoffleit wrote: > > On Fri, Feb 16, 2001 at 03:34:07PM +0100, M.-A. Lemburg wrote: > > Here's the "problem" I have: I want to put my code under a license > > similar to the Python 2 license (that is including the choice of > > law clause which caused all this trouble). > > Why don't you simply remove the first sentence of this clause ("This License > Agreement shall be governed by and interpreted in all respects by the law of > the State of Virginia, excluding conflict of law provisions.") ? > > Is there any reason for you to include this choice of law clause anyway, if > you don't live in Virginia ? I have to make the governing law the German law since that is where my company is located. The text from my version is: """ This License Agreement shall be governed by and interpreted in all respects by the law of Germany, excluding conflict of law provisions. It shall not be governed by the United Nations Convention on Contracts for International Sale of Goods. """ Does anyone know of the wording of the new 1.6.1 license ? -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From fdrake at acm.org Fri Feb 16 16:23:18 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Fri, 16 Feb 2001 10:23:18 -0500 (EST) Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) In-Reply-To: 
                              
                              References: <000801c097a0$41397520$e46940d5@hagrid> 
                              
                              Message-ID: <14989.17894.829429.368417@cj42289-a.reston1.va.home.com> Tim Peters writes: > Oh, ya, "[" has to be excluded because the listcomp itself temporarily > creates an artificial name beginning with "[". Wow! Perhaps listcomps should use names like _[1] instead, just to reduce the number of special cases. -Fred -- Fred L. Drake, Jr. 
                              
                              PythonLabs at Digital Creations From gregor at mediasupervision.de Fri Feb 16 16:47:44 2001 From: gregor at mediasupervision.de (Gregor Hoffleit) Date: Fri, 16 Feb 2001 16:47:44 +0100 Subject: [Python-Dev] Re: Python 2.0 in Debian In-Reply-To: <3A8D4613.551021EB@lemburg.com>; from mal@lemburg.com on Fri, Feb 16, 2001 at 04:24:03PM +0100 References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010216133416.A19356@mediasupervision.de> <3A8D2242.49966DD4@lemburg.com> <20010216142737.D30936@mediasupervision.de> <3A8D3A5F.C9CD094C@lemburg.com> <20010216155125.E30936@mediasupervision.de> <3A8D4613.551021EB@lemburg.com> Message-ID: <20010216164744.F30936@mediasupervision.de> On Fri, Feb 16, 2001 at 04:24:03PM +0100, M.-A. Lemburg wrote: > Gregor Hoffleit wrote: > > Is there any reason for you to include this choice of law clause anyway, if > > you don't live in Virginia ? > > I have to make the governing law the German law since that is where > my company is located. The text from my version is: > > """ > This License Agreement shall be governed by and interpreted in all > respects by the law of Germany, excluding conflict of law > provisions. It shall not be governed by the United Nations Convention > on Contracts for International Sale of Goods. > """ Well, I guess that beyond my legal scope (why is it reasonable to exclude that UN Convention ?), and certainly it gets quite off-topic on this list. Is it really necessary to make a choice of law, and how does it help you? (I mean, the GPL, the X11 license, BSD-like licenses, the Apache license and the old Python license all work without such a clause). AFAIK, RMS and his lawyer say that any restriction on the choice of law is incompatible with the GPL, therefore I don't see how you could include such a clause in the license and still make it compatible with the GPL. If you're interested in some opinions from Debian, would you mind to send a mail to debian-legal at lists.debian.org and ask there for comments ? Have you considered mailing to licensing at gnu.org and ask them for their opinion ? > > Does anyone know of the wording of the new 1.6.1 license ? I didn't even knew there will be a 1.6.1 release. Will there be a change in the license ? Gregor From fdrake at acm.org Fri Feb 16 17:19:28 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Fri, 16 Feb 2001 11:19:28 -0500 (EST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <20010216132241.L4924@xs4all.nl> References: <20010215090551.J4924@xs4all.nl> 
                              
                              <20010216132241.L4924@xs4all.nl> Message-ID: <14989.21264.954177.217422@cj42289-a.reston1.va.home.com> On Fri, Feb 16, 2001 at 04:24:41AM -0500, Tim Peters wrote: > be to follow a terse but accurate reference with as many subsections as felt > needed, with examples and rationale and tutorial material (has anyone ever > figured how to use rexec or bastion from the docs? heh). Thomas Wouters writes: > Definately +1 on that idea, well received or not it might be by others :) So what sections can I expect you two to write for the Python 2.1 documentation? -Fred -- Fred L. Drake, Jr. 
                              
                              PythonLabs at Digital Creations From sdm7g at virginia.edu Fri Feb 16 18:32:49 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Fri, 16 Feb 2001 12:32:49 -0500 (EST) Subject: [Python-Dev] platform specific files Message-ID: 
                              
On macosx, besides the PyObjC (i.e. NextStep/OpenStep/Cocoa) module, I now have a good chunk of the MacOS Carbon based toolkit modules ported (though not tested):

    Python 2.1a2 (#1, 02/12/01, 19:49:54)
    [GCC Apple DevKit-based CPP 5.0] on Darwin1.2
    Type "copyright", "credits" or "license" for more information.
    >>> import Carbon
    >>> dir(Carbon)
    ['AE', 'App', 'Cm', 'ColorPicker', 'Ctl', 'Dlg', 'Drag', 'Evt', 'Fm', 'HtmlRender', 'Icn', 'List', 'Menu', 'Qd', 'Qdoffs', 'Res', 'Scrap', 'Snd', 'TE', 'Win', '__doc__', '__file__', '__name__', 'macfs']
    >>>

Jack has always maintained the Mac distribution separately, but that was largely because the Metrowerks compiler environment was radically different from unix make/gcc and friends. That's no longer the case on macosx. ( Although, it looks like we will end up, for a while, at least, with 3 versions on OSX: Classic, Carbonized-MacPython, and the unix build of Python with Carbon and Cocoa libs. )

I note that 2.1a2 still has BeOS and PC specific directories, although the Nt & sgi directories that were in older releases are gone. I'm guessing the current wish is to keep as much platform dependent stuff as possible separate and managed with distutils, and construct separate platform-specific distributions by merging them on each release. How is all of this handled in the various Windows distributions ? ( And in the light of that, is there anything particular I should avoid? )

-- Steve M.

From skip at mojam.com Fri Feb 16 19:28:06 2001
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 16 Feb 2001 12:28:06 -0600 (CST)
Subject: [Python-Dev] Re: Upgrade? Not for some time... (fwd)
Message-ID: <14989.28982.533172.930519@beluga.mojam.com>

FYI, for those of you who don't read c.l.py on a regular basis.

Skip

-------------- next part --------------
An embedded message was scrubbed...
From: Steve Purcell
                              
                              Subject: Re: Upgrade? Not for some time... Date: Fri, 16 Feb 2001 09:35:38 +0100 Size: 2595 URL: 
                              
                              From moshez at zadka.site.co.il Fri Feb 16 19:34:37 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Fri, 16 Feb 2001 20:34:37 +0200 (IST) Subject: Python 2.0 in Debian (was: Re: [Python-Dev] PEPS, version control, release intervals) In-Reply-To: <20010216151417.M4924@xs4all.nl> References: <20010216151417.M4924@xs4all.nl>, <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010216133416.A19356@mediasupervision.de> <3A8D2242.49966DD4@lemburg.com> <20010216142737.D30936@mediasupervision.de> Message-ID: <20010216183437.4C374A840@darjeeling.zadka.site.co.il> On Fri, 16 Feb 2001 15:14:17 +0100, Thomas Wouters 
                              
                              wrote: > So... if you link glibc with files compiled by a NON-GNU compiler, the > resulting binary *has to be* glibc ? That's, well, fucked, if you pardon my > french. But it's not my code, so all I can do is sigh 
                              
                              ;-P Thomas, glibc is not currently supported on any non-GNU systems (and for the sake of this discussion, NetBSD/FreeBSD/OpenBSD are GNU systems too, since the only compiler that works there is gcc) More, glibc uses so many gcc extensions that you probably will have a hard time getting it to compile with any other compiler. -- "I'll be ex-DPL soon anyway so I'm |LUKE: Is Perl better than Python? looking for someplace else to grab power."|YODA: No...no... no. Quicker, -- Wichert Akkerman (on debian-private)| easier, more seductive. For public key, finger moshez at debian.org |http://www.{python,debian,gnu}.org From jeremy at alum.mit.edu Fri Feb 16 20:27:36 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 16 Feb 2001 14:27:36 -0500 (EST) Subject: [Python-Dev] __all__ for pickle Message-ID: <14989.32552.239767.38203@w221.z064000254.bwi-md.dsl.cnc.net> I was just testing Zope with the latest CVS python and ran into trouble with the pickle module. The module has grown an __all__ attribute: __all__ = ["PickleError", "PicklingError", "UnpicklingError", "Pickler", "Unpickler", "dump", "dumps", "load", "loads"] This definition excludes a lot of other names defined at the module level, like all of the constants for the pickle format, e.g. MARK, STOP, POP, PERSID, etc. It also excludes format_version and compatible_formats. I don't understand why these names were excluded from __all__. The Zope code uses "from pickle import *" and writes a custom pickler extension. It needs to have access to these names to be compatible, and I can't think of a good reason to forbid it. What's the right solution? Zap the __all__ attribute; the namespace pollution that results is fairly small (marshal, sys, struct, the contents of tupes). Make __all__ a really long list? I wonder how much breakage we should impose on people who use "from ... import *" for Python 2.1. As you know, I was an early advocate of the it's-sloppy-so-let-em-suffer philosophy, but I have learned the error of my ways. I worry that people will be unhappy with __all__ if other modules suffer from similar code breakage. Has __all__ been described by a PEP? If so, it ought to be posted to c.l.py for discussion. If not, we should probably write a short PEP. It would probably be a page of text, but it would help clarify that confusion that persists about what __all__ is for and what its consequences are. Jeremy From tim.one at home.com Fri Feb 16 20:53:09 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 16 Feb 2001 14:53:09 -0500 Subject: [Python-Dev] __all__ for pickle In-Reply-To: <14989.32552.239767.38203@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: 
                              
                              [Jeremy Hylton] > ... > Has __all__ been described by a PEP? No. IIRC, it popped up when Guido approved of a bulletproof __exports__ patch, and subsequent complaints revealed that was controversial. Then __all__ somehow made it in without opposition, in analogy with the special __all__ attribute of __init__.py files (which doesn't appear to have made it into the Lang or Lib refs, but is documented in Guido's package essay on python.org, and in the Tutorial(?!)). > ... > If not, we should probably write a short PEP. It would probably > be a page of text, but it would help clarify that confusion that > persists about what __all__ is for and what its consequences are. I agree, but if someone can make time for that I'd much rather see Guido's package essay folded into the Lang Ref first. Packages have been part of the language since 1.5 ... From mal at lemburg.com Fri Feb 16 21:17:51 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 16 Feb 2001 21:17:51 +0100 Subject: [Python-Dev] __all__ for pickle References: <14989.32552.239767.38203@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <3A8D8AEF.3233507F@lemburg.com> Jeremy Hylton wrote: > > I was just testing Zope with the latest CVS python and ran into > trouble with the pickle module. > > The module has grown an __all__ attribute: > > __all__ = ["PickleError", "PicklingError", "UnpicklingError", "Pickler", > "Unpickler", "dump", "dumps", "load", "loads"] > > This definition excludes a lot of other names defined at the module > level, like all of the constants for the pickle format, e.g. MARK, > STOP, POP, PERSID, etc. It also excludes format_version and > compatible_formats. > > I don't understand why these names were excluded from __all__. The > Zope code uses "from pickle import *" and writes a custom pickler > extension. It needs to have access to these names to be compatible, > and I can't think of a good reason to forbid it. I guess it was a simple oversight. Why not add the constants to the above list ?! > What's the right solution? Zap the __all__ attribute; the namespace > pollution that results is fairly small (marshal, sys, struct, the > contents of tupes). Make __all__ a really long list? The latter, I guess. Some lambda hackery ought to fix this elegantly. > I wonder how much breakage we should impose on people who use "from > ... import *" for Python 2.1. As you know, I was an early advocate of > the it's-sloppy-so-let-em-suffer philosophy, but I have learned the > error of my ways. I worry that people will be unhappy with __all__ if > other modules suffer from similar code breakage. IMHO, we should try to get this right now, rather than later. The betas will get enough testing to reduce the breakage below the noise level. If there's still serious breakage somewhere, then patches will be simple: just comment out the __all__ definition -- even newbies will be able to do this (and feel great about it ;-). Besides, the __all__ attribute is a good way to define a module API and certainly can be put to good use in future Python versions, e.g. by declaring those module attribute read-only and pre-fetching them into function locals... > Has __all__ been described by a PEP? If so, it ought to be posted to > c.l.py for discussion. If not, we should probably write a short PEP. > It would probably be a page of text, but it would help clarify that > confusion that persists about what __all__ is for and what its > consequences are. No, there's no PEP for it. 
The reason is that __all__ has been in existence for quite a few years already. Previously it was just used for packages -- the patch just extended it's scope to simple modules. It is documented in the tutorial and the API docs, plus in Guido's essays. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From thomas at xs4all.net Fri Feb 16 21:37:52 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Fri, 16 Feb 2001 21:37:52 +0100 Subject: Python 2.0 in Debian (was: Re: [Python-Dev] PEPS, version control, release intervals) In-Reply-To: <20010216183437.4C374A840@darjeeling.zadka.site.co.il>; from moshez@zadka.site.co.il on Fri, Feb 16, 2001 at 08:34:37PM +0200 References: <20010216151417.M4924@xs4all.nl>, <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010216133416.A19356@mediasupervision.de> <3A8D2242.49966DD4@lemburg.com> <20010216142737.D30936@mediasupervision.de> <20010216151417.M4924@xs4all.nl> <20010216183437.4C374A840@darjeeling.zadka.site.co.il> Message-ID: <20010216213751.F22571@xs4all.nl> On Fri, Feb 16, 2001 at 08:34:37PM +0200, Moshe Zadka wrote: > On Fri, 16 Feb 2001 15:14:17 +0100, Thomas Wouters 
                              
                              wrote: > > So... if you link glibc with files compiled by a NON-GNU compiler, the > > resulting binary *has to be* glibc [I meant GPL] ? That's, well, fucked, > > if you pardon my french. But it's not my code, so all I can do is sigh > > 
                              
                              ;-P > Thomas, glibc is not currently supported on any non-GNU systems (and for the > sake of this discussion, NetBSD/FreeBSD/OpenBSD are GNU systems too, since > the only compiler that works there is gcc) > More, glibc uses so many gcc extensions that you probably will have a hard > time getting it to compile with any other compiler. That depends. Is a fork of gcc, sprouting all the features of gcc, a GNU compiler ? We're not talking technicalities here, we're talking legalities. "What's in a name" is no longer a rhetorical question in that context :) -- Thomas Wouters 
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From tim.one at home.com Fri Feb 16 21:56:03 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 16 Feb 2001 15:56:03 -0500 Subject: Python 2.0 in Debian (was: Re: [Python-Dev] PEPS, version control, release intervals) In-Reply-To: <20010216133416.A19356@mediasupervision.de> Message-ID: 
                              
                              [Gregor Hoffleit] > ... > I know that most of you guys are fed up with license discussions. Still, > I dare to bring this back to your attentions: Don't apologize -- the license remains an important issue to the Python developers too. We rarely mention it in public anymore simply because there's not yet anything new to say, while everything old has already been repeated countless times. > Most people seem to ignore the fact that the FSF considers the new Python > license incompatible with the GPL--the FSF might be wrong in fact, but I > think it's not a fair way of dealing with licenses to simply *ignore* > their words. Absolutely, and until this is resolved I urge that-- regardless of the legalities, and unless you're looking to pick a legal fight --everyone presume the copyright holder's position is correct. For me that's got nothing to do with the law, it's simply respecting the wishes of the people who own the code. > If somebody could give me a legal advice that the Python license > is in fact compatible with the GPL, and if this was accepted by the > guys at debian-legal at lists.debian.org, I would happily adopt this > opinion and that would make (b) go away as well. Let's not even go there. Nothing legal is ever settled "for good" in the US. This tack is hopeless. The FSF and CNRI are still talking about this! There's still hope that it will be resolved between them. If they can agree on a compromise, we'll move as quickly as possible to implement it. Indeed, those who read the Python checkin msgs have hints that we're very optimistic about a friendly resolution. But we've got no control over when that may happen, and the negotiations so far have proceeded at a pace that can only be described as glacial. > ... > Until this happens, I think the best way for Debian to handle this > situation (clearly not perfect!) is to use a per-case judgement--if > there's GPL code in a package, ask the author if it's okay to use > it with Python2 code. If he says alright, go on with packaging. If > he says nogo (as the FSF did for readline), do away with the package > (therefore python2-base doesn't include readline support). I personally agree that's the best compromise we can get for now, and greatly appreciate your willingness to endure this much special-case fiddling on Python's behalf! We'll continue to do all that we can to ensure that you won't have to endure this the next time around. although-that's-rather-like-saying-we'll-do-all-we-can-to-ensure- the-sun-doesn't-go-nova
                              
                              -ly y'rs - tim From tim.one at home.com Fri Feb 16 22:24:10 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 16 Feb 2001 16:24:10 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <14989.21264.954177.217422@cj42289-a.reston1.va.home.com> Message-ID: 
                              
                              [Fred L. Drake, Jr.] > So what sections can I expect you two to write for the Python 2.1 > documentation? I'm waiting for you to clear the backlog of the ones I've already written 
                              
                              . From tim.one at home.com Fri Feb 16 22:45:01 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 16 Feb 2001 16:45:01 -0500 Subject: [Python-Dev] Re: Python 2.0 in Debian In-Reply-To: <20010216164744.F30936@mediasupervision.de> Message-ID: 
                              
                              [Gregor Hoffleit] > I didn't even knew there will be a 1.6.1 release. Will there be a > change in the license ? There will be a 1.6.1 release if and only if CNRI and the FSF reach agreement. If and when that happens, we (PythonLabs) will build a 1.6.1 release for CNRI with the new license, and then re-release the then-current Python as a derivative of 1.6.1. But it's premature to talk about that, because nothing is settled yet, and it doesn't address the license inherited from BeOpen.com. MAL, a choice-of-clause clause won't work any better for you (in the FSF's eyes) than it did for CNRI. Gregor, legal language is ambiguous. That's why virtually all *commercial* licenses in the US contain a choice-of-law clause ("of the 50 possible meanings of this phrase, I intended this specific one"). *If* and when somebody actually prevails in suing an open source provider due to the lack of choice-of-law, non-commercial providers will have a lot to think about here (it's easy to be complacent when you've never been burned). Here's a paradox: the FSF objects to choice-of-law because they don't want the language interpreted by the courts in the Kingdom of Unfreedonia (who could effectively negate the GPL's intent). CNRI objects to not having choice-of-law because they don't want the language interpreted by the courts in the Kingdom of Unlimited Liability (who could effectively negate all of CNRI's liability disclaimers). So in that sense, they're both seeking similar ends. That's why there's still hope for compromise. it-would-be-interesting-if-it-were-happening-to-somebody-else
                              
                              -ly y'rs - tim From tim.one at home.com Fri Feb 16 22:55:45 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 16 Feb 2001 16:55:45 -0500 Subject: Python 2.0 in Debian (was: Re: [Python-Dev] PEPS, version control, release intervals) In-Reply-To: <3A8D2242.49966DD4@lemburg.com> Message-ID: 
                              
                              [M.-A. Lemburg] > Say, what kind of clause is needed in licenses to make them explicitly > GPL-compatible without harming the license conditions in all other > cases where the GPL is not involved ? You can dual-license (see, e.g., Perl). From skip at mojam.com Fri Feb 16 23:00:02 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 16 Feb 2001 16:00:02 -0600 (CST) Subject: [Python-Dev] Re: __all__ for pickle In-Reply-To: <14989.32552.239767.38203@w221.z064000254.bwi-md.dsl.cnc.net> References: <14989.32552.239767.38203@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <14989.41698.490018.793622@beluga.mojam.com> Jeremy> I was just testing Zope with the latest CVS python and ran into Jeremy> trouble with the pickle module. Jeremy> The module has grown an __all__ attribute: Jeremy> __all__ = ["PickleError", "PicklingError", "UnpicklingError", "Pickler", Jeremy> "Unpickler", "dump", "dumps", "load", "loads"] Jeremy> This definition excludes a lot of other names defined at the Jeremy> module level, like all of the constants for the pickle format, Jeremy> e.g. MARK, STOP, POP, PERSID, etc. It also excludes Jeremy> format_version and compatible_formats. In deciding what to include in __all__ up to this point I have only had my personal experience with the modules and the documentation to help me decide what to include. My initial assumption was that undocumented module-level constants were not to be exported. I just added the following to my version of pickle: __all__.extend([x for x in dir() if re.match("[A-Z][A-Z0-9_]*$",x)]) That seems to catch all the defined constants. Let me know if that's sufficient in this case. Skip From tim.one at home.com Fri Feb 16 23:44:06 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 16 Feb 2001 17:44:06 -0500 Subject: [Python-Dev] Re: __all__ for pickle In-Reply-To: <14989.41698.490018.793622@beluga.mojam.com> Message-ID: 
                              
                              [Skip Montanaro] > In deciding what to include in __all__ up to this point I have only had > my personal experience with the modules and the documentation to help > me decide what to include. My initial assumption was that undocumented > module-level constants were not to be exported. And it's been a very educational exercise! Thank you for pursuing it. The fact is we often don't know what authors intended to export, and it's Good to try to make that explicit. I'm still not sure I've got any use for __all__, though 
                              
                              . sure-"a-problem"-has-been-identified-but-not-sure-the-solution- has-been-ly y'rs - tim From mal at lemburg.com Fri Feb 16 23:22:23 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 16 Feb 2001 23:22:23 +0100 Subject: Python 2.0 in Debian (was: Re: [Python-Dev] PEPS, version control, release intervals) References: 
                              
                              Message-ID: <3A8DA81F.55DCF038@lemburg.com> Tim Peters wrote: > > [M.-A. Lemburg] > > Say, what kind of clause is needed in licenses to make them explicitly > > GPL-compatible without harming the license conditions in all other > > cases where the GPL is not involved ? > > You can dual-license (see, e.g., Perl). Right and it looks as if this is the only way out: either you give people (including commercial users) more freedom in the use of the code and add a choice-of-law clause or you restrain usage to GPLed code and cross fingers that noone is going to sue the hell out of you... doesn't really matter if the opponent lives in Kingdom of Unlimited Liability or not -- the costs of finding out which law to apply and where to settle the dispute would already suffice to bring the open source developer down to his/her knees. Oh well, -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From tim.one at home.com Sat Feb 17 06:31:31 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 17 Feb 2001 00:31:31 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) In-Reply-To: <14989.17894.829429.368417@cj42289-a.reston1.va.home.com> Message-ID: 
                              
                              [Tim] > Oh, ya, "[" has to be excluded because the listcomp itself temporarily > creates an artificial name beginning with "[". [Fred L. Drake, Jr.] > Wow! Perhaps listcomps should use names like _[1] instead, just to > reduce the number of special cases. Well, it seems a terribly minor point ... so I dropped everything else and checked in a change to do just that 
                              
                              . every-now-&-again-you-gotta-do-something-just-cuz-it's-right-ly y'rs - tim From skip at mojam.com Sat Feb 17 16:29:34 2001 From: skip at mojam.com (Skip Montanaro) Date: Sat, 17 Feb 2001 09:29:34 -0600 (CST) Subject: [Python-Dev] Re: __all__ for pickle In-Reply-To: 
                              
                              References: <14989.41698.490018.793622@beluga.mojam.com> 
                              
                              Message-ID: <14990.39134.483892.880071@beluga.mojam.com> Tim> I'm still not sure I've got any use for __all__, though 
                              
. That may be true. I think the canonical case that is being defended against is a module-level symbol in one module obscuring a builtin, e.g.:

    # a.py
    def list(s):
        return s

    # b.py
    from a import *
    ...
    l = list(('a','b','c'))

I suspect in the long-run there's a better way to accomplish this than adding __all__ to most Python modules, perhaps pylint. Which reminds me... I did write something once upon a time to catch symbols that hide builtins, only at more than the module level: http://musi-cal.mojam.com/~skip/python/hiding.py Skip From ping at lfw.org Sun Feb 18 11:43:45 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Sun, 18 Feb 2001 02:43:45 -0800 (PST) Subject: [Python-Dev] Join python-iter@yahoogroups.com to discuss PEP 234 Message-ID:
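This is not Skip's hiding.py (which works at more than the module level); it is just a minimal sketch of the module-level check, written in the Python 2 idiom of the thread, where the builtin namespace lives in __builtin__:

    import __builtin__

    def shadowed_builtins(module):
        """Return the names defined in 'module' that hide a builtin."""
        names = []
        for name in vars(module):
            if name.startswith('_'):
                continue                    # skip __doc__, __name__, ...
            if hasattr(__builtin__, name) and \
               getattr(module, name) is not getattr(__builtin__, name):
                names.append(name)
        return names

    # For the a.py above:
    #   import a
    #   print shadowed_builtins(a)   # -> ['list']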
                              
                              Hello all, I just wanted to let you know that i'm trying to move the PEP 234 and iterator discussion over to Greg's mailing list, python-iter at yahoogroups.com. Greg set it up quite a while ago but i didn't have time to respond to anything then. Today i had time to send a few messages to the group and i'd like to start the discussion up again. If you're interested in talking about it, please join! http://groups.yahoo.com/group/python-iter Thanks! -- ?!ng From barry at scottb.demon.co.uk Sun Feb 18 14:01:06 2001 From: barry at scottb.demon.co.uk (Barry Scott) Date: Sun, 18 Feb 2001 13:01:06 -0000 Subject: [I18n-sig] Re: [Python-Dev] Pre-PEP: Python Character Model In-Reply-To: 
                              
Message-ID: <001001c099aa$daebf240$060210ac@private> > Here's a thought. How about BinaryFile/BinarySocket/ByteArray which > do Files and sockets often contain both string and binary data. Having StringFile and BinaryFile seems the wrong split. I'd think being able to write string and binary data is more useful, for example having methods on file and socket like file.writetext and file.writebinary. Now I can use writetext to write the HTTP headers and writebinary to write the JPEG image, say. Barry From zessin at decus.de Sun Feb 18 17:23:26 2001 From: zessin at decus.de (zessin at decus.de) Date: Sun, 18 Feb 2001 17:23:26 +0100 Subject: [Python-Dev] OpenVMS import (was Re: Windows/Cygwin/MacOSX import (was RE: python-dev summary, 2001-02-01 - 2001-02-15) Message-ID: <009F7D57.F76B21F7.2@decus.de> Cameron Laird wrote: >In article
                              
                              , >Tim Peters 
                              
                              wrote: >>[Michael Hudson] >>> ... >>> * Imports on case-insensitive file systems * >>> >>> There was quite some discussion about how to handle imports on a >>> case-insensitive file system (eg. on Windows). I didn't follow the >>> details, but Tim Peters is on the case (sorry), so I'm confident it >>> will get sorted out. >> >>You can be sure the whitespace will be consistent, anyway 
                              
.
> .
> .
> .
>>them is ugly.  We're already supporting (and will continue to support)
>>PYTHONCASEOK for their benefit, but they don't deserve multiple hacks in
>>2001.
>>
>>Flame at will.
>>
>>or-flame-at-tim-your-choice-ly y'rs - tim
>
>1. Thanks.  Along with all the other benefits, I find
>   this explanation FAR more entertaining than anything
>   network television broadcasts (although nearly as
>   tendentious as "The West Wing").
>2. I hope a few OS/400 and OpenVMS refugees convert and
>   walk through the door soon.  *That* would make for a
>   nice dose of fun.

Let's see if I can explain the OpenVMS part. I'll do so by walking over Tim's text. (I'll step carefully over it. I don't intend to destroy it, Tim ;-)

] Here's the scoop: file systems vary across platforms in whether or not they
] preserve the case of filenames, and in whether or not the platform C library
] file-opening functions do or don't insist on case-sensitive matches:
]
]
]                      case-preserving     case-destroying
]                  +-------------------+------------------+
] case-sensitive   | most Unix flavors | brrrrrrrrrr      |
]                  +-------------------+------------------+
] case-insensitive | Windows           | some unfortunate |
]                  | MacOSX HFS+       | network schemes  |
]                  | Cygwin            |                  |
]                  | OpenVMS           |                  |
]                  +-------------------+------------------+

Phew. I'm glad we're only 'unfortunate' and not in the 'brrrrrrrrrr' section ;-)

] In the upper left box, if you create "fiLe" it's stored as "fiLe", and only
] open("fiLe") will open it (open("file") will not, nor will the 14 other
] variations on that theme).

] In the lower right box, if you create "fiLe", there's no telling what it's
] stored as-- but most likely as "FILE" --and any of the 16 obvious variations
] on open("FilE") will open it.

>>> f = open ('fiLe', 'w')

$ directory f*
Directory DSA3:[PYTHON.PYTHON-2_1A2CVS.VMS]
FILE.;1

>>> f = open ('filE', 'r')
>>> f
                              
                              >>> This is on the default file system (ODS-2). Only very recent versions of OpenVMS Alpha (V7.2 and up) support the ODS-5 FS that has Windows-like behaviour (case-preserving,case-insensitive), but many sites don't use it (yet). Also, there are many older versions running in the field that don't get upgraded any time soon. ] The lower left box is a mix: creating "fiLe" stores "fiLe" in the platform ] directory, but you don't have to match case when opening it; any of the 16 ] obvious variations on open("FILe") work. Same here. ] What's proposed is to change the semantics of Python "import" statements, ] and there *only* in the lower left box. ] ] Support for MaxOSX HFS+, and for Cygwin, is new in 2.1, so nothing is ] changing there. What's changing is Windows behavior. Here are the current ] rules for import on Windows: ] ] 1. Despite that the filesystem is case-insensitive, Python insists on ] a case-sensitive match. But not in the way the upper left box works: ] if you have two files, FiLe.py and file.py on sys.path, and do ] ] import file ] ] then if Python finds FiLe.py first, it raises a NameError. It does ] *not* go on to find file.py; indeed, it's impossible to import any ] but the first case-insensitive match on sys.path, and then only if ] case matches exactly in the first case-insensitive match. For OpenVMS I have just changed 'import.c': MatchFilename() and some code around it is not executed. ] 2. An ugly exception: if the first case-insensitive match on sys.path ] is for a file whose name is entirely in upper case (FILE.PY or ] FILE.PYC or FILE.PYO), then the import silently grabs that, no matter ] what mixture of case was used in the import statement. This is ] apparently to cater to miserable old filesystems that really fit in ] the lower right box. But this exception is unique to Windows, for ] reasons that may or may not exist 
                              
                              . I guess that is Windows-specific code? Something to do with 'allcaps8x3()'? ] 3. And another exception: if the envar PYTHONCASEOK exists, Python ] silently grabs the first case-insensitive match of any kind. The check is in 'check_case()', but there is no OpenVMS implementation (yet). ] So these Windows rules are pretty complicated, and neither match the Unix ] rules nor provide semantics natural for the native filesystem. That makes ] them hard to explain to Unix *or* Windows users. Nevertheless, they've ] worked fine for years, and in isolation there's no compelling reason to ] change them. ] However, that was before the MacOSX HFS+ and Cygwin ports arrived. They ] also have case-preserving case-insensitive filesystems, but the people doing ] the ports despised the Windows rules. Indeed, a patch to make HFS+ act like ] Unix for imports got past a reviewer and into the code base, which ] incidentally made Cygwin also act like Unix (but this met the unbounded ] approval of the Cygwin folks, so they sure didn't complain -- they had ] patches of their own pending to do this, but the reviewer for those balked). ] ] At a higher level, we want to keep Python consistent, and I in particular ] want Python to do the same thing on *all* platforms with case-preserving ] case-insensitive filesystems. Guido too, but he's so sick of this argument ] don't ask him to confirm that <0.9 wink>. What are you thinking about the 'unfortunate / OpenVMS' group ? Hey, it could be worse, could be 'brrrrrrrrrr'... ] The proposed new semantics for the lower left box: ] ] A. If the PYTHONCASEOK envar exists, same as before: silently accept ] the first case-insensitive match of any kind; raise ImportError if ] none found. ] ] B. Else search sys.path for the first case-sensitive match; raise ] ImportError if none found. ] ] #B is the same rule as is used on Unix, so this will improve cross-platform ] portability. That's good. #B is also the rule the Mac and Cygwin folks ] want (and wanted enough to implement themselves, multiple times, which is a ] powerful argument in PythonLand). It can't cause any existing ] non-exceptional Windows import to fail, because any existing non-exceptional ] Windows import finds a case-sensitive match first in the path -- and it ] still will. An exceptional Windows import currently blows up with a ] NameError or ImportError, in which latter case it still will, or in which ] former case will continue searching, and either succeed or blow up with an ] ImportError. ] ] #A is needed to cater to case-destroying filesystems mounted on Windows, and ] *may* also be used by people so enamored of "natural" Windows behavior that ] they're willing to set an envar to get it. That's their problem 
                              
                              . I ] don't intend to implement #A for Unix too, but that's just because I'm not ] clear on how I *could* do so efficiently (I'm not going to slow imports ] under Unix just for theoretical purity). ] ] The potential damage is here: #2 (matching on ALLCAPS.PY) is proposed to be ] dropped. Case-destroying filesystems are a vanishing breed, and support for ] them is ugly. We're already supporting (and will continue to support) ] PYTHONCASEOK for their benefit, but they don't deserve multiple hacks in ] 2001. Would using unique names be an acceptable workaround? ] Flame at will. ] ] or-flame-at-tim-your-choice-ly y'rs - tim No flame intended. Not at will and not at tim. >-- > >Cameron Laird 
                              
                              >Business: http://www.Phaseit.net >Personal: http://starbase.neosoft.com/~claird/home.html -- Uwe Zessin From skip at mojam.com Sun Feb 18 19:07:40 2001 From: skip at mojam.com (Skip Montanaro) Date: Sun, 18 Feb 2001 12:07:40 -0600 (CST) Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre.py,1.29,1.30 sre_compile.py,1.35,1.36 sre_parse.py,1.43,1.44 sre_constants.py,1.26,1.27 In-Reply-To: 
                              
                              References: 
                              
                              Message-ID: <14992.3948.171057.408517@beluga.mojam.com> Fredrik> - removed __all__ cruft from internal modules (sorry, skip) No need to apologize to me. __all__ was proposed and nobody started implementing it, so I took it on. As I get further into it I'm less convinced that it's the right way to go. It buys you a fairly small increase in "comfort level" with a fairly large cost. Skip From mal at lemburg.com Sun Feb 18 20:30:30 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Sun, 18 Feb 2001 20:30:30 +0100 Subject: [Python-Dev] Activating SF mailing lists for PEP discussion. Message-ID: <3A9022D6.D60BE01@lemburg.com> Ping just recently posted a request here to discuss the iterator PEP on a yahoogroups mailing list. Since the move of eGroups under the Yahoo umbrella, joining those lists requires signing up with Yahoo -- with all strings attached. I don't know when they started this feature, but SourceForge now offers Mailman lists for the various projects. Wouldn't it be much simpler to open a mailing list for each PEP (possible on request only) ? That way, the archives would be kept in a cenral place and also in reach for other developers who are interested in the background discussions about the PEPs. Also, the PEPs could reference the mailing list archives to enhance the information availability. Thoughts ? I would appreciate if one of the Python SF admins would enable the feature and set up a mailing list for PEP 234 (iterators). Thanks, -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From fdrake at acm.org Sun Feb 18 20:29:58 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Sun, 18 Feb 2001 14:29:58 -0500 (EST) Subject: [Python-Dev] Activating SF mailing lists for PEP discussion. In-Reply-To: <3A9022D6.D60BE01@lemburg.com> References: <3A9022D6.D60BE01@lemburg.com> Message-ID: <14992.8886.425297.148106@cj42289-a.reston1.va.home.com> M.-A. Lemburg writes: > Ping just recently posted a request here to discuss the iterator > PEP on a yahoogroups mailing list. Since the move of eGroups under ... > Thoughts ? > > I would appreciate if one of the Python SF admins would enable the > feature and set up a mailing list for PEP 234 (iterators). I'd be glad to set up such a list, esp. if Ping and the members of the existing list opt to migrate to it. If people don't want to migrate, there's no need to set up a new list. Ping? -Fred -- Fred L. Drake, Jr. 
                              
                              PythonLabs at Digital Creations From ping at lfw.org Sun Feb 18 20:39:30 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Sun, 18 Feb 2001 11:39:30 -0800 (PST) Subject: [Python-Dev] Activating SF mailing lists for PEP discussion. In-Reply-To: <14992.8886.425297.148106@cj42289-a.reston1.va.home.com> Message-ID: 
                              
                              On Sun, 18 Feb 2001, Fred L. Drake, Jr. wrote: > M.-A. Lemburg writes: > > I would appreciate if one of the Python SF admins would enable the > > feature and set up a mailing list for PEP 234 (iterators). > > I'd be glad to set up such a list, esp. if Ping and the members of > the existing list opt to migrate to it. If people don't want to > migrate, there's no need to set up a new list. > Ping? Sure, that's fine. I had my reservations about using yahoogroups too, but since Greg had already established a list there i didn't want to duplicate his work. But i definitely agree that mailman is a better option. I've already forwarded copies of everyone's messages to yahoogroups, but after the new list is up i can do it again. -- ?!ng From martin at loewis.home.cs.tu-berlin.de Sun Feb 18 21:57:29 2001 From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis) Date: Sun, 18 Feb 2001 21:57:29 +0100 Subject: [Python-Dev] Activating SF mailing lists for PEP discussion. Message-ID: <200102182057.f1IKvTB00992@mira.informatik.hu-berlin.de> > Wouldn't it be much simpler to open a mailing list for each PEP > (possible on request only) ? That was my first thought as well. The Python SF project does not currently use mailing lists because there was no need, but PEP discussion seems to be exactly the right usage of the feature. Regards, Martin From fdrake at acm.org Mon Feb 19 07:06:05 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Mon, 19 Feb 2001 01:06:05 -0500 (EST) Subject: [Python-Dev] Activating SF mailing lists for PEP discussion. In-Reply-To: 
                              
                              References: <14992.8886.425297.148106@cj42289-a.reston1.va.home.com> 
                              
                              Message-ID: <14992.47053.305380.752501@cj42289-a.reston1.va.home.com> Ka-Ping Yee writes: > Sure, that's fine. I had my reservations about using yahoogroups > too, but since Greg had already established a list there i didn't > want to duplicate his work. But i definitely agree that mailman > is a better option. I've just submitted the list-creation form for python-iterators at lists.sourceforge.net; I'll set you up as admin for the list once it exists (they say it takes 6-24 hours). -Fred -- Fred L. Drake, Jr. 
                              
                              PythonLabs at Digital Creations From MarkH at ActiveState.com Mon Feb 19 10:38:24 2001 From: MarkH at ActiveState.com (Mark Hammond) Date: Mon, 19 Feb 2001 20:38:24 +1100 Subject: [Python-Dev] Modulefinder? In-Reply-To: <02be01c09803$23fbc400$e000a8c0@thomasnotebook> Message-ID: 
                              
[Thomas] > Who is maintaining freeze/Modulefinder? > > I have some issues I would like to discuss... [long silence] I guess this makes it you then ;-) I wouldn't mind seeing this move into distutils as a module others could draw on. For example, "freeze" could be modified by the next person game enough to touch it
                              
                              to reference the module directly in the distutils package? It keeps the highly useful module alive, and makes "ownership" more obvious - whoever owns distutils also gets this baggage 
                              
                              Mark. From jack at oratrix.nl Mon Feb 19 12:20:21 2001 From: jack at oratrix.nl (Jack Jansen) Date: Mon, 19 Feb 2001 12:20:21 +0100 Subject: [Python-Dev] Demo/embed/import.c Message-ID: <20010219112022.9721F371690@snelboot.oratrix.nl> Can I request that the new file Demo/embed/import.c be renamed? The name clashes with the import.c we all know and love, and the setup of things under CodeWarrior on the Mac is such that it will search for sourcefiles recursively from the root of the Python sourcefolder. I can fix this, of course, but it's a lot of work... -- Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++ Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++ www.oratrix.nl/~jack | ++++ see http://www.xs4all.nl/~tank/ ++++ From thomas.heller at ion-tof.com Mon Feb 19 14:46:54 2001 From: thomas.heller at ion-tof.com (Thomas Heller) Date: Mon, 19 Feb 2001 14:46:54 +0100 Subject: [Python-Dev] Modulefinder? References: 
                              
                              Message-ID: <00a401c09a7a$6d2060e0$e000a8c0@thomasnotebook> > [Thomas] > > Who is maintaining freeze/Modulefinder? > > > > I have some issues I would like to discuss... > > [long silence] > > I guess this make it you then ;-) > That's not what I wanted to hear ;-), but anyway, since you answered, I assume you have something to do with it. > I wouldn't mind seeing this move into distutils as a module others could > draw on. For example, "freeze" could be modifed by the next person game > enough to touch it 
                              
                              to reference the module directly in the distutils > package? > > It keeps the highly useful module alive, and makes "ownership" more > obvious - whoever owns distutils also gets this baggage 
                              
Sounds good, but currently I would like to concentrate on technical rather than administrative details. The following are the ideas:

1. Modulefinder does not handle cases where packages export names referring to functions or variables, rather than modules. Maybe the scan_code method, which looks for IMPORT opcodes, could be extended to handle STORE_NAME opcodes which are not preceded by IMPORT opcodes.

2. Modulefinder uses imp.find_module to find modules, and partly catches ImportErrors. imp.find_module can also raise NameErrors on Windows if the case does not fit. Those should be caught as well (a sketch of this follows below).

3. Weird idea (?): instead of only scanning the opcodes, Modulefinder could try to actually _import_ modules (at least extension modules, otherwise it will not find _any_ dependencies).

Thomas From fdrake at users.sourceforge.net Mon Feb 19 17:50:52 2001 From: fdrake at users.sourceforge.net (Fred L. Drake) Date: Mon, 19 Feb 2001 08:50:52 -0800 Subject: [Python-Dev] [development doc updates] Message-ID:
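A sketch of the fix suggested in point 2 above -- not the actual Modulefinder patch, just the shape of it: treat the NameError that imp.find_module can raise on a case mismatch under Windows the same way as an ImportError (the wrapper name is invented):

    import imp

    def safe_find_module(name, path=None):
        """Like imp.find_module, but return None instead of raising,
        whether the failure is an ImportError (module not found) or the
        NameError seen on Windows when only the case differs."""
        try:
            return imp.find_module(name, path)
        except (ImportError, NameError):
            return None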
                              
                              The development version of the documentation has been updated: http://python.sourceforge.net/devel-docs/ From jeremy at alum.mit.edu Mon Feb 19 21:18:03 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Mon, 19 Feb 2001 15:18:03 -0500 (EST) Subject: [Python-Dev] Windows/Cygwin/MacOSX import (was RE: python-dev summary, 2001-02-01 - 2001-02-15) In-Reply-To: 
                              
                              References: 
                              
                              
                              Message-ID: <14993.32635.85544.343209@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "TP" == Tim Peters 
                              
                              writes: TP> [Michael Hudson] >> ... >> * Imports on case-insensitive file systems * >> >> There was quite some discussion about how to handle imports on a >> case-insensitive file system (eg. on Windows). I didn't follow >> the details, but Tim Peters is on the case (sorry), so I'm >> confident it will get sorted out. TP> You can be sure the whitespace will be consistent, anyway TP> 
                              
                              . TP> OK, this one sucks. It should really have gotten a PEP, but it TP> cropped up too late in the release cycle and it can't be delayed TP> (see below). It would be good to capture this in an informational PEP that just describes what the policy is and why. If nothing else, it could be a copy of Tim's message immortalized with a PEP number. Jeremy From moshez at zadka.site.co.il Tue Feb 20 06:43:41 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Tue, 20 Feb 2001 07:43:41 +0200 (IST) Subject: [Python-Dev] Demos are out of Data: Requesting Permission to Change Message-ID: <20010220054341.C4A93A840@darjeeling.zadka.site.co.il> Random example: Demo/scripts/pi.py: # Use int(d) to avoid a trailing L after each digit Would anyone have a problem if I just went and checked in updates to the demos? Putting it as a patch on SF seems like needless beuracracy. -- "I'll be ex-DPL soon anyway so I'm |LUKE: Is Perl better than Python? looking for someplace else to grab power."|YODA: No...no... no. Quicker, -- Wichert Akkerman (on debian-private)| easier, more seductive. For public key, finger moshez at debian.org |http://www.{python,debian,gnu}.org From MarkH at ActiveState.com Tue Feb 20 13:12:23 2001 From: MarkH at ActiveState.com (Mark Hammond) Date: Tue, 20 Feb 2001 23:12:23 +1100 Subject: [Python-Dev] Those import related syntax errors again... Message-ID: 
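For context on the pi.py comment quoted above: in Python 2-era interpreters the repr of a long integer carries a trailing 'L', so converting each digit with int() before printing its repr keeps the L out of the output. A quick illustration in Python 2 syntax (how pi.py actually printed is not shown above, so take this only as the general behaviour):

    d = 3L            # a long integer
    print `d`         # repr of a long keeps the suffix    -> 3L
    print `int(d)`    # converting to a plain int drops it -> 3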
                              
                              Hi all, I'm a little confused by the following exception: File "f:\src\python-cvs\xpcom\server\policy.py", line 18, in ? from xpcom import xpcom_consts, _xpcom, client, nsError, ServerException, COMException exceptions.SyntaxError: BuildInterfaceInfo: exec or 'import *' makes names ambiguous in nested scope (__init__.py, line 71) This sounds alot like Tim's question on this a while ago, and from all accounts this had been resolved (http://mail.python.org/pipermail/python-dev/2001-February/012456.html) In that mail, Jeremy writes: -- quote -- > from Percolator import Percolator > > in a class definition. That smells like a bug, not a debatable design > choice. Percolator has "from x import *" code. This is what is causing the exception. I think it has already been fixed in CVS though, so should work again. -- end quote -- However, Tim replied saying that it still didn't work for him. There was never a followup saying "it does now". In this case, the module being imported from does _not_ use "from module import *" at all, but is a parent package. The only name referenced by the __init__ function is "ServerException", and that is a simple class. The only "import *" I can track is via the name "client", which is itself a package and does the "import *" about 3 modules deep. Any clues? Thanks, Mark. From thomas at xs4all.net Tue Feb 20 13:30:45 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Tue, 20 Feb 2001 13:30:45 +0100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: 
                              
                              ; from MarkH@ActiveState.com on Tue, Feb 20, 2001 at 11:12:23PM +1100 References: 
                              
                              Message-ID: <20010220133045.C13911@xs4all.nl> On Tue, Feb 20, 2001 at 11:12:23PM +1100, Mark Hammond wrote: > Hi all, > I'm a little confused by the following exception: > File "f:\src\python-cvs\xpcom\server\policy.py", line 18, in ? > from xpcom import xpcom_consts, _xpcom, client, nsError, > ServerException, COMException > exceptions.SyntaxError: BuildInterfaceInfo: exec or 'import *' makes names > ambiguous in nested scope (__init__.py, line 71) [ However, no 'from foo import *' to be found, except at module level ] > Any clues? I don't have the xpcom package, so I can't check myself, but have you considered 'exec' as well as 'from foo import *' ? -- Thomas Wouters 
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From MarkH at ActiveState.com Tue Feb 20 13:42:09 2001 From: MarkH at ActiveState.com (Mark Hammond) Date: Tue, 20 Feb 2001 23:42:09 +1100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <20010220133045.C13911@xs4all.nl> Message-ID: 
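One way to hunt for the constructs Thomas asks about above -- a rough, hypothetical helper in Python 2 style (the regex is only a heuristic and will miss oddly formatted or string-embedded cases):

    import os, re

    # lines that start with an exec statement or a star-import
    suspect = re.compile(r'^\s*(exec\s|from\s+\S+\s+import\s+\*)', re.M)

    def find_suspects(root):
        for name in os.listdir(root):
            path = os.path.join(root, name)
            if os.path.isdir(path):
                find_suspects(path)
            elif name.endswith('.py'):
                if suspect.search(open(path).read()):
                    print path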
                              
                              [Thomas] > I don't have the xpcom package, so I can't check myself, As of the last 24 hours, it sits in the Mozilla CVS tree - extensions/python/xpcom :) > but have you considered 'exec' as well as 'from foo import *' ? exec appears exactly once, in a function in the "client" sub-package. Mark. From jeremy at alum.mit.edu Tue Feb 20 15:48:41 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 20 Feb 2001 09:48:41 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: 
                              
                              References: <20010220133045.C13911@xs4all.nl> 
                              
                              Message-ID: <14994.33737.132255.466570@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "MH" == Mark Hammond 
                              
                              writes: MH> [Thomas] >> I don't have the xpcom package, so I can't check myself, MH> As of the last 24 hours, it sits in the Mozilla CVS tree - MH> extensions/python/xpcom :) Don't know where to find that :-) >> but have you considered 'exec' as well as 'from foo import *' ? MH> exec appears exactly once, in a function in the "client" MH> sub-package. Does the function that contains the exec also contain another function or lambda? If it does and the contained function has references to non-local variables, the compiler will complain. The exception should include the line number of the first line of the function body that contains the import * or exec. Jeremy From guido at digicool.com Tue Feb 20 16:03:59 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 20 Feb 2001 10:03:59 -0500 Subject: [Python-Dev] Demos are out of Date: Requesting Permission to Change In-Reply-To: Your message of "Tue, 20 Feb 2001 07:43:41 +0200." <20010220054341.C4A93A840@darjeeling.zadka.site.co.il> References: <20010220054341.C4A93A840@darjeeling.zadka.site.co.il> Message-ID: <200102201503.KAA28281@cj20424-a.reston1.va.home.com> > Random example: > > Demo/scripts/pi.py: > # Use int(d) to avoid a trailing L after each digit > > Would anyone have a problem if I just went and checked in updates > to the demos? Putting it as a patch on SF seems like needless beuracracy. Sure, go ahead. I've fixed your subject: I stared puzzledly at "Demos are out of Data" for quite a while before I realized you meant out of date! --Guido van Rossum (home page: http://www.python.org/~guido/) From barry at digicool.com Tue Feb 20 17:05:15 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Tue, 20 Feb 2001 11:05:15 -0500 Subject: [Python-Dev] Demo/embed/import.c References: <20010219112022.9721F371690@snelboot.oratrix.nl> Message-ID: <14994.38331.347106.734329@anthem.wooz.org> >>>>> "JJ" == Jack Jansen 
                              
                              writes: JJ> Can I request that the new file Demo/embed/import.c be JJ> renamed? The name clashes with the import.c we all know and JJ> love, and the setup of things under CodeWarrior on the Mac is JJ> such that it will search for sourcefiles recursively from the JJ> root of the Python sourcefolder. JJ> I can fix this, of course, but it's a lot of work... I'll fix this, but I'm not going to preserve the CVS history. 1) the file is too new to have any significant history, 2) doing the repository dance on SF sucks. I'll call the file importexc.c since it imports exceptions. -Barry From barry at digicool.com Tue Feb 20 18:49:49 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Tue, 20 Feb 2001 12:49:49 -0500 Subject: [Python-Dev] Demo/embed/import.c References: <20010219112022.9721F371690@snelboot.oratrix.nl> <14994.38331.347106.734329@anthem.wooz.org> Message-ID: <14994.44605.599157.471020@anthem.wooz.org> >>>>> "BAW" == Barry A Warsaw 
                              
                              writes: BAW> I'll fix this, but I'm not going to preserve the CVS history. BAW> 1) the file is too new to have any significant history, 2) BAW> doing the repository dance on SF sucks. BAW> I'll call the file importexc.c since it imports exceptions. I fixed this, but some of the programs now core dump. I need to cvs update and rebuild everything and then figure out why it's coring. Then I'll check things in. -Barry From barry at digicool.com Tue Feb 20 21:22:32 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Tue, 20 Feb 2001 15:22:32 -0500 Subject: [Python-Dev] Update to PEP 232 Message-ID: <14994.53768.767065.272158@anthem.wooz.org> After some internal discussions amongst the Pythonlabbers, we've had to make some updates to PEP 232, Function Attributes. Attached is the complete current PEP draft, also available at http://python.sourceforge.net/peps/pep-0232.html The PEP has been moved back to Draft status, but will be Accepted and Finalized for Python 2.1. It will also be propagated forward for Python 2.2 for the next step in implementation. -Barry -------------------- snip snip -------------------- PEP: 232 Title: Function Attributes Version: $Revision: 1.6 $ Author: barry at digicool.com (Barry A. Warsaw) Status: Draft Type: Standards Track Created: 02-Dec-2000 Python-Version: 2.1 / 2.2 Post-History: 20-Feb-2001 Introduction This PEP describes an extension to Python, adding attribute dictionaries to functions and methods. This PEP tracks the status and ownership of this feature. It contains a description of the feature and outlines changes necessary to support the feature. This PEP summarizes discussions held in mailing list forums, and provides URLs for further information, where appropriate. The CVS revision history of this file contains the definitive historical record. Background Functions already have a number of attributes, some of which are writable, e.g. func_doc, a.k.a. func.__doc__. func_doc has the interesting property that there is special syntax in function (and method) definitions for implicitly setting the attribute. This convenience has been exploited over and over again, overloading docstrings with additional semantics. For example, John Aycock has written a system where docstrings are used to define parsing rules[1]. Zope's ZPublisher ORB[2] uses docstrings to signal "publishable" methods, i.e. methods that can be called through the web. And Tim Peters has developed a system called doctest[3], where docstrings actually contain unit tests. The problem with this approach is that the overloaded semantics may conflict with each other. For example, if we wanted to add a doctest unit test to a Zope method that should not be publishable through the web. Proposal This proposal adds a new dictionary to function objects, called func_dict (a.k.a. __dict__). This dictionary can be set and get using ordinary attribute set and get syntax. Methods also gain `getter' syntax, and they currently access the attribute through the dictionary of the underlying function object. It is not possible to set attributes on bound or unbound methods, except by doing so explicitly on the underlying function object. See the `Future Directions' discussion below for approaches in subsequent versions of Python. A function object's __dict__ can also be set, but only to a dictionary object (i.e. setting __dict__ to UserDict raises a TypeError). Examples Here are some examples of what you can do with this feature. 
    def a():
        pass
    a.publish = 1
    a.unittest = '''...'''

    if a.publish:
        print a()

    if hasattr(a, 'unittest'):
        testframework.execute(a.unittest)

    class C:
        def a(self):
            'just a docstring'
        a.publish = 1

    c = C()
    if c.a.publish:
        publish(c.a())

Other Uses

    Paul Prescod enumerated a bunch of other uses:

    http://mail.python.org/pipermail/python-dev/2000-April/003364.html

Future Directions

    - A previous version of this PEP (and the accompanying
      implementation) allowed for both setter and getter of attributes
      on unbound methods, and only getter on bound methods.  A number
      of problems were discovered with this policy.  Because method
      attributes were stored in the underlying function, this caused
      several potentially surprising results:

          class C:
              def a(self): pass

          c1 = C()
          c2 = C()
          c1.a.publish = 1
          # c2.a.publish would now be == 1 also!

      Because a change to `a' bound c1 also caused a change to `a'
      bound to c2, setting of attributes on bound methods was
      disallowed.  However, even allowing setting of attributes on
      unbound methods has its ambiguities:

          class D(C): pass
          class E(C): pass

          D.a.publish = 1
          # E.a.publish would now be == 1 also!

      For this reason, the current PEP disallows setting attributes on
      either bound or unbound methods, but does allow for getting
      attributes on either -- both return the attribute value on the
      underlying function object.

      The proposal for Python 2.2 is to implement setting (bound or
      unbound) method attributes by setting attributes on the instance
      or class, using special naming conventions.  I.e.

          class C:
              def a(self): pass

          C.a.publish = 1
          C.__a_publish__ == 1    # true

          c = C()
          c.a.publish = 2
          c.__a_publish__ == 2    # true

          d = C()
          d.__a_publish__ == 1    # true

      Here, a lookup on the instance would look to the instance's
      dictionary first, followed by a lookup on the class's
      dictionary, and finally a lookup on the function object's
      dictionary.

    - Currently, Python supports function attributes only on Python
      functions (i.e. those that are written in Python, not those that
      are built-in).  Should it be worthwhile, a separate patch can be
      crafted that will add function attributes to built-ins.

    - __doc__ is the only function attribute that currently has
      syntactic support for conveniently setting.  It may be
      worthwhile to eventually enhance the language for supporting
      easy function attribute setting.  Here are some syntaxes
      suggested by PEP reviewers:

          def a {
              'publish' : 1,
              'unittest': '''...''',
              }
              (args):
              # ...

          def a(args):
              """The usual docstring."""
              {'publish' : 1,
               'unittest': '''...''',
               # etc.
               }

      It isn't currently clear if special syntax is necessary or
      desirable.

Dissenting Opinion

    When this was discussed on the python-dev mailing list in April
    2000, a number of dissenting opinions were voiced.  For
    completeness, the discussion thread starts here:

    http://mail.python.org/pipermail/python-dev/2000-April/003361.html

    The dissenting arguments appear to fall under the following
    categories:

    - no clear purpose (what does it buy you?)

    - other ways to do it (e.g. mappings as class attributes)

    - useless until syntactic support is included

    Countering some of these arguments is the observation that with
    vanilla Python 2.0, __doc__ can in fact be set to any type of
    object, so some semblance of writable function attributes are
    already feasible.  But that approach is yet another corruption of
    __doc__.

    And while it is of course possible to add mappings to class
    objects (or in the case of function attributes, to the function's
    module), it is more difficult and less obvious how to extract the
    attribute values for inspection.
Finally, it may be desirable to add syntactic support, much the same way that __doc__ syntactic support exists. This can be considered separately from the ability to actually set and get function attributes. Reference Implementation The reference implementation is available on SourceForge as a patch against the Python CVS tree (patch #103123). This patch doesn't include the regrtest module and output file. Those are available upon request. http://sourceforge.net/patch/?func=detailpatch&patch_id=103123&group_id=5470 This patch has been applied and will become part of Python 2.1. References [1] Aycock, "Compiling Little Languages in Python", http://www.foretec.com/python/workshops/1998-11/proceedings/papers/aycock-little/aycock-little.html [2] http://classic.zope.org:8080/Documentation/Reference/ORB [3] ftp://ftp.python.org/pub/python/contrib-09-Dec-1999/System/doctest.py Copyright This document has been placed in the Public Domain. Local Variables: mode: indented-text indent-tabs-mode: nil End: From barry at digicool.com Tue Feb 20 21:58:43 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Tue, 20 Feb 2001 15:58:43 -0500 Subject: [Python-Dev] Embedding demos are broken Message-ID: <14994.55939.514084.356997@anthem.wooz.org> Something changed recently, and now the Demo/embed programs are broken, e.g. % ./loop pass 2 Could not find platform independent libraries 
                              
                              Could not find platform dependent libraries 
                              
Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>
                              ] 'import site' failed; use -v for traceback Segmentation fault (core dumped) The crash is happening in the second call to init_exceptions() (gdb) where #0 PyModule_GetDict (m=0x0) at Objects/moduleobject.c:40 #1 0x8075ea8 in init_exceptions () at Python/exceptions.c:1058 #2 0x8051880 in Py_Initialize () at Python/pythonrun.c:147 #3 0x80516db in main (argc=3, argv=0xbffffa34) at loop.c:28 because the attempt to import __builtin__ returns NULL. I don't have time right now to look any deeper, but I suspect that the crash may be due to changes in the semantics of PyImport_ImportModule() which now goes through __import__. I'm posting this in case someone with spare cycles can look at it. -Barry From guido at digicool.com Tue Feb 20 22:40:07 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 20 Feb 2001 16:40:07 -0500 Subject: [Python-Dev] Embedding demos are broken In-Reply-To: Your message of "Tue, 20 Feb 2001 15:58:43 EST." <14994.55939.514084.356997@anthem.wooz.org> References: <14994.55939.514084.356997@anthem.wooz.org> Message-ID: <200102202140.QAA06446@cj20424-a.reston1.va.home.com> > Something changed recently, and now the Demo/embed programs are > broken, e.g. > > % ./loop pass 2 > Could not find platform independent libraries 
                              
                              > Could not find platform dependent libraries 
                              
> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>
                              ] > 'import site' failed; use -v for traceback > Segmentation fault (core dumped) > > The crash is happening in the second call to init_exceptions() > > (gdb) where > #0 PyModule_GetDict (m=0x0) at Objects/moduleobject.c:40 > #1 0x8075ea8 in init_exceptions () at Python/exceptions.c:1058 > #2 0x8051880 in Py_Initialize () at Python/pythonrun.c:147 > #3 0x80516db in main (argc=3, argv=0xbffffa34) at loop.c:28 > > because the attempt to import __builtin__ returns NULL. I don't have > time right now to look any deeper, but I suspect that the crash may be > due to changes in the semantics of PyImport_ImportModule() which now > goes through __import__. > > I'm posting this in case someone with spare cycles can look at it. > > -Barry This was probably broken since PyImport_Import() was introduced in 1997! The code in PyImport_Import() tried to save itself a bit of work and save the __builtin__ module in a static variable. But this doesn't work across Py_Finalise()/Py_Initialize()! It also doesn't work when using multiple interpreter states created with PyInterpreterState_New(). So I'm ripping out this code. Looks like it's passing the test suite so I'm checking in the patch. It looks like we need a much more serious test suite for multiple interpreters and repeatedly initializing! --Guido van Rossum (home page: http://www.python.org/~guido/) From barry at digicool.com Tue Feb 20 22:55:58 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Tue, 20 Feb 2001 16:55:58 -0500 Subject: [Python-Dev] Embedding demos are broken References: <14994.55939.514084.356997@anthem.wooz.org> <200102202140.QAA06446@cj20424-a.reston1.va.home.com> Message-ID: <14994.59374.979694.249817@anthem.wooz.org> >>>>> "GvR" == Guido van Rossum 
                              
                              writes: GvR> This was probably broken since PyImport_Import() was GvR> introduced in 1997! Odd. It all worked the last time I touched those files a couple of weeks ago (when I was testing those progs against Insure). I'll do a CVS update and check again. Thanks. -Barry From guido at digicool.com Tue Feb 20 23:03:46 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 20 Feb 2001 17:03:46 -0500 Subject: [Python-Dev] Embedding demos are broken In-Reply-To: Your message of "Tue, 20 Feb 2001 16:55:58 EST." <14994.59374.979694.249817@anthem.wooz.org> References: <14994.55939.514084.356997@anthem.wooz.org> <200102202140.QAA06446@cj20424-a.reston1.va.home.com> <14994.59374.979694.249817@anthem.wooz.org> Message-ID: <200102202203.RAA06667@cj20424-a.reston1.va.home.com> > >>>>> "GvR" == Guido van Rossum 
                              
                              writes: > > GvR> This was probably broken since PyImport_Import() was > GvR> introduced in 1997! > > Odd. It all worked the last time I touched those files a couple of > weeks ago (when I was testing those progs against Insure). That's because then PyImport_ImportModule() wasn't synonymous with PyImport_Import(). > I'll do a CVS update and check again. Thanks. I'm sure it'll work. --Guido van Rossum (home page: http://www.python.org/~guido/) From barry at digicool.com Tue Feb 20 23:11:57 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Tue, 20 Feb 2001 17:11:57 -0500 Subject: [Python-Dev] Embedding demos are broken References: <14994.55939.514084.356997@anthem.wooz.org> <200102202140.QAA06446@cj20424-a.reston1.va.home.com> <14994.59374.979694.249817@anthem.wooz.org> Message-ID: <14994.60333.915783.456876@anthem.wooz.org> >>>>> "BAW" == Barry A Warsaw 
                              
                              writes: BAW> I'll do a CVS update and check again. Thanks. Works now, thanks. From MarkH at ActiveState.com Tue Feb 20 23:44:28 2001 From: MarkH at ActiveState.com (Mark Hammond) Date: Wed, 21 Feb 2001 09:44:28 +1100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <14994.33737.132255.466570@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: 
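A minimal reproduction of the combination Jeremy diagnosed earlier in this thread -- an exec statement in a function that also contains a nested function with free variables. The names here are invented; the error text is the one quoted at the start of the thread (Python 2 exec-statement syntax):

    def outer(source):
        exec source              # bare exec in the enclosing function
        def inner():
            return source        # 'source' is a free variable of inner()
        return inner()

    # With nested scopes enabled, the 2.1-era compiler refuses to compile
    # this, reporting that exec or 'import *' makes names ambiguous in a
    # nested scope.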
                              
                              > MH> As of the last 24 hours, it sits in the Mozilla CVS tree - > MH> extensions/python/xpcom :) > > Don't know where to find that :-) I could tell you if you like :) > >> but have you considered 'exec' as well as 'from foo import *' ? > > MH> exec appears exactly once, in a function in the "client" > MH> sub-package. > > Does the function that contains the exec also contain another function > or lambda? If it does and the contained function has references to > non-local variables, the compiler will complain. It appears this is the problem. The fact that only "__init__.py" was listed threw me - I have a few of them :) *sigh* - this is a real shame. IMO, we can't continue to break existing code, even if it is good for me! People are going to get mighty annoyed - I am. And if people on python-dev struggle with some of the new errors, the poor normal users are going to feel even more alienated. Mark. From guido at digicool.com Tue Feb 20 23:54:54 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 20 Feb 2001 17:54:54 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: Your message of "Wed, 21 Feb 2001 09:44:28 +1100." 
                              
                              References: 
                              
                              Message-ID: <200102202254.RAA07487@cj20424-a.reston1.va.home.com> > > Does the function that contains the exec also contain another function > > or lambda? If it does and the contained function has references to > > non-local variables, the compiler will complain. > > It appears this is the problem. The fact that only "__init__.py" was listed > threw me - I have a few of them :) > > *sigh* - this is a real shame. IMO, we can't continue to break existing > code, even if it is good for me! People are going to get mighty annoyed - I > am. And if people on python-dev struggle with some of the new errors, the > poor normal users are going to feel even more alienated. Sigh indeed. We could narrow it down to only raise the error if there are nested functions or lambdas that don't reference free variables, but unfortunately most of them will reference at least some builtin e.g. str()... How about the old fallback to using straight dict lookups when this combination of features is detected? --Guido van Rossum (home page: http://www.python.org/~guido/) From pedroni at inf.ethz.ch Wed Feb 21 02:22:38 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Wed, 21 Feb 2001 02:22:38 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: 
                              
                              <200102202254.RAA07487@cj20424-a.reston1.va.home.com> Message-ID: <006501c09ba4$c84857e0$605821c0@newmexico> Hello. > > > Does the function that contains the exec also contain another function > > > or lambda? If it does and the contained function has references to > > > non-local variables, the compiler will complain. > > > > It appears this is the problem. The fact that only "__init__.py" was listed > > threw me - I have a few of them :) > > > > *sigh* - this is a real shame. IMO, we can't continue to break existing > > code, even if it is good for me! People are going to get mighty annoyed - I > > am. And if people on python-dev struggle with some of the new errors, the > > poor normal users are going to feel even more alienated. > > Sigh indeed. We could narrow it down to only raise the error if there > are nested functions or lambdas that don't reference free variables, > but unfortunately most of them will reference at least some builtin > e.g. str()... > > How about the old fallback to using straight dict lookups when this > combination of features is detected? I'm posting an opinion on this subject because I'm implementing nested scopes in jython. It seems that we really should avoid breaking code using import * and exec, and to obtain this - I agree - the only way is to fall back to some straight dictionary lookup, when both import or exec and nested scopes are there But doing this AFAIK related to actual python nested scope impl and what I'm doing on jython side is quite messy, because we will need to keep around "chained" closures as entire dictionaries, because we don't know if an exec or import will hide some variable from an outer level, or add a new variable that then cannot be interpreted as a global one in nested scopes. This is IMO too much heavyweight. Another way is to use special rules (similar to those for class defs), e.g. having 
                              
y=3
def f():
    exec "y=2"
    def g():
        return y
    return g()

print f()
                               # print 3. Is that confusing for users? maybe they will more naturally expect 2 as outcome (given nested scopes). The last possibility (but I know this one has been somehow discarded) is to have scoping only if explicitly declared; I imagine something like 
                              
y=3
def f():
    let y
    exec "y=2"
    def g():
        return y
    return g()

print f()
                               # print 2. Issues with this: - with implicit scoping we naturally obtain that nested func defs can call themself recursively: * we can require a let for this too * we can introduce "horrible" things like 'defrec' or 'deflet' * we can have def imply a let: this breaks def get_str(): def str(v): return "str: "+str(v) return str but nested scopes as actually implemented already break that. - with this approach inner scopes can change the value of outer scope vars: this was considered a non-feature... - what's the gain with this approach? if we consider code like this: def f(str): # eg str = "y=z" from foo import * def g(): exec str return y return g without explicit 'let' decls if we want to compile this and not just say "you can't do that" the closure of g should be constructed out of the entire runtime namespace of f. With explicit 'let's in this case we would produce just the old code and semantic. If some 'let' would be added to f, we would know what part of the namespace of f should be used to construct the closure of g. In absence of import* and exec we could use the current fast approach to implement nested scopes, if they are there we would know what vars should be stored in cells and passed down to inner scopes. [We could have special locals dicts that can contain direct values or cells, and that would do the right indirect get and set for the cell-case. These dict could also be possibly returned by "locals()" and that would be the way to implement exec "spam", just equivalently as exec "spam" in globals(),locals(). import * would have just the assignement semantic. ] Very likely I'm missing something, but from my "external" viewpoint I would have preferred such solution. IMO maybe it would be good to think about this, because differently as expected implicit scoping has consequences that we would better avoid. Is too late for that (having feature freeze)? regards, Samuele Pedroni. From skip at mojam.com Wed Feb 21 03:00:42 2001 From: skip at mojam.com (Skip Montanaro) Date: Tue, 20 Feb 2001 20:00:42 -0600 (CST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <200102202254.RAA07487@cj20424-a.reston1.va.home.com> References: 
                              
                              <200102202254.RAA07487@cj20424-a.reston1.va.home.com> Message-ID: <14995.8522.253084.230222@beluga.mojam.com> Guido> Sigh indeed.... Guido> How about the old fallback to using straight dict lookups when Guido> this combination of features is detected? This probably won't be a very popular suggestion, but how about pulling nested scopes (I assume they are at the root of the problem) until this can be solved cleanly? Skip From guido at digicool.com Wed Feb 21 03:53:03 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 20 Feb 2001 21:53:03 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: Your message of "Wed, 21 Feb 2001 02:22:38 +0100." <006501c09ba4$c84857e0$605821c0@newmexico> References: 
                              
                              <200102202254.RAA07487@cj20424-a.reston1.va.home.com> <006501c09ba4$c84857e0$605821c0@newmexico> Message-ID: <200102210253.VAA08462@cj20424-a.reston1.va.home.com> > > How about the old fallback to using straight dict lookups when this > > combination of features is detected? > > I'm posting an opinion on this subject because I'm implementing > nested scopes in jython. > > It seems that we really should avoid breaking code using import * > and exec, and to obtain this - I agree - the only way is to fall > back to some straight dictionary lookup, when both import or exec > and nested scopes are there > > But doing this AFAIK related to actual python nested scope impl and > what I'm doing on jython side is quite messy, because we will need > to keep around "chained" closures as entire dictionaries, because we > don't know if an exec or import will hide some variable from an > outer level, or add a new variable that then cannot be interpreted > as a global one in nested scopes. This is IMO too much heavyweight. > > Another way is to use special rules > (similar to those for class defs), e.g. having > > 
                              
> y=3
> def f():
>     exec "y=2"
>     def g():
>         return y
>     return g()
>
> print f()
>
                               > > # print 3. > > Is that confusing for users? maybe they will more naturally expect 2 > as outcome (given nested scopes). This seems the best compromise to me. It will lead to the least broken code, because this is the behavior that we had before nested scopes! It is also quite easy to implement given the current implementation, I believe. Maybe we could introduce a warning rather than an error for this situation though, because even if this behavior is clearly documented, it will still be confusing to some, so it is better if we outlaw it in some future version. --Guido van Rossum (home page: http://www.python.org/~guido/) From MarkH at ActiveState.com Wed Feb 21 03:58:18 2001 From: MarkH at ActiveState.com (Mark Hammond) Date: Wed, 21 Feb 2001 13:58:18 +1100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <14995.8522.253084.230222@beluga.mojam.com> Message-ID: 
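For readers skimming the thread, this is what the contested feature looks like when it does work, in its optional Python 2.1 form -- a minimal sketch with invented names, not code taken from the discussion:

    from __future__ import nested_scopes

    def make_adder(n):
        def add(x):
            return x + n     # 'n' is resolved from the enclosing scope
        return add

    add3 = make_adder(3)
    print add3(7)            # -> 10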
                              
                              > This probably won't be a very popular suggestion, but how about pulling > nested scopes (I assume they are at the root of the problem) > until this can be solved cleanly? Agreed. While I think nested scopes are kinda cool, I have lived without them, and really without missing them, for years. At the moment the cure appears worse then the symptoms in at least a few cases. If nothing else, it compromises the elegant simplicity of Python that drew me here in the first place! Assuming that people really _do_ want this feature, IMO the bar should be raised so there are _zero_ backward compatibility issues. Mark. From MarkH at ActiveState.com Wed Feb 21 04:08:01 2001 From: MarkH at ActiveState.com (Mark Hammond) Date: Wed, 21 Feb 2001 14:08:01 +1100 Subject: [Python-Dev] Modulefinder? In-Reply-To: <00a401c09a7a$6d2060e0$e000a8c0@thomasnotebook> Message-ID: 
                              
                              [Thomas H] > That's not what I wanted to hear ;-), but anyway, since you > answered, I assume you have something to do with it. I stuck my finger in it once :) > 1. Modulefinder does not handle cases where packages export names > referring to functions or variables, rather than modules. > Maybe the scan_code method, which looks for IMPORT opcode, > could be extended to handle STORE_NAME opcodes which are not > preceeded by IMPORT opcodes. > > 2. Modulefinder uses imp.find_module to find modules, and > partly catches ImportErrors. imp.find_module can also > raise NameErrors on windows, if the case does not fit. > They should be catched. They both sound fine to me. > 3. Weird idea (?): Modulefinder could try instead of only > scanning the opcodes to actually _import_ modules (at least > extension modules, otherwise it will not find _any_ dependencies). There was some reluctance to do this for freeze, and hence Modulefinder was born. I agree it may make sense in some cases to do this, but it shouldn't be a default action. Mark. From akuchlin at cnri.reston.va.us Wed Feb 21 04:29:36 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Tue, 20 Feb 2001 22:29:36 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: 
                              
                              ; from MarkH@ActiveState.com on Wed, Feb 21, 2001 at 01:58:18PM +1100 References: <14995.8522.253084.230222@beluga.mojam.com> 
                              
                              Message-ID: <20010220222936.A2477@newcnri.cnri.reston.va.us> On Wed, Feb 21, 2001 at 01:58:18PM +1100, Mark Hammond wrote: >Assuming that people really _do_ want this feature, IMO the bar should be >raised so there are _zero_ backward compatibility issues. Even at the cost of additional implementation complexity? At the cost of having to learn "scopes are nested, unless you do these two things in which case they're not"? Let's not waffle. If nested scopes are worth doing, they're worth breaking code. Either leave exec and from..import illegal, or back out nested scopes, or think of some better solution, but let's not introduce complicated backward compatibility hacks. --amk From MarkH at ActiveState.com Wed Feb 21 05:11:46 2001 From: MarkH at ActiveState.com (Mark Hammond) Date: Wed, 21 Feb 2001 15:11:46 +1100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <20010220222936.A2477@newcnri.cnri.reston.va.us> Message-ID: 
                              
                              > Even at the cost of additional implementation complexity? I can only assume you are serious. IMO, absolutely! > Let's not waffle. Agreed. IMO we are starting to waffle the minute we ignore backwards compatibility. If a new feature can't be added without breaking code that was not previously documented as illegal, then IMO it is simply a non-starter until Py3k. Indeed, I seem to recall many worthwhile features being added to the Py3k bit-bucket for exactly that reason. Mark. From jeremy at alum.mit.edu Wed Feb 21 05:22:16 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 20 Feb 2001 23:22:16 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <14995.8522.253084.230222@beluga.mojam.com> References: 
                              
                              <200102202254.RAA07487@cj20424-a.reston1.va.home.com> <14995.8522.253084.230222@beluga.mojam.com> Message-ID: <14995.17016.98294.378337@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "SM" == Skip Montanaro 
                              
writes: Guido> Sigh indeed.... It sounds like the real source of frustration was the confusing error message. I'd rather fix the error message. Guido> How about the old fallback to using straight dict lookups Guido> when this combination of features is detected? Straight dict lookups isn't sufficient for most cases, because the question is one of whether to build a closure or not.

def f():
    from module import *
    def g(l):
        len(l)

If len is not defined in f, then the compiler generates a LOAD_GLOBAL for len. If it is defined in f, then it creates a closure for g (MAKE_CLOSURE instead of MAKE_FUNCTION), generating a LOAD_DEREF for len. As far as I can tell, there's no trivial change that will make this work. SM> This probably won't be a very popular suggestion, but how about SM> pulling nested scopes (I assume they are at the root of the SM> problem) until this can be solved cleanly? Not popular with me <0.5 wink>, but only because I don't think this is a problem that can be "solved" cleanly. I think it's far from obvious what the code example above should do in the case where module defines the name len. Posters of c.l.py have suggested both alternatives as the logical choice: (1) import * is dynamic so the static scoping rule ignores the names it introduces, (2) Python is a late binding language so the name binding introduced by import * is used. Jeremy From jeremy at alum.mit.edu Wed Feb 21 05:24:40 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 20 Feb 2001 23:24:40 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <20010220222936.A2477@newcnri.cnri.reston.va.us> References: <14995.8522.253084.230222@beluga.mojam.com>
                              
                              <20010220222936.A2477@newcnri.cnri.reston.va.us> Message-ID: <14995.17160.411136.109911@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "AMK" == Andrew Kuchling 
                              
                              writes: AMK> On Wed, Feb 21, 2001 at 01:58:18PM +1100, Mark Hammond wrote: >> Assuming that people really _do_ want this feature, IMO the bar >> should be raised so there are _zero_ backward compatibility >> issues. AMK> Even at the cost of additional implementation complexity? At AMK> the cost of having to learn "scopes are nested, unless you do AMK> these two things in which case they're not"? AMK> Let's not waffle. If nested scopes are worth doing, they're AMK> worth breaking code. Either leave exec and from..import AMK> illegal, or back out nested scopes, or think of some better AMK> solution, but let's not introduce complicated backward AMK> compatibility hacks. Well said. Jeremy From jeremy at alum.mit.edu Wed Feb 21 05:28:20 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 20 Feb 2001 23:28:20 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: 
                              
                              References: <14995.8522.253084.230222@beluga.mojam.com> 
                              
                              Message-ID: <14995.17380.172705.843973@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "MH" == Mark Hammond 
                              
writes: >> This probably won't be a very popular suggestion, but how about >> pulling nested scopes (I assume they are at the root of the >> problem) until this can be solved cleanly? MH> Agreed. While I think nested scopes are kinda cool, I have MH> lived without them, and really without missing them, for years. MH> At the moment the cure appears worse then the symptoms in at MH> least a few cases. If nothing else, it compromises the elegant MH> simplicity of Python that drew me here in the first place! Mark, I'll buy that you're suffering at the moment, but I'm not sure why. You have a lot of code that uses 'from ... import *' inside functions. If so, that's the source of the compatibility problem. If you had a tool that listed all the code that needed to be fixed and/or you got tracebacks that highlighted the offending line rather than some import, would you still be suffering? It sounds like the problem wouldn't be much harder than multi-argument append at that point. I also disagree strongly with the argument that nested scopes compromise the elegant simplicity of Python. Did you really look at Python and say, "None of those stinking scoping rules. Let me at it." 
                              
                              I think the new rules are different, but no more or less complex than the old ones. Jeremy From MarkH at ActiveState.com Wed Feb 21 06:27:44 2001 From: MarkH at ActiveState.com (Mark Hammond) Date: Wed, 21 Feb 2001 16:27:44 +1100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <14995.17380.172705.843973@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: 
                              
                              [Jeremy] > I'll buy that you're suffering at the moment, but I'm not sure why. I apologize if I sounded antagonistic. > You have a lot of code that uses 'from ... import *' inside > functions. If so, that's the source of the compatibility problem. > If you had a tool that listed all the code that needed to be fixed > and/or you got tracebacks that highlighted the offending line rather > than some import, would you still be suffering? The point isn't about my suffering as such. The point is more that python-dev owns a tiny amount of the code out there, and I don't believe we should put Python's users through this. Sure - I would be happy to "upgrade" all the win32all code, no problem. I am also happy to live in the bleeding edge and take some pain that will cause. The issue is simply the user base, and giving Python a reputation of not being able to painlessly upgrade even dot revisions. > It sounds like the > problem wouldn't be much harder then multi-argument append at that > point. Yup. I changed my code in relative silence on that issue, but believe we should not have been so hasty. Now we have warnings, I believe that would have been handled slightly differently if done today. It also had existing documentation to back it. Further, I believe that issue has contributed to a "no painless upgrade" perception already existing in some people's minds. > I also disagree strongly with the argument that nested scopes > compromise the elegent simplicity of Python. Did you really look at > Python and say, "None of those stinking scoping rules. Let me at it." > 
                              
                              I think the new rules are different, but no more or less > complex than the old ones. exec and eval take 2 dicts - there were 2 namespaces. I certainly have missed nested scopes, but instead of "let me at it", I smiled at the elegance and simplicity it buys me. I also didn't have to worry about "namespace clashes", and obscure rules. I wrote code the way I saw fit at the time, and didn't have to think about scoping rules. Even if we ignore existing code breaking, it is almost certain I would have coded the function the same way, got the syntax error, tried to work out exactly what it was complaining about, and adjust my code accordingly. Python is generally bereft of such rules, and all the more attractive for it. So I am afraid my perception remains. That said, I am not against nested scopes as Itrust the judgement of people smarter than I. However, I am against code breakage that is somehow "good for me", and suspect many other Python users are too. Just-one-more-reason-why-I-aint-the-BDFL-
                              
                              ly, Mark. Mark. From thomas at xs4all.net Wed Feb 21 07:47:10 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Wed, 21 Feb 2001 07:47:10 +0100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <20010220222936.A2477@newcnri.cnri.reston.va.us>; from akuchlin@cnri.reston.va.us on Tue, Feb 20, 2001 at 10:29:36PM -0500 References: <14995.8522.253084.230222@beluga.mojam.com> 
                              
                              <20010220222936.A2477@newcnri.cnri.reston.va.us> Message-ID: <20010221074710.E13911@xs4all.nl> On Tue, Feb 20, 2001 at 10:29:36PM -0500, Andrew Kuchling wrote: > Let's not waffle. If nested scopes are worth doing, they're worth > breaking code. I'm sorry, but that's bull -- I mean, I disagree completely. Nested scopes *are* a nice feature, but if we can't do them without breaking code in weird ways, we shouldn't, or at least *not yet*. I am still uneasy by the restrictions seemingly created just to facilitate the implementation issues of nested scopes, but I could live with them if they had been generating warnings at least one release, preferably more. I'm probably more conservative than most people here, in that aspect, but I believe I am right in it ;) Consider the average Joe User attempting to upgrade. He has to decide whether any of his scripts suffer from the upgrade, and then has to figure out how to fix them. In a case like Mark had, he is very likely to just give up and not upgrade, cursing Python while he's doing it. Now consider a site admin (which I happen to be,) who has to make that decision for all the people on the site -- which can be tens of thousands of people. There is no way he is going to test all scripts, he is lucky to know who even *uses* Python. He can probably live with a clean error that is an obvious fix; that's part of upgrading. Some weird error that doesn't point to a fix, and a weird, inconsequential fix in the first place isn't going to make him confident in upgrading. Now consider a distribution maintainer, who has to make that decision for potentially millions, many of which are site maintainers. He is not a happy camper. I was annoyed by the socket.socket() change in 2.0, but at least we could pretend 1.6 was a real release and that there was a lot of advance warning. In this case, however, we had several instances of the 'bug' in the standard library itself, which a lot of people use as code examples. I have yet to see a book or tutorial that lists from-foo-import-* in a local scope as illegal, and I have yet to see *anything* that lists 'exec' (not 'in' something) in a local scope as illegal. Nevertheless, those two will seem to be breaking the code now. > Either leave exec and from..import illegal, or back > out nested scopes, or think of some better solution, but let's not > introduce complicated backward compatibility hacks. We already *have* complicated backward compatibility hacks, though they are masked as optimizations now. from-foo-import-* and exec are legal in a function scope as long as you don't have a nested scope that references a non-local name. -- Thomas Wouters 
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From pedroni at inf.ethz.ch Wed Feb 21 15:46:40 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Wed, 21 Feb 2001 15:46:40 +0100 (MET) Subject: [Python-Dev] Those import related syntax errors again... Message-ID: <200102211446.PAA07183@core.inf.ethz.ch> Hi. [Mark Hammond] > The point isn't about my suffering as such. The point is more that > python-dev owns a tiny amount of the code out there, and I don't believe we > should put Python's users through this. > > Sure - I would be happy to "upgrade" all the win32all code, no problem. I > am also happy to live in the bleeding edge and take some pain that will > cause. > > The issue is simply the user base, and giving Python a reputation of not > being able to painlessly upgrade even dot revisions. I agree with all this. [As I imagined explicit syntax did not catch up and would require lot of discussions.] [GvR] > > Another way is to use special rules > > (similar to those for class defs), e.g. having > > > > 
                              
> > y=3
> > def f():
> >     exec "y=2"
> >     def g():
> >         return y
> >     return g()
> >
> > print f()
> >
                               > > > > # print 3. > > > > Is that confusing for users? maybe they will more naturally expect 2 > > as outcome (given nested scopes). > > This seems the best compromise to me. It will lead to the least > broken code, because this is the behavior that we had before nested > scopes! It is also quite easy to implement given the current > implementation, I believe. > > Maybe we could introduce a warning rather than an error for this > situation though, because even if this behavior is clearly documented, > it will still be confusing to some, so it is better if we outlaw it in > some future version. > Yes this can be easy to implement but more confusing situations can arise: 
                              
y=3
def f():
    y=9
    exec "y=2"
    def g():
        return y
    return y,g()

print f()
What should this print? The situation does not lead to a canonical solution the way class def scopes do. Or:
                              
def f():
    from foo import *
    def g():
        return y
    return g()

print f()
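[A hedged editorial sketch, not part of Samuele's message: the ambiguity in the example above goes away as soon as the name is bound explicitly, because the compiler can then see that y is a local of f and knows whether g needs a closure. foo and y are just the placeholder names from the example.]

def f():
    from foo import y    # explicit binding: y is visibly a local of f
    def g():
        return y         # with nested scopes this is f's y; under the old
                         # two-namespace rules it is a global lookup
    return g()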
                               [Mark Hammond] > > This probably won't be a very popular suggestion, but how about pulling > > nested scopes (I assume they are at the root of the problem) > > until this can be solved cleanly? > > Agreed. While I think nested scopes are kinda cool, I have lived without > them, and really without missing them, for years. At the moment the cure > appears worse then the symptoms in at least a few cases. If nothing else, > it compromises the elegant simplicity of Python that drew me here in the > first place! > > Assuming that people really _do_ want this feature, IMO the bar should be > raised so there are _zero_ backward compatibility issues. I don't say anything about pulling nested scopes (I don't think my opinion can change things in this respect) but I should insist that without explicit syntax IMO raising the bar has a too high impl cost (both performance and complexity) or creates confusion. [Andrew Kuchling] > >Assuming that people really _do_ want this feature, IMO the bar should be > >raised so there are _zero_ backward compatibility issues. > > Even at the cost of additional implementation complexity? At the cost > of having to learn "scopes are nested, unless you do these two things > in which case they're not"? > > Let's not waffle. If nested scopes are worth doing, they're worth > breaking code. Either leave exec and from..import illegal, or back > out nested scopes, or think of some better solution, but let's not > introduce complicated backward compatibility hacks. IMO breaking code would be ok if we issue warnings today and implement nested scopes issuing errors tomorrow. But this is simply a statement about principles and raised impression. IMO import * in an inner scope should end up being an error, not sure about 'exec's. We will need a final BDFL statement. regards, Samuele Pedroni. From fredrik at pythonware.com Wed Feb 21 08:48:51 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Wed, 21 Feb 2001 08:48:51 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: 
                              
                              Message-ID: <019001c09bda$ffb6f4d0$e46940d5@hagrid> mark wrote: > Agreed. While I think nested scopes are kinda cool, I have lived without > them, and really without missing them, for years. in addition, it breaks existing code, all existing books, and several tools. doesn't sound like it really belongs in a X.1 release... maybe it should be ifdef'ed out, and not switched on by default until we reach 3.0? Cheers /F From jeremy at alum.mit.edu Wed Feb 21 15:56:40 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 09:56:40 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <20010221074710.E13911@xs4all.nl> References: <14995.8522.253084.230222@beluga.mojam.com> 
                              
                              <20010220222936.A2477@newcnri.cnri.reston.va.us> <20010221074710.E13911@xs4all.nl> Message-ID: <14995.55080.928806.56317@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "TW" == Thomas Wouters 
                              
                              writes: TW> On Tue, Feb 20, 2001 at 10:29:36PM -0500, Andrew Kuchling wrote: >> Let's not waffle. If nested scopes are worth doing, they're >> worth breaking code. TW> I'm sorry, but that's bull -- I mean, I disagree TW> completely. Nested scopes *are* a nice feature, but if we can't TW> do them without breaking code in weird ways, we shouldn't, or at TW> least *not yet*. I am still uneasy by the restrictions seemingly TW> created just to facilitate the implementation issues of nested TW> scopes, but I could live with them if they had been generating TW> warnings at least one release, preferably more. A note of clarification seems important here: The restrictions are not being introduced to simplify the implementation. They're being introduced because there is no sensible meaning for code that uses import * and nested scopes with free variables. There are two possible meanings, each plausible and neither satisfying. Jeremy From jeremy at alum.mit.edu Wed Feb 21 16:01:07 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 10:01:07 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <019001c09bda$ffb6f4d0$e46940d5@hagrid> References: 
                              
                              <019001c09bda$ffb6f4d0$e46940d5@hagrid> Message-ID: <14995.55347.92892.762336@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "FL" == Fredrik Lundh 
                              
                              writes: FL> doesn't sound like it really belongs in a X.1 release... So if we called the next release Python 3.0, it would be okay? it's-only-for-marketing-reasons-that-we-have-2.0-ly y'rs, Jeremy From jack at oratrix.nl Wed Feb 21 16:06:34 2001 From: jack at oratrix.nl (Jack Jansen) Date: Wed, 21 Feb 2001 16:06:34 +0100 Subject: [Python-Dev] Strange import behaviour, recently introduced Message-ID: <20010221150634.AB6ED371690@snelboot.oratrix.nl> I'm running into strange problems with import in frozen Mac programs. On the Mac a program is frozen in a rather different way from how it happens on Unix/Windows: basically all .pyc files are stuffed into resources, and if the import code comes across a file on sys.path it will look for PYC resources in that file. So, you freeze a program by stuffing all your modules into the interpreter executable as PYC resources and setting sys.path to contain only the executable file, basically. This week I noticed that these resource imports have suddenly become very very slow. Whereas startup time of my application used to be around 2 seconds (where the non-frozen version took 6 seconds) it now takes almost 20 times as long. The non-frozen version still takes 6 seconds. I suspect this may have something to do with recent mods to the import code, but attempts to pinpoint the problem have failed so far (somehow the profiler crashes my app). I've put a breakpoint at import.c:check_case(), and it isn't hit (as is to be expected), so that isn't the problem. Does anyone have a hint for where I could start looking? -- Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++ Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++ www.oratrix.nl/~jack | ++++ see http://www.xs4all.nl/~tank/ ++++ From pedroni at inf.ethz.ch Wed Feb 21 16:10:26 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Wed, 21 Feb 2001 16:10:26 +0100 (MET) Subject: [Python-Dev] Those import related syntax errors again... Message-ID: <200102211510.QAA07814@core.inf.ethz.ch> This is becoming too much politics. > > >>>>> "TW" == Thomas Wouters 
                              
                              writes: > > TW> On Tue, Feb 20, 2001 at 10:29:36PM -0500, Andrew Kuchling wrote: > >> Let's not waffle. If nested scopes are worth doing, they're > >> worth breaking code. > > TW> I'm sorry, but that's bull -- I mean, I disagree > TW> completely. Nested scopes *are* a nice feature, but if we can't > TW> do them without breaking code in weird ways, we shouldn't, or at > TW> least *not yet*. I am still uneasy by the restrictions seemingly > TW> created just to facilitate the implementation issues of nested > TW> scopes, but I could live with them if they had been generating > TW> warnings at least one release, preferably more. > > A note of clarification seems important here: The restrictions are > not being introduced to simplify the implementation. They're being > introduced because there is no sensible meaning for code that uses > import * and nested scopes with free variables. There are two > possible meanings, each plausible and neither satisfying. > I think that y=3 def f(): exec "y=2" def g() return y return g() with f() returning 2 would make sense (given python dynamic nature). But it is not clear if we can reach consensus on the this or another semantic. (Implementing this would be ugly, but this is not the point). On the other hand just saying that new feature X make code Y (previously valid) meaningless and so the unique solution is to discard Y as garbage, is something that cannot be sold for cheap. I have the feeling that this is the *point*. regards, Samuele Pedroni. From tony at lsl.co.uk Wed Feb 21 11:06:34 2001 From: tony at lsl.co.uk (Tony J Ibbs (Tibs)) Date: Wed, 21 Feb 2001 10:06:34 -0000 Subject: [Python-Dev] RE: Update to PEP 232 In-Reply-To: <14994.53768.767065.272158@anthem.wooz.org> Message-ID: <000901c09bed$f861d750$f05aa8c0@lslp7o.int.lsl.co.uk> Small pedantry (there's another sort?) I note that: > - __doc__ is the only function attribute that currently has > syntactic support for conveniently setting. It may be > worthwhile to eventually enhance the language for supporting > easy function attribute setting. Here are some syntaxes > suggested by PEP reviewers: [...elided to save space!...] > It isn't currently clear if special syntax is necessary or > desirable. has not been changed since the last version of the PEP. I suggest that it be updated in two ways: 1. Clarify the final statement - I seem to have the impression (sorry, can't find a message to back it up) that either the BDFL or Tim Peters is very against anything other than the "simple" #f.a = 1# sort of thing - unless I'm mischannelling (?) again. 2. Reference the thread/idea a little while back that ended with #def f(a,b) having (publish=1)# - it's certainly no *worse* than the proposals in the PEP! (Michael Hudson got as far as a patch, I think). Tibs -- Tony J Ibbs (Tibs) http://www.tibsnjoan.co.uk/ then-again-i-confuse-easily
                              
                              -ly y'rs - tim That's true -- I usually feel confused after reading one of your posts. - Aahz My views! Mine! Mine! (Unless Laser-Scan ask nicely to borrow them.) From pedroni at inf.ethz.ch Wed Feb 21 14:04:26 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Wed, 21 Feb 2001 14:04:26 +0100 (MET) Subject: [Python-Dev] Those import related syntax errors again... Message-ID: <200102211304.OAA29179@core.inf.ethz.ch> Hi. [As I imagined explicit syntax did not catch up and would require lot of discussions.] [GvR] > > Another way is to use special rules > > (similar to those for class defs), e.g. having > > > > 
                              
> > y=3
> > def f():
> >     exec "y=2"
> >     def g():
> >         return y
> >     return g()
> >
> > print f()
> >
                               > > > > # print 3. > > > > Is that confusing for users? maybe they will more naturally expect 2 > > as outcome (given nested scopes). > > This seems the best compromise to me. It will lead to the least > broken code, because this is the behavior that we had before nested > scopes! It is also quite easy to implement given the current > implementation, I believe. > > Maybe we could introduce a warning rather than an error for this > situation though, because even if this behavior is clearly documented, > it will still be confusing to some, so it is better if we outlaw it in > some future version. > Yes this can be easy to implement but more confusing situations can arise: 
                              
y=3
def f():
    y=9
    exec "y=2"
    def g():
        return y
    return y,g()

print f()
What should this print? The situation does not lead to a canonical solution the way class def scopes do. Or:
                              
def f():
    from foo import *
    def g():
        return y
    return g()

print f()
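[A hedged editorial sketch, not part of the message: for the exec example a few lines up, the explicit-namespace form of exec -- which Jeremy notes elsewhere in the thread is unproblematic -- sidesteps the question entirely, because the executed string can no longer rebind f's locals. The name ns is purely illustrative.]

y=3

def f():
    y=9
    ns = {}
    exec "y=2" in ns     # assigns into ns, never into f's locals
    def g():
        return y         # f's y under nested scopes; the global y under the old rules
    return ns["y"], y, g()

print f()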
                               [Mark Hammond] > > This probably won't be a very popular suggestion, but how about pulling > > nested scopes (I assume they are at the root of the problem) > > until this can be solved cleanly? > > Agreed. While I think nested scopes are kinda cool, I have lived without > them, and really without missing them, for years. At the moment the cure > appears worse then the symptoms in at least a few cases. If nothing else, > it compromises the elegant simplicity of Python that drew me here in the > first place! > > Assuming that people really _do_ want this feature, IMO the bar should be > raised so there are _zero_ backward compatibility issues. I don't say anything about pulling nested scopes (I don't think my opinion can change things in this respect) but I should insist that without explicit syntax IMO raising the bar has a too high impl cost (both performance and complexity) or creates confusion. [Andrew Kuchling] > >Assuming that people really _do_ want this feature, IMO the bar should be > >raised so there are _zero_ backward compatibility issues. > > Even at the cost of additional implementation complexity? At the cost > of having to learn "scopes are nested, unless you do these two things > in which case they're not"? > > Let's not waffle. If nested scopes are worth doing, they're worth > breaking code. Either leave exec and from..import illegal, or back > out nested scopes, or think of some better solution, but let's not > introduce complicated backward compatibility hacks. IMO breaking code would be ok if we issue warnings today and implement nested scopes issuing errors tomorrow. But this is simply a statement about principles and raised impression. IMO import * in an inner scope should end up being an error, not sure about 'exec's. We should hear Jeremy H. position and we will need a final BDFL statement. regards, Samuele Pedroni. From skip at mojam.com Wed Feb 21 14:46:27 2001 From: skip at mojam.com (Skip Montanaro) Date: Wed, 21 Feb 2001 07:46:27 -0600 (CST) Subject: [Python-Dev] I think it's time to give import * the heave ho Message-ID: <14995.50867.445071.218779@beluga.mojam.com> Jeremy> Posters of c.l.py have suggested both alternatives as the Jeremy> logical choice: (1) import * is dynamic so the static scoping Jeremy> rule ignores the names it introduces, Bad alternative. import * works just fine today and is very mature, well understood functionality. This would introduce a special case that is going to confuse people. Jeremy> (2) Python is a late binding language so the name binding Jeremy> introduced by import * is used. This has to be the only reasonable alternative. Nonetheless, as mature and well understood as import * is, the fact that it can import a variable number of unknown arguments into the current namespace creates problems. It interferes with attempts at optimization, it can introduce bugs by importing unwanted symbols, it forces programmers writing code that might be imported that way to work to keep their namespaces clean, and it encourages complications like __all__ to try and avoid namespace pollution. Now it interferes with nested scopes. There are probably more problems I haven't thought of and new ones will probably crop up in the future. The use of import * is generally discouraged in all but well-defined cases ("from Tkinter import *", "from types import *") where the code was specifically written to be imported that way. For notational brevity in interactive use you can use import as (e.g., "import Tkinter as tk"). 
For use in modules and scripts it's probably best to simply use import module or explicitly grab the names you need from the module you're importing ("from types import StringType, ListType"). Both would improve the readability of the importing code. The only place I can see its use being more than a notational convenience is in wrapper modules like os and re and even there, it can be avoided. I believe in the long haul the correct thing to do is to deprecate import *. Skip From skip at mojam.com Wed Feb 21 14:47:59 2001 From: skip at mojam.com (Skip Montanaro) Date: Wed, 21 Feb 2001 07:47:59 -0600 (CST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <019001c09bda$ffb6f4d0$e46940d5@hagrid> References: 
                              
                              <019001c09bda$ffb6f4d0$e46940d5@hagrid> Message-ID: <14995.50959.711260.497189@beluga.mojam.com> Fredrik> maybe it should be ifdef'ed out, and not switched on by default Fredrik> until we reach 3.0? I think that's a very reasonable path to take. Skip From fredrik at pythonware.com Wed Feb 21 16:30:35 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Wed, 21 Feb 2001 16:30:35 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: 
                              
                              <019001c09bda$ffb6f4d0$e46940d5@hagrid> <14995.55347.92892.762336@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <02a701c09c1b$40441e70$0900a8c0@SPIFF> > FL> doesn't sound like it really belongs in a X.1 release... > > So if we called the next release Python 3.0, it would be okay? yes. (but in case you do, I'm pretty sure someone else will release a 2.1 consisting of 2.0 plus all 2.0-compatible parts from 3.0) Cheers /F From fredrik at pythonware.com Wed Feb 21 16:42:35 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Wed, 21 Feb 2001 16:42:35 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: <200102211510.QAA07814@core.inf.ethz.ch> Message-ID: <02bc01c09c1c$e9eb1950$0900a8c0@SPIFF> Samuele wrote: > On the other hand just saying that new feature X make code Y (previously valid) > meaningless and so the unique solution is to discard Y as garbage, > is something that cannot be sold for cheap. I have the feeling that this > is the *point*. exactly. I don't mind new features if I can chose to ignore them... Cheers /F From akuchlin at mems-exchange.org Wed Feb 21 15:56:25 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Wed, 21 Feb 2001 09:56:25 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <200102211446.PAA07183@core.inf.ethz.ch>; from pedroni@inf.ethz.ch on Wed, Feb 21, 2001 at 03:46:40PM +0100 References: <200102211446.PAA07183@core.inf.ethz.ch> Message-ID: <20010221095625.A29605@ute.cnri.reston.va.us> On Wed, Feb 21, 2001 at 03:46:40PM +0100, Samuele Pedroni wrote: >IMO breaking code would be ok if we issue warnings today and implement >nested scopes issuing errors tomorrow. But this is simply a statement >about principles and raised impression. Agreed. So maybe that's the best solution: pull nested scopes from 2.1 and add a warning for from...import (and exec?) inside a function using nested scopes, and only add nested scopes in 2.2, after everyone has had 6 months or a year to fix their code. --amk From jeremy at alum.mit.edu Wed Feb 21 17:22:35 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 11:22:35 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <02a701c09c1b$40441e70$0900a8c0@SPIFF> References: 
                              
<019001c09bda$ffb6f4d0$e46940d5@hagrid> <14995.55347.92892.762336@w221.z064000254.bwi-md.dsl.cnc.net> <02a701c09c1b$40441e70$0900a8c0@SPIFF> Message-ID: <14995.60235.591304.213358@w221.z064000254.bwi-md.dsl.cnc.net> I did a brief review of three Python projects to see how they use import * and exec and to assess how much code will break in these projects.

Project    Python files    Lines of       import *    exec       illegal
                           Python code    in func     in func    exec
Python     1127            113443         4?          <57        0
Zope2      469             71370          0           15         1
PyXPCOM    26              2611           0           1          1

(excluding comment lines)

The numbers are a little rough for Python, because I think I've fixed all the problems. As I recall, there were four instances of import * being used in a function. I think two of those would still be flagged as errors, while two would be allowed under the current rules (only barred when the current func contains another that has free variables). There is one illegal exec in Zope and one in PyXPCOM as Mark well knows. That makes a total of 4 fixes in almost 200,000 lines of code. These fixes should be pretty easy. The code won't compile until it's fixed. One could imagine many worse problems, like code that runs but has a different meaning. I should be able to fix the tracebacks so they indicate the source of the problem more clearly. I also realized that the exec rule is still too strict. If the exec statement passes an explicit namespace -- "exec in foo" -- then there shouldn't be any problem, because the executed code can't affect the current namespace. If this form is allowed, the exec errors in xpcom and Zope disappear. It would be instructive to hear if the data would look different if I chose different projects. Perhaps the particular examples I chose are simply examples of excellent coding style by master programmers. Jeremy From pedroni at inf.ethz.ch Wed Feb 21 17:33:02 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Wed, 21 Feb 2001 17:33:02 +0100 (MET) Subject: [Python-Dev] Those import related syntax errors again... Message-ID: <200102211633.RAA10095@core.inf.ethz.ch> Hi. [Fredrik Lundh] > > Samuele wrote: > > On the other hand just saying that new feature X make code Y (previously valid) > > meaningless and so the unique solution is to discard Y as garbage, > > is something that cannot be sold for cheap. I have the feeling that this > > is the *point*. > > exactly. > > I don't mind new features if I can chose to ignore them... Along this line of thought and summarizing:

- import * (in an inner scope) is somehow a problem, but in the long run it should likely be deprecated and become an error anyway.

- mixing of inner defs or lambdas and exec is a real issue (Mark Hammond's original posting was caused by such a situation): for that there is no clear workaround. I repeat:

y=3
def f():
    exec "y=2"
    def g():
        return y
    return g()

if we want 2 as the return value it's a mess (the problem could end up being more performance than complexity, although a simple impl is a long-run win). Developing special rules is also not that simple: just put a y = 9 before the exec, what is expected then? This promises a lot of confusion.

- I'm not a partisan of this, but if we want to be able to "choose to ignore" lexical scoping, we will need to make its activation explicit. But this has been discarded, so no story...
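[A hedged editorial aside, not drawn from the messages above: an explicit, per-module activation is essentially what was eventually shipped. PEP 236 added the __future__ statement; nested scopes became an opt-in feature in Python 2.1 and the default only in 2.2. A minimal sketch of the opt-in form:]

from __future__ import nested_scopes    # per-module opt-in in 2.1, default in 2.2

def outer():
    msg = "hello"
    def inner():
        return msg       # resolves to outer's msg only with nested scopes enabled;
                         # otherwise it is looked up as a global (and fails here)
    return inner()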
The implicit scoping semantics have been changed, and now we just have to convince ourselves that this is a win, that there is no big code breakage (this is very likely, without irony), and that transforming working code (I'm referring to code using 'exec', not import *) into invalid code is just natural language evolution that users will understand
                              
                              . We can make the transition more smooth: [Andrew Kuchling] > >IMO breaking code would be ok if we issue warnings today and implement > >nested scopes issuing errors tomorrow. But this is simply a statement > >about principles and raised impression. > > Agreed. So maybe that's the best solution: pull nested scopes from > 2.1 and add a warning for from...import (and exec?) inside a function > using nested scopes, and only add nested scopes in 2.2, after everyone > has had 6 months or a year to fix their code. But the problem with exec will remain. PS: to be honest the actual impl of nested scope is fine for me from the viewpoint of the guy that should implement that for jython ;). From thomas.heller at ion-tof.com Wed Feb 21 17:39:09 2001 From: thomas.heller at ion-tof.com (Thomas Heller) Date: Wed, 21 Feb 2001 17:39:09 +0100 Subject: [Python-Dev] Strange import behaviour, recently introduced References: <20010221150634.AB6ED371690@snelboot.oratrix.nl> Message-ID: <036b01c09c24$d0aa20a0$e000a8c0@thomasnotebook> Jack Jansen wrote: > I'm running into strange problems with import in frozen Mac programs. > > On the Mac a program is frozen in a rather different way from how it happens > on Unix/Windows: basically all .pyc files are stuffed into resources, and if > the import code comes across a file on sys.path it will look for PYC resources > in that file. So, you freeze a program by stuffing all your modules into the > interpreter executable as PYC resources and setting sys.path to contain only > the executable file, basically. > > This week I noticed that these resource imports have suddenly become very very > slow. Whereas startup time of my application used to be around 2 seconds > (where the non-frozen version took 6 seconds) it now takes almost 20 times as > long. The non-frozen version still takes 6 seconds. > The most recent version calls PyImport_ImportModuleEx() for '__builtin__' for every import of __builtin__ without caching the result in a static variable. Can this be the cause? Thomas Heller From pedroni at inf.ethz.ch Wed Feb 21 17:40:24 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Wed, 21 Feb 2001 17:40:24 +0100 (MET) Subject: [Python-Dev] Those import related syntax errors again... Message-ID: <200102211640.RAA10296@core.inf.ethz.ch> Hi. So few code breakage is nice. [Jeremy Hilton] > I also realized that the exec rule is still too string. If the exec > statement passes an explicit namespace -- "exec in foo" -- then there > shouldn't be any problem, because the executed code can't affect the > current namespace. If this form is allowed, the exec errors in xpcom > and Zope disappear. My very personal feeling is that *any* rule on exec just sounds arbitrary (even if motived and acceptable). regards, Samuele Pedroni. From esr at thyrsus.com Wed Feb 21 17:42:18 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Wed, 21 Feb 2001 11:42:18 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <20010221095625.A29605@ute.cnri.reston.va.us>; from akuchlin@mems-exchange.org on Wed, Feb 21, 2001 at 09:56:25AM -0500 References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> Message-ID: <20010221114218.A24682@thyrsus.com> Andrew Kuchling 
                              
                              : > On Wed, Feb 21, 2001 at 03:46:40PM +0100, Samuele Pedroni wrote: > >IMO breaking code would be ok if we issue warnings today and implement > >nested scopes issuing errors tomorrow. But this is simply a statement > >about principles and raised impression. > > Agreed. So maybe that's the best solution: pull nested scopes from > 2.1 and add a warning for from...import (and exec?) inside a function > using nested scopes, and only add nested scopes in 2.2, after everyone > has had 6 months or a year to fix their code. Aaargghh! I'm already using them. If we disable this facility temporarily, please do it with an ifdef I can set. -- 
                              Eric S. Raymond The prestige of government has undoubtedly been lowered considerably by the Prohibition law. For nothing is more destructive of respect for the government and the law of the land than passing laws which cannot be enforced. It is an open secret that the dangerous increase of crime in this country is closely connected with this. -- Albert Einstein, "My First Impression of the U.S.A.", 1921 From jeremy at alum.mit.edu Wed Feb 21 17:45:30 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 11:45:30 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <200102211640.RAA10296@core.inf.ethz.ch> References: <200102211640.RAA10296@core.inf.ethz.ch> Message-ID: <14995.61610.382858.122618@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "SP" == Samuele Pedroni 
                              
                              writes: SP> My very personal feeling is that *any* rule on exec just sounds SP> arbitrary (even if motived and acceptable). My personal feeling is that exec is used rarely enough that a few restrictions on its use is not a problem. The restriction can be fairly minimal -- "exec" without "in" is not allowed in a function that contains nested blocks with free variables. Heck, we would just outlaw all uses of exec without in <0.5 wink>. I would argue for this rule in Python 3000, but it would break a lot more code than the restriction proposed above. Jeremy From pedroni at inf.ethz.ch Wed Feb 21 17:51:30 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Wed, 21 Feb 2001 17:51:30 +0100 (MET) Subject: [Python-Dev] Those import related syntax errors again... Message-ID: <200102211651.RAA10549@core.inf.ethz.ch> I should reformulate: I think a possible not arbitrary rule for exec is only exec ... in ... is valid, but this also something ok only on the long-run (like import * deprecation). Then it is necessary to agree on the semantic of locals(). What would happen right now mixing lexical scoping and exec ... in locals()? regards, Samuele Pedroni. From fredrik at pythonware.com Wed Feb 21 18:04:59 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Wed, 21 Feb 2001 18:04:59 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> Message-ID: <00ca01c09c28$70ea44c0$e46940d5@hagrid> Andrew Kuchling wrote: > >IMO breaking code would be ok if we issue warnings today and implement > >nested scopes issuing errors tomorrow. But this is simply a statement > >about principles and raised impression. > > Agreed. So maybe that's the best solution: pull nested scopes from > 2.1 and add a warning for from...import (and exec?) inside a function > using nested scopes, and only add nested scopes in 2.2, after everyone > has had 6 months or a year to fix their code. don't we have a standard procedure for this? http://python.sourceforge.net/peps/pep-0005.html Steps For Introducing Backwards-Incompatible Features 1. Propose backwards-incompatible behavior in a PEP. 2. Once the PEP is accepted as a productive direction, implement an alternate way to accomplish the task previously provided by the feature that is being removed or changed. 3. Formally deprecate the obsolete construct in the Python documentation. 4. Add an an optional warning mode to the parser that will inform users when the deprecated construct is used. 5. There must be at least a one-year transition period between the release of the transitional version of Python and the release of the backwards incompatible version. looks like we're somewhere around stage 3, which means that we're 12+ months away from deployment. Cheers /F From jeremy at alum.mit.edu Wed Feb 21 17:58:02 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 11:58:02 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <200102211651.RAA10549@core.inf.ethz.ch> References: <200102211651.RAA10549@core.inf.ethz.ch> Message-ID: <14995.62362.374756.796362@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "SP" == Samuele Pedroni 
                              
                              writes: SP> I should reformulate: I think a possible not arbitrary rule for SP> exec is only exec ... in ... is valid, but this also something SP> ok only on the long-run (like import * deprecation). Yes. SP> Then it is necessary to agree on the semantic of locals(). That's easy. Make the warning in the current documentation a feature: locals() returns a dictionary representing the local symbol table. The effects of modifications to this dictionary is undefined. SP> What would happen right now mixing lexical scoping and exec SP> ... in locals()? Right now, the exec would get flagged as an error. If it were allowed to execute, the exec would operator on the frame's f_locals dict. The locals() builtin calls the following function. PyObject * PyEval_GetLocals(void) { PyFrameObject *current_frame = PyThreadState_Get()->frame; if (current_frame == NULL) return NULL; PyFrame_FastToLocals(current_frame); return current_frame->f_locals; } This copies all variables from the fast slots into the f_locals dictionary. When the exec statement is executed, it does the reverse copying from the locals dict back into the fast slots. The FastToLocals and LocalsToFast functions don't know anything about the closure, so those variables simply wouldn't affected. Assignments in the exec would be ignored by nested scopes. Jeremy From jeremy at alum.mit.edu Wed Feb 21 18:02:34 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 12:02:34 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <00ca01c09c28$70ea44c0$e46940d5@hagrid> References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> Message-ID: <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> I don't recall seeing any substanital discussion of this PEP on python-dev or python-list, nor do I recall a BDFL decision on the PEP. There has been lots of discussion about backwards compatibility, but not much consensus. Jeremy From moshez at zadka.site.co.il Wed Feb 21 18:06:17 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Wed, 21 Feb 2001 19:06:17 +0200 (IST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <20010221114218.A24682@thyrsus.com> References: <20010221114218.A24682@thyrsus.com>, <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> Message-ID: <20010221170617.DAE72A840@darjeeling.zadka.site.co.il> On Wed, 21 Feb 2001 11:42:18 -0500, "Eric S. Raymond" 
                              
                              wrote: [re: disabling nested scopes] > Aaargghh! I'm already using them. That's not a valid excuse. The official position of Python-Dev regarding alphas is "a feature is not in until it's a release candidate -- we reserve the right to pull features before" Whatever we do, ifdefing is not the answer -- two incompat. versions of Python with the same number? Are we insane? -- "I'll be ex-DPL soon anyway so I'm |LUKE: Is Perl better than Python? looking for someplace else to grab power."|YODA: No...no... no. Quicker, -- Wichert Akkerman (on debian-private)| easier, more seductive. For public key, finger moshez at debian.org |http://www.{python,debian,gnu}.org From fredrik at effbot.org Wed Feb 21 19:01:05 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Wed, 21 Feb 2001 19:01:05 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: <200102211446.PAA07183@core.inf.ethz.ch><20010221095625.A29605@ute.cnri.reston.va.us><00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <002301c09c30$46a89330$e46940d5@hagrid> Jeremy Hylton wrote: > I don't recall seeing any substanital discussion of this PEP on > python-dev or python-list, nor do I recall a BDFL decision on the > PEP. There has been lots of discussion about backwards compatibility, > but not much consensus. Really? If that's the case, maybe someone should move it to the "future" or "pie-in-the-sky" section, and mark it as "draft" instead of "active"? ::: ...and if stepwise deprecation isn't that important, why did a certain BDFL bother to implement a warning frame- work for 2.1? http://python.sourceforge.net/peps/pep-0230.html Looks like the perfect tool for this task. Why not use it? ::: Is it time to shut down python-dev? (yes, I'm serious) Annoyed /F From thomas at xs4all.net Wed Feb 21 19:13:17 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Wed, 21 Feb 2001 19:13:17 +0100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <002301c09c30$46a89330$e46940d5@hagrid>; from fredrik@effbot.org on Wed, Feb 21, 2001 at 07:01:05PM +0100 References: <200102211446.PAA07183@core.inf.ethz.ch><20010221095625.A29605@ute.cnri.reston.va.us><00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <002301c09c30$46a89330$e46940d5@hagrid> Message-ID: <20010221191317.A26647@xs4all.nl> On Wed, Feb 21, 2001 at 07:01:05PM +0100, Fredrik Lundh wrote: > Is it time to shut down python-dev? (yes, I'm serious) Just in case it might not be obvious, I concur with Fredrik, and I usually try to have a bit less of a temper than him. I have to warn, though, I just came from a meeting with Ministry of Justice lawyers, so I'm not in that good a mood, though my mood does force me to drop my politeness and just say what I really mean: I keep running into the ugly sides of the principle of nested scopes in python, and the implementation in particular. Most of them could be fixed, but not *all* of them, and the impact of those that can't be fixed is entirely unclear. Will it break a lot of code ? Possibly. Will it annoy a lot of people ? Quite certainly, it already did. Will it force people to turn away in disgust ? Definately possibly, since it's nearly doing that for *me*. I'm not sure if I'd want to admit to people that I'm a Python developper if that means they'll ask me why in hell 2.1 was released with that deficiency. 
I have been able to argue my way out of the gripes I currently get, but I'm not sure if I can do that for 2.1. I think adding nested scopes like this is a very bad idea. Patching up the problems by adding more special cases in which the old syntax would work is not the right solution, even though I did initially think so. And I'd like to note that none of these issues were addressed in the PEP. The PEP doesn't even mention them, though 'from Tkinter import *' is used as an example code snippet. And it seems most people are either indifferent or against the whole thing. I personally think the old 'hack' is *way* clearer, and more obvious, than the nested scopes patch. But maybe my perception is flawed. Maybe all the pro-nested-scopes, pro-breakage people are keeping quiet, in which case I'll quietly sulk away in a corner ;P Mr.-Conservatively-Grumpy-ly y'rs, -- Thomas Wouters 
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From esr at thyrsus.com Wed Feb 21 19:23:41 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Wed, 21 Feb 2001 13:23:41 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <20010221191317.A26647@xs4all.nl>; from thomas@xs4all.net on Wed, Feb 21, 2001 at 07:13:17PM +0100 References: <200102211446.PAA07183@core.inf.ethz.ch><20010221095625.A29605@ute.cnri.reston.va.us><00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <002301c09c30$46a89330$e46940d5@hagrid> <20010221191317.A26647@xs4all.nl> Message-ID: <20010221132341.B25139@thyrsus.com> Thomas Wouters 
                              
                              : > But maybe my perception is flawed. Maybe all the pro-nested-scopes, > pro-breakage people are keeping quiet, in which case I'll quietly sulk away > in a corner ;P I am for nested scopes. I would like to see the problems fixed and this feature not abandoned. -- 
                              Eric S. Raymond Yes, the president should resign. He has lied to the American people, time and time again, and betrayed their trust. Since he has admitted guilt, there is no reason to put the American people through an impeachment. He will serve absolutely no purpose in finishing out his term, the only possible solution is for the president to save some dignity and resign. -- 12th Congressional District hopeful Bill Clinton, during Watergate From pedroni at inf.ethz.ch Wed Feb 21 19:54:06 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Wed, 21 Feb 2001 19:54:06 +0100 (MET) Subject: [Python-Dev] Those import related syntax errors again... Message-ID: <200102211854.TAA12664@core.inf.ethz.ch> I will try to be intellectually honest: [Thomas Wouters] > And I'd like to note that none of these issues were addressed in the PEP. This also a *point*. Few days ago I have scanned the pre-checkin archive on this topic, the fix-point was, under BDFL influence: - It will not do that much harm (but many issues were not raised) - Please no explicit syntax - Let's do it - Future newbies will be thankful because this was always a confusing point for them (if they come from pascal-like languages?) I should admit that I like the idea of nested scopes, because I like functional programming style, but I don't know whether this returning 3 is nice ;)? def f(): def g(): return y # put as many innoncent code lines as you like y=3 return g() The point is that nested scopes cause some harm, not that much but people are asking themself whether is that necessary. Maybe the request that old code should compile as it is, is a bit pedantic, and making it always work but with a new semantic is worse. But simply catching up as problem arise does not give a good impression. It really seems that there's not been enough discussion about the change, and I think that is also ok to honestely be worried about what user will feel about this? (and we can only think about this beacuse the feedback is not that much) Will this code breakage "scare" them and slow down migration to new versions of python? They are already afraid of going 2.0(?). It is maybe just PR matter but ... The *point* is that we are not going from version 0.8 to version 0.9 of our toy research lisp dialect, passing from dynamic scoping to lexical scoping. (Yes, I think, that changing semantic behind the scene is not a polite move.) We really need the BDFL proposing the right thing. regards, Samuele Pedroni. From pedroni at inf.ethz.ch Wed Feb 21 20:02:58 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Wed, 21 Feb 2001 20:02:58 +0100 (MET) Subject: [Python-Dev] Those import related syntax errors again... Message-ID: <200102211902.UAA12859@core.inf.ethz.ch> Sorry I forgot that a win is avoiding th old lambda default hack. Now things magically work ;). From jeremy at alum.mit.edu Wed Feb 21 20:09:43 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 14:09:43 -0500 (EST) Subject: [Python-Dev] Update to PEP 227 (static scoping) Message-ID: <14996.4727.604581.858363@w221.z064000254.bwi-md.dsl.cnc.net> There has been renewed discussion of backwards compatibility issues introduced by nested scopes. Following some discussion on python-dev, I have updated the discussion of these issues in the PEP. Of course, more comments are welcome. I am particularly interested in reports of actual compatibility issues with existing code, as opposed to hypotheticals. 
The particular concerns raised lately have to do with previously legal code that will fail with a SyntaxError with nested scopes. Early in the design process, there was discussion of code that will behave differently with nested scopes. At the time, the subtle behavior change was considered acceptable because it was believed to occur rarely in practice and was probably hard to understand to begin with. A related issue, already discussed on both lists, was the restrictions added in Python 2.1a2 on the use of import * in functions and exec with nested scope. The former restriction was always documented in the reference manual, but never enforced. Subsequently, we decided to allow import * and exec except in cases where the meaning was ambiguous with respect to nested scopes. This probably sounds a bit abstract; I hope the PEP (included below) spells out the issues more clearly. If you have code that currently depends on any of the three following behaviors, I'd like to hear about it: - A function is contained within another function. The outer function contains a local name that shadows a global name. The inner function uses the global. The one case of this I have seen in the wild was caused by a local variable named str in the outer function and a use of builtin str in the inner function. - A function that contains a nested function with free variables and also uses exec that does not specify a namespace, e.g. def f(): exec foo def g(): ... "exec foo in ns" should be legal, although the current CVS code base does not yet allow it. - A function like the one above, except that is uses import * instead of exec. Jeremy PEP: 227 Title: Statically Nested Scopes Version: $Revision: 1.6 $ Author: jeremy at digicool.com (Jeremy Hylton) Status: Draft Type: Standards Track Python-Version: 2.1 Created: 01-Nov-2000 Post-History: XXX what goes here? Abstract This PEP proposes the addition of statically nested scoping (lexical scoping) for Python 2.1. The current language definition defines exactly three namespaces that are used to resolve names -- the local, global, and built-in namespaces. The addition of nested scopes would allow resolution of unbound local names in enclosing functions' namespaces. One consequence of this change that will be most visible to Python programs is that lambda statements could reference variables in the namespaces where the lambda is defined. Currently, a lambda statement uses default arguments to explicitly creating bindings in the lambda's namespace. Introduction This proposal changes the rules for resolving free variables in Python functions. The Python 2.0 definition specifies exactly three namespaces to check for each name -- the local namespace, the global namespace, and the builtin namespace. According to this defintion, if a function A is defined within a function B, the names bound in B are not visible in A. The proposal changes the rules so that names bound in B are visible in A (unless A contains a name binding that hides the binding in B). The specification introduces rules for lexical scoping that are common in Algol-like languages. The combination of lexical scoping and existing support for first-class functions is reminiscent of Scheme. The changed scoping rules address two problems -- the limited utility of lambda statements and the frequent confusion of new users familiar with other languages that support lexical scoping, e.g. the inability to define recursive functions except at the module level. 
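To illustrate the recursion point just above: under the Python 2.0 rules the two marked references in the sketch below are looked up as globals and fail with NameError, while with nested scopes they resolve to the bindings in the enclosing function (the function names are invented for illustration):

    def flatten(seq):
        result = []
        def walk(items):
            for item in items:
                if type(item) == type([]):
                    walk(item)              # recursive call to a nested function
                else:
                    result.append(item)     # free variable from the enclosing scope
        walk(seq)
        return result

    # With nested scopes: flatten([1, [2, [3, 4]], 5]) == [1, 2, 3, 4, 5]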
The lambda statement introduces an unnamed function that contains a single statement. It is often used for callback functions. In the example below (written using the Python 2.0 rules), any name used in the body of the lambda must be explicitly passed as a default argument to the lambda. from Tkinter import * root = Tk() Button(root, text="Click here", command=lambda root=root: root.test.configure(text="...")) This approach is cumbersome, particularly when there are several names used in the body of the lambda. The long list of default arguments obscures the purpose of the code. The proposed solution, in crude terms, implements the default argument approach automatically. The "root=root" argument can be omitted. Specification Python is a statically scoped language with block structure, in the tradition of Algol. A code block or region, such as a module, class definition, or function body, is the basic unit of a program. Names refer to objects. Names are introduced by name binding operations. Each occurrence of a name in the program text refers to the binding of that name established in the innermost function block containing the use. The name binding operations are assignment, class and function definition, and import statements. Each assignment or import statement occurs within a block defined by a class or function definition or at the module level (the top-level code block). If a name binding operation occurs anywhere within a code block, all uses of the name within the block are treated as references to the current block. (Note: This can lead to errors when a name is used within a block before it is bound.) If the global statement occurs within a block, all uses of the name specified in the statement refer to the binding of that name in the top-level namespace. Names are resolved in the top-level namespace by searching the global namespace, the namespace of the module containing the code block, and the builtin namespace, the namespace of the module __builtin__. The global namespace is searched first. If the name is not found there, the builtin namespace is searched. If a name is used within a code block, but it is not bound there and is not declared global, the use is treated as a reference to the nearest enclosing function region. (Note: If a region is contained within a class definition, the name bindings that occur in the class block are not visible to enclosed functions.) A class definition is an executable statement that may contain uses and definitions of names. These references follow the normal rules for name resolution. The namespace of the class definition becomes the attribute dictionary of the class. The following operations are name binding operations. If they occur within a block, they introduce new local names in the current block unless there is also a global declaration. Function definition: def name ... Class definition: class name ... Assignment statement: name = ... Import statement: import name, import module as name, from module import name Implicit assignment: names are bound by for statements and except clauses The arguments of a function are also local. There are several cases where Python statements are illegal when used in conjunction with nested scopes that contain free variables. If a variable is referenced in an enclosing scope, it is an error to delete the name. The compiler will raise a SyntaxError for 'del name'.
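A minimal sketch of that last restriction (illustrative only, not one of the PEP's examples):

    def f():
        x = 1
        def g():
            return x
        del x       # SyntaxError: x is still referenced by the nested function g
        return g

Because g refers to x, the compiler cannot allow the binding to be deleted out from under it.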
If the wildcard form of import (import *) is used in a function and the function contains a nested block with free variables, the compiler will raise a SyntaxError. If exec is used in a function and the function contains a nested block with free variables, the compiler will raise a SyntaxError unless the exec explicitly specifies the local namespace for the exec. (In other words, "exec obj" would be illegal, but "exec obj in ns" would be legal.) Discussion The specified rules allow names defined in a function to be referenced in any nested function defined within that function. The name resolution rules are typical for statically scoped languages, with three primary exceptions: - Names in class scope are not accessible. - The global statement short-circuits the normal rules. - Variables are not declared. Names in class scope are not accessible. Names are resolved in the innermost enclosing function scope. If a class definition occurs in a chain of nested scopes, the resolution process skips class definitions. This rule prevents odd interactions between class attributes and local variable access. If a name binding operation occurs in a class definition, it creates an attribute on the resulting class object. To access this variable in a method, or in a function nested within a method, an attribute reference must be used, either via self or via the class name. An alternative would have been to allow name binding in class scope to behave exactly like name binding in function scope. This rule would allow class attributes to be referenced either via attribute reference or simple name. This option was ruled out because it would have been inconsistent with all other forms of class and instance attribute access, which always use attribute references. Code that used simple names would have been obscure. The global statement short-circuits the normal rules. Under the proposal, the global statement has exactly the same effect that it does for Python 2.0. Its behavior is preserved for backwards compatibility. It is also noteworthy because it allows name binding operations performed in one block to change bindings in another block (the module). Variables are not declared. If a name binding operation occurs anywhere in a function, then that name is treated as local to the function and all references refer to the local binding. If a reference occurs before the name is bound, a NameError is raised. The only kind of declaration is the global statement, which allows programs to be written using mutable global variables. As a consequence, it is not possible to rebind a name defined in an enclosing scope. An assignment operation can only bind a name in the current scope or in the global scope. The lack of declarations and the inability to rebind names in enclosing scopes are unusual for lexically scoped languages; there is typically a mechanism to create name bindings (e.g. lambda and let in Scheme) and a mechanism to change the bindings (set! in Scheme). XXX Alex Martelli suggests comparison with Java, which does not allow name bindings to hide earlier bindings. Examples A few examples are included to illustrate the way the rules work. XXX Explain the examples >>> def make_adder(base): ... def adder(x): ... return base + x ... return adder >>> add5 = make_adder(5) >>> add5(6) 11 >>> def make_fact(): ... def fact(n): ... if n == 1: ... return 1L ... else: ... return n * fact(n - 1) ... return fact >>> fact = make_fact() >>> fact(7) 5040L >>> def make_wrapper(obj): ... class Wrapper: ...
def __getattr__(self, attr): ... if attr[0] != '_': ... return getattr(obj, attr) ... else: ... raise AttributeError, attr ... return Wrapper() >>> class Test: ... public = 2 ... _private = 3 >>> w = make_wrapper(Test()) >>> w.public 2 >>> w._private Traceback (most recent call last): File "
", line 1, in ? AttributeError: _private An example from Tim Peters of the potential pitfalls of nested scopes in the absence of declarations: i = 6 def f(x): def g(): print i # ... # skip to the next page # ... for i in x: # ah, i *is* local to f, so this is what g sees pass g() The call to g() will refer to the variable i bound in f() by the for loop. If g() is called before the loop is executed, a NameError will be raised. XXX need some counterexamples Backwards compatibility There are two kinds of compatibility problems caused by nested scopes. In one case, code that behaved one way in earlier versions behaves differently because of nested scopes. In the other cases, certain constructs interact badly with nested scopes and will trigger SyntaxErrors at compile time. The following example from Skip Montanaro illustrates the first kind of problem: x = 1 def f1(): x = 2 def inner(): print x inner() Under the Python 2.0 rules, the print statement inside inner() refers to the global variable x and will print 1 if f1() is called. Under the new rules, it refers to f1()'s namespace, the nearest enclosing scope with a binding. The problem occurs only when a global variable and a local variable share the same name and a nested function uses that name to refer to the global variable. This is poor programming practice, because readers will easily confuse the two different variables. One example of this problem was found in the Python standard library during the implementation of nested scopes. To address this problem, which is unlikely to occur often, a static analysis tool that detects affected code will be written. The detection problem is straightforward. The other compatibility problem is caused by the use of 'import *' and 'exec' in a function body, when that function contains a nested scope and the contained scope has free variables. For example: y = 1 def f(): exec "y = 'gotcha'" # or from module import * def g(): return y ... At compile-time, the compiler cannot tell whether an exec that operates on the local namespace or an import * will introduce name bindings that shadow the global y. Thus, it is not possible to tell whether the reference to y in g() should refer to the global or to a local name in f(). In discussion on the python-list, people argued for both possible interpretations. On the one hand, some thought that the reference in g() should be bound to a local y if one exists. One problem with this interpretation is that it is impossible for a human reader of the code to determine the binding of y by local inspection. It seems likely to introduce subtle bugs. The other interpretation is to treat exec and import * as dynamic features that do not affect static scoping. Under this interpretation, the exec and import * would introduce local names, but those names would never be visible to nested scopes. In the specific example above, the code would behave exactly as it did in earlier versions of Python. Since each interpretation is problematic and the exact meaning ambiguous, the compiler raises an exception. A brief review of three Python projects (the standard library, Zope, and a beta version of PyXPCOM) found four backwards compatibility issues in approximately 200,000 lines of code. There was one example of case #1 (subtle behavior change) and two examples of import * problems in the standard library.
(The interpretation of the import * and exec restriction that was implemented in Python 2.1a2 was much more restrictive, based on language that in the reference manual that had never been enforced. These restrictions were relaxed following the release.) locals() / vars() These functions return a dictionary containing the current scope's local variables. Modifications to the dictionary do not affect the values of variables. Under the current rules, the use of locals() and globals() allows the program to gain access to all the namespaces in which names are resolved. An analogous function will not be provided for nested scopes. Under this proposal, it will not be possible to gain dictionary-style access to all visible scopes. Rebinding names in enclosing scopes There are technical issues that make it difficult to support rebinding of names in enclosing scopes, but the primary reason that it is not allowed in the current proposal is that Guido is opposed to it. It is difficult to support, because it would require a new mechanism that would allow the programmer to specify that an assignment in a block is supposed to rebind the name in an enclosing block; presumably a keyword or special syntax (x := 3) would make this possible. The proposed rules allow programmers to achieve the effect of rebinding, albeit awkwardly. The name that will be effectively rebound by enclosed functions is bound to a container object. In place of assignment, the program uses modification of the container to achieve the desired effect: def bank_account(initial_balance): balance = [initial_balance] def deposit(amount): balance[0] = balance[0] + amount return balance def withdraw(amount): balance[0] = balance[0] - amount return balance return deposit, withdraw Support for rebinding in nested scopes would make this code clearer. A class that defines deposit() and withdraw() methods and the balance as an instance variable would be clearer still. Since classes seem to achieve the same effect in a more straightforward manner, they are preferred. Implementation The implementation for C Python uses flat closures [1]. Each def or lambda statement that is executed will create a closure if the body of the function or any contained function has free variables. Using flat closures, the creation of closures is somewhat expensive but lookup is cheap. The implementation adds several new opcodes and two new kinds of names in code objects. A variable can be either a cell variable or a free variable for a particular code object. A cell variable is referenced by containing scopes; as a result, the function where it is defined must allocate separate storage for it on each invocation. A free variable is reference via a function's closure. XXX Much more to say here References [1] Luca Cardelli. Compiling a functional language. In Proc. of the 1984 ACM Conference on Lisp and Functional Programming, pp. 208-217, Aug. 1984 http://citeseer.nj.nec.com/cardelli84compiling.html From akuchlin at mems-exchange.org Wed Feb 21 20:33:23 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Wed, 21 Feb 2001 14:33:23 -0500 Subject: [Python-Dev] Those import related syntax errors again... 
In-Reply-To: <20010221191317.A26647@xs4all.nl>; from thomas@xs4all.net on Wed, Feb 21, 2001 at 07:13:17PM +0100 References: <200102211446.PAA07183@core.inf.ethz.ch><20010221095625.A29605@ute.cnri.reston.va.us><00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <002301c09c30$46a89330$e46940d5@hagrid> <20010221191317.A26647@xs4all.nl> Message-ID: <20010221143323.B1441@ute.cnri.reston.va.us> On Wed, Feb 21, 2001 at 07:13:17PM +0100, Thomas Wouters wrote: >But maybe my perception is flawed. Maybe all the pro-nested-scopes, >pro-breakage people are keeping quiet, in which case I'll quietly sulk away >in a corner ;P The scoping rules are, IMHO, the most serious problem listed on the Python Warts page, and adding nested scopes fixes them. So it's nice that this flaw could be cleaned up, though people will naturally differ in their perceptions of how serious the problem is, and how much pain it's worth to fix it. >On Wed, Feb 21, 2001 at 07:01:05PM +0100, Fredrik Lundh wrote: >> Is it time to shut down python-dev? (yes, I'm serious) I've previously stated my intention to unsubscribe from python-dev after 2.1 ships, mostly because hacking on the Python core has ceased to be fun any more, and because my non-core projects have suffered. Once that happens, the incentive to try out new Python versions will really ebb; if I wasn't on python-dev, I don't think upgrading to 2.1 would be a big priority because none of its new features solve any burning problems for me. It's hard to say what compelling new features would make me enthuastically adopt 2.2 as soon as it comes out, and I can't really think of any -- perhaps interfaces would be such a feature. You can take that as lukewarm agreement with Fredrik's rhetorical suggestion. --amk From jeremy at alum.mit.edu Wed Feb 21 20:35:02 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 14:35:02 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <20010221143323.B1441@ute.cnri.reston.va.us> References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <002301c09c30$46a89330$e46940d5@hagrid> <20010221191317.A26647@xs4all.nl> <20010221143323.B1441@ute.cnri.reston.va.us> Message-ID: <14996.6246.44518.351404@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "AMK" == Andrew Kuchling 
                              writes: >> On Wed, Feb 21, 2001 at 07:01:05PM +0100, Fredrik Lundh wrote: >>> Is it time to shut down python-dev? (yes, I'm serious) AMK> I've previously stated my intention to unsubscribe from AMK> python-dev after 2.1 ships, mostly because hacking on the AMK> Python core has ceased to be fun any more, and because my AMK> non-core projects have suffered. We're coming up on the second anniversary of python-dev. It began in April 1999 if the archives are correct. The biggest change to Python development since then has been the move to SourceForge, which happened nine months ago. (Curiously enough, the first python-dev message is on April 21, the SF announcement was on May 21, and today is Feb. 21.) Do you think Python development has changed in ways that make it no longer fun? Or do you think that you've changed in ways that make you no longer enjoy Python development? I'm sure that it's not as simple as one or the other, but I wonder if you think changes in the way we all interact is an important contributing factor. Jeremy From akuchlin at mems-exchange.org Wed Feb 21 20:50:16 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Wed, 21 Feb 2001 14:50:16 -0500 Subject: [Python-Dev] Notice: Beta of wininst with uninstaller Message-ID: 
                              Thomas Heller just sent a message to the Distutils SIG described a proposed uninstaller for the bdist_wininst command. Windows-oriented people who don't follow the SIG may want to take a look at his proposal and offer comments. His message is archived at: http://mail.python.org/pipermail/distutils-sig/2001-February/001991.html --amk From akuchlin at mems-exchange.org Wed Feb 21 21:02:33 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Wed, 21 Feb 2001 15:02:33 -0500 Subject: [Python-Dev] Re: dl module Message-ID: 
                              On 10 Feb, GvR quoted and wrote: >> Skip Montanaro writes: >> > MAL> The same could be done for e.g. soundex ... >> >> Fred Drake wrote: >> Given that Skip has published this module and that the C version can >> always be retrieved from CVS if anyone really wants it, and that >> soundex has been listed in the "Obsolete Modules" section in the >> documentation for quite some time, this is probably a good time to >> remove it from the source distribution. > >Yes, go ahead. Guido, did you mean go ahead and remove soundex, or the dl module, or both? --amk From akuchlin at mems-exchange.org Wed Feb 21 21:05:17 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Wed, 21 Feb 2001 15:05:17 -0500 Subject: [Python-Dev] python-dev social climate In-Reply-To: <14996.6246.44518.351404@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Wed, Feb 21, 2001 at 02:35:02PM -0500 References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <002301c09c30$46a89330$e46940d5@hagrid> <20010221191317.A26647@xs4all.nl> <20010221143323.B1441@ute.cnri.reston.va.us> <14996.6246.44518.351404@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010221150517.D1441@ute.cnri.reston.va.us> On Wed, Feb 21, 2001 at 02:35:02PM -0500, Jeremy Hylton wrote: >Do you think Python development has changed in ways that make it no >longer fun? Or do you think that you've changed in ways that make you >no longer enjoy Python development? I'm sure that it's not as simple Mostly me; I'm trying to decrease my CPU load and have dropped a number of activities. I've mostly lost my taste for language hackery, and find that the discussions are getting more trivial and less interesting. Adding Unicode support, for example, was a lengthy and at times bloody discussion, but it resulted in a significant new capability. Debate about whether 'A in dict' is the same as 'A in dict.keys()' or 'A in dict.values()' is IMHO quite dull. Twhe unit testing debate was the last one I cared about to any significant degree. --amk From thomas.heller at ion-tof.com Wed Feb 21 21:17:56 2001 From: thomas.heller at ion-tof.com (Thomas Heller) Date: Wed, 21 Feb 2001 21:17:56 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: <200102211446.PAA07183@core.inf.ethz.ch><20010221095625.A29605@ute.cnri.reston.va.us><00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <002301c09c30$46a89330$e46940d5@hagrid> <20010221191317.A26647@xs4all.nl> <20010221143323.B1441@ute.cnri.reston.va.us> Message-ID: <00cf01c09c43$60e360f0$e000a8c0@thomasnotebook> Andrew Kuchling wrote: > The scoping rules are, IMHO, the most serious problem listed on the > Python Warts page, and adding nested scopes fixes them. There is some truth in this, although most books I know try hard to explain this. Once you've understood it, it becomes a second nature to use this knowledge for lambda. I would consider the type/class split, making something like ExtensionClass neccessary, much more annoying for the advanced programmer. IMHO more efforts should go into this issue _even before_ p3000. Regards, Thomas From skip at mojam.com Wed Feb 21 21:52:48 2001 From: skip at mojam.com (Skip Montanaro) Date: Wed, 21 Feb 2001 14:52:48 -0600 (CST) Subject: [Python-Dev] Those import related syntax errors again... 
In-Reply-To: <14995.60235.591304.213358@w221.z064000254.bwi-md.dsl.cnc.net> References: 
                              <019001c09bda$ffb6f4d0$e46940d5@hagrid> <14995.55347.92892.762336@w221.z064000254.bwi-md.dsl.cnc.net> <02a701c09c1b$40441e70$0900a8c0@SPIFF> <14995.60235.591304.213358@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <14996.10912.667104.603750@beluga.mojam.com> Jeremy> That makes a total of 4 fixes in almost 200,000 lines of code. Jeremy> These fixes should be pretty easy. Jeremy, Pardon my bluntness, but I think you're missing the point. The fact that it would be easy to make these changes for version N+1 of package XYZ ignores the fact that users of XYZ version N may want to upgrade to Python 2.1 for whatever reason, but can't easily upgrade to XYZ version N+1. Maybe they need to pay an upgrade fee. Maybe they include XYZ in another product and can't afford to run too far ahead of their clients. Maybe XYZ is available to them only as bytecode. Maybe there's just too darn much code to pore through and retest. Maybe ... I've rarely found it difficult to fix compatibility problems in isolation. It's the surrounding context that gets you. Skip From fredrik at effbot.org Wed Feb 21 22:12:03 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Wed, 21 Feb 2001 22:12:03 +0100 Subject: [Python-Dev] compile leaks memory. lots of memory. Message-ID: <009301c09c4a$f26cbf60$e46940d5@hagrid> while 1: compile("print 'hello'\n", "
                              ", "exec") current CVS leaks just over 1k per call to compile. 1.5.2 and 2.0 doesn't leak a byte. make the script a little more complex, and it leaks even more (4k for a small function, 650k for Tkinter.py, etc). Cheers /F From jeremy at alum.mit.edu Wed Feb 21 22:07:25 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 16:07:25 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <14996.10912.667104.603750@beluga.mojam.com> References: 
                              <019001c09bda$ffb6f4d0$e46940d5@hagrid> <14995.55347.92892.762336@w221.z064000254.bwi-md.dsl.cnc.net> <02a701c09c1b$40441e70$0900a8c0@SPIFF> <14995.60235.591304.213358@w221.z064000254.bwi-md.dsl.cnc.net> <14996.10912.667104.603750@beluga.mojam.com> Message-ID: <14996.11789.246222.237752@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "SM" == Skip Montanaro 
                              writes: Jeremy> That makes a total of 4 fixes in almost 200,000 lines of Jeremy> code. These fixes should be pretty easy. SM> Jeremy, SM> Pardon my bluntness, but I think you're missing the point. I don't mind if you're blunt :-). SM> I've rarely found it difficult to fix compatibility problems in SM> isolation. It's the surrounding context that gets you. I appreciate that there are compatibility problems, although I'm hard pressed to quantify them to any extent. My employer still uses Python 1.5.2 because of perceived compatibility problems, although I use Zope with 2.1 on my machine. Any change we make to Python that introduces incompatibilties is going to make it hard for some people to upgrade. When we began work on the 2.1 alpha cycle, I have the impression that we decided that some amount of incompatibility is acceptable. I think PEP 227 is the chief incompatibility, but there are other changes. For example, the warnings framework now spits out messages to stderr; I imagine this could be unacceptable in some situtations. The __all__ change might cause problems for some code, as we saw with the pickle module. The format of exceptions has changed in some cases, which makes trouble for users of doctest. I'll grant you that there is are differences in degree among these various changes. Nonetheless, any of them could be a potential roadblock for upgrading. There were a bunch more in 2.0. (Sidenote: If you haven't upgraded to 2.0 yet, then you can jump right to 2.1 when you finally do.) The recent flurry of discussion was generated by a single complaint about the exec problem. It appeared to me that this was the last straw for many people, and you, among others, suggested today that we delay nested scopes. This surprised me, because the problem was much shallower than some of the other compatibility issues that had been discussed earlier, including the one attributed to you in the PEP. If I understand correctly, though, you are objecting to any changes that introduce backwards compatibility. The fact that recent discussion prompted you to advocate this is coincidental. The question, then, is whether some amount of incompatible change is acceptable in the 2.1 release. I don't think the specific import */exec issues have anything to do with it, because if they didn't exist there would still be compatibility issues. Jeremy From barry at digicool.com Wed Feb 21 22:19:47 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Wed, 21 Feb 2001 16:19:47 -0500 Subject: [Python-Dev] compile leaks memory. lots of memory. References: <009301c09c4a$f26cbf60$e46940d5@hagrid> Message-ID: <14996.12531.749097.806945@anthem.wooz.org> >>>>> "FL" == Fredrik Lundh 
                              writes: FL> while 1: compile("print 'hello'\n", "
                              ", "exec") FL> current CVS leaks just over 1k per call to compile. FL> 1.5.2 and 2.0 doesn't leak a byte. FL> make the script a little more complex, and it leaks even FL> more (4k for a small function, 650k for Tkinter.py, etc). I have plans to spend a fair bit of time running memory/leak analysis over Python after the conference. I'm kind of waiting until we enter beta, i.e. feature freeze. -Barry From jeremy at alum.mit.edu Wed Feb 21 22:10:15 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 16:10:15 -0500 (EST) Subject: [Python-Dev] compile leaks memory. lots of memory. In-Reply-To: <14996.12531.749097.806945@anthem.wooz.org> References: <009301c09c4a$f26cbf60$e46940d5@hagrid> <14996.12531.749097.806945@anthem.wooz.org> Message-ID: <14996.11959.173739.282750@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "BAW" == Barry A Warsaw 
                              writes: >>>>> "FL" == Fredrik Lundh 
                              writes: FL> while 1: compile("print 'hello'\n", "
                              ", "exec") FL> current CVS leaks just over 1k per call to compile. FL> 1.5.2 and 2.0 doesn't leak a byte. FL> make the script a little more complex, and it leaks even more FL> (4k for a small function, 650k for Tkinter.py, etc). BAW> I have plans to spend a fair bit of time running memory/leak BAW> analysis over Python after the conference. I'm kind of waiting BAW> until we enter beta, i.e. feature freeze. It would be helpful to get some analysis on this known problem before the beta release. Jeremy From paulp at ActiveState.com Wed Feb 21 22:48:28 2001 From: paulp at ActiveState.com (Paul Prescod) Date: Wed, 21 Feb 2001 13:48:28 -0800 Subject: [Python-Dev] Backwards Incompatibility References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <3A9437AC.4B2C77E7@ActiveState.com> Jeremy Hylton wrote: > > I don't recall seeing any substanital discussion of this PEP on > python-dev or python-list, nor do I recall a BDFL decision on the > PEP. There has been lots of discussion about backwards compatibility, > but not much consensus. We can have the discussion now, then. In my opinion it is irresponsible to knowingly unleash backwards incompatibilities on the world with no warning. If people think Python is unstable it will negatively impact its growth much more than the delay of some esoteric features. Let me put the ball back in your court: Is the benefit provided by having nested scopes this year rather than next year worth the pain of howls of outrage in Python-land. If we give people a year to upgrade (with warning messages) they will (rightly) grumble but not scream. -- Vote for Your Favorite Python & Perl Programming Accomplishments in the first Active Awards! http://www.ActiveState.com/Awards From jeremy at alum.mit.edu Wed Feb 21 22:53:21 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 16:53:21 -0500 (EST) Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: <3A9437AC.4B2C77E7@ActiveState.com> References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <3A9437AC.4B2C77E7@ActiveState.com> Message-ID: <14996.14545.932710.305181@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "PP" == Paul Prescod 
                              writes: PP> Jeremy Hylton wrote: >> >> I don't recall seeing any substanital discussion of this PEP on >> python-dev or python-list, nor do I recall a BDFL decision on the >> PEP. There has been lots of discussion about backwards >> compatibility, but not much consensus. PP> We can have the discussion now, then. In my opinion it is PP> irresponsible to knowingly unleash backwards incompatibilities PP> on the world with no warning. If people think Python is unstable PP> it will negatively impact its growth much more than the delay of PP> some esoteric features. You have a colorful way of writing :-). When we unleashed Python 2.1a1, there was a fair amount of discussion about nested scopes on python-dev and on python-list. The fact that code would break has been documented in the PEP since December, before the BDFL pronounced on it. Why didn't you say it was irresponsible then? <0.5 wink> If you're just repeating your earlier arguments, I apologize for the rhetoric :-). PP> Let me put the ball back in your court: PP> Is the benefit provided by having nested scopes this year rather PP> than next year worth the pain of howls of outrage in PP> Python-land. If we give people a year to upgrade (with warning PP> messages) they will (rightly) grumble but not scream. I've heard plenty of hypothetical howls and one real one, from Mark. The alpha testing hasn't resulted in a lot of other complaints. I just asked on c.l.py for problem reports and /F followed up with a script to help find problems. Let's see what the result is. I ran Fredrik's script over 4700 source files on my machine and found exactly four errors. Two were from old copies of the Python CVS tree; they've been fixed in the current tree. One was from Zope and another was an *old* jpython test case. Jeremy From thomas at xs4all.net Wed Feb 21 23:29:38 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Wed, 21 Feb 2001 23:29:38 +0100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <14995.55080.928806.56317@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Wed, Feb 21, 2001 at 09:56:40AM -0500 References: <14995.8522.253084.230222@beluga.mojam.com> 
                              <20010220222936.A2477@newcnri.cnri.reston.va.us> <20010221074710.E13911@xs4all.nl> <14995.55080.928806.56317@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010221232938.O26620@xs4all.nl> On Wed, Feb 21, 2001 at 09:56:40AM -0500, Jeremy Hylton wrote: > A note of clarification seems important here: The restrictions are > not being introduced to simplify the implementation. They're being > introduced because there is no sensible meaning for code that uses > import * and nested scopes with free variables. There are two > possible meanings, each plausible and neither satisfying. I disagree. There are several ways to work around them, or the BDFL could just make a decision on what it should mean. The decision between using a local vrbl in an upper scope or a possible global is about as arbritrary as what 'if key in dict:' and 'for key in dict' should do. I personally think it should behave exactly like: def outer(x, y): a = ... from module import * def inner(x, y, z=a): ... used to behave (before it became illegal.) That also makes it easy to explain to people who already know the rule. A possibly more practical solution would be to explicitly require a keyword to declare vrbls that should be taken from an upper scope rather than the global scope. Or a new keyword to define a closure. (def closure NAME(): comes to mind.) Lots of alternatives available if the implementation of PEP227 can't be done without introducing backwards incompatibility and strange special cases. Because you have to admit (even though it's another hypothetical howl) that it is odd that a function would *stop functioning* when you change a lambda (or nested function) to use a closure, rather than the old hack: def inner(x): exec ... myprint = sys.stderr.write spam = lambda x, myprint=myprint: myprint(x*100) I don't *just* object to the backwards incompatibility, but also to the added complexity and the strange special cases, most of which were introduced (at my urging, I'll readily admit and for which I should and do appologize) to reduce the impact of the incompatibility. I do not believe the ability to leave out the default-argument-hack (if you don't use import-*/exec in the same function) is worth all that. -- Thomas Wouters 
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From thomas at xs4all.net Wed Feb 21 23:33:34 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Wed, 21 Feb 2001 23:33:34 +0100 Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: <14996.14545.932710.305181@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Wed, Feb 21, 2001 at 04:53:21PM -0500 References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <3A9437AC.4B2C77E7@ActiveState.com> <14996.14545.932710.305181@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010221233334.B26647@xs4all.nl> On Wed, Feb 21, 2001 at 04:53:21PM -0500, Jeremy Hylton wrote: > When we unleashed Python 2.1a1, there was a fair amount of discussion > about nested scopes on python-dev and on python-list. Nested scopes weren't in 2.1a1, they were added between 2.1a1 and 2.1a2. > The fact that code would break has been documented in the PEP since > December, before the BDFL pronounced on it. The PEP only mentions one type of breakage, a local vrbl in an upper scope shadowing a global. It doesn't mention exec or from-module-import-*. I don't recall seeing a BDFL pronouncement on this issue, though I did whine about the whole thing from the start ;-P > I've heard plenty of hypothetical howls and one real one, from Mark. Don't forget that the std. library itself had to be fixed in several places, because it violated the reference manual. Doesn't that hint that there is much more code out there that uses it ? I found two instances myself in old first-attempt GUI scripts of mine, which I never finished and thus aren't worth much more than the hypothetical howls. This is like spanking the dog/kid for doing something bad he had no way of knowing was bad. You can't expect the dog or the kid to read up on federal law to make sure he isn't doing anything bad by accident. Besides from any real problems we'll see, the added wartiness (which is what the hypothetical howls are all about) does really matter. What are we trying to solve with nested scopes ? Anything other than the default-argument hack wart ? Aren't we adding more warts to fix that one wart ? -- Thomas Wouters 
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From akuchlin at mems-exchange.org Wed Feb 21 23:41:41 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Wed, 21 Feb 2001 17:41:41 -0500 Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: <20010221233334.B26647@xs4all.nl>; from thomas@xs4all.net on Wed, Feb 21, 2001 at 11:33:34PM +0100 References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <3A9437AC.4B2C77E7@ActiveState.com> <14996.14545.932710.305181@w221.z064000254.bwi-md.dsl.cnc.net> <20010221233334.B26647@xs4all.nl> Message-ID: <20010221174141.B25792@ute.cnri.reston.va.us> On Wed, Feb 21, 2001 at 11:33:34PM +0100, Thomas Wouters wrote: >Besides from any real problems we'll see, the added wartiness (which is what >the hypothetical howls are all about) does really matter. What are we trying >to solve with nested scopes ? Anything other than the default-argument hack >wart ? Aren't we adding more warts to fix that one wart ? I wouldn't consider either nested scopes or the additional restrictions really warts. 'from...import *' is already somewhat frowned upon, and often people use exec in situations where something else would be a better solution (storing variable names in a dictionary instead of exec'ing 'varname=expr'). If we were starting from a clean slate, I'd say accepting nested scopes would be a no-brainer. Compatibility... ay, there's the rub! --amk From thomas at xs4all.net Wed Feb 21 23:47:22 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Wed, 21 Feb 2001 23:47:22 +0100 Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: <20010221174141.B25792@ute.cnri.reston.va.us>; from akuchlin@mems-exchange.org on Wed, Feb 21, 2001 at 05:41:41PM -0500 References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <3A9437AC.4B2C77E7@ActiveState.com> <14996.14545.932710.305181@w221.z064000254.bwi-md.dsl.cnc.net> <20010221233334.B26647@xs4all.nl> <20010221174141.B25792@ute.cnri.reston.va.us> Message-ID: <20010221234722.C26647@xs4all.nl> On Wed, Feb 21, 2001 at 05:41:41PM -0500, Andrew Kuchling wrote: > Compatibility... ay, there's the rub! If you include 'ways of thinking' in 'compatibility', I'll agree. Many people are used to being able to use exec/from-foo-import-*, and consider it part of Python's wonderful flexibility and straightforwardness (I know I do, and all my python-proficient and python-learning colleagues do.) -- Thomas Wouters 
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From MarkH at ActiveState.com Wed Feb 21 23:55:34 2001 From: MarkH at ActiveState.com (Mark Hammond) Date: Thu, 22 Feb 2001 09:55:34 +1100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <20010221232938.O26620@xs4all.nl> Message-ID: 
                              [Thomas W] > appologize) to reduce the impact of the incompatibility. I do not believe > the ability to leave out the default-argument-hack (if you don't use > import-*/exec in the same function) is worth all that. Ironically, I _fixed_ my original problem by _adding_ a default-argument-hack. This meant my lambda no longer used a global name but a local one. Well, I think it ironic anyway :) For the record, the only reason I had to use exec in that case was because the "new" module is not capable creating a new method. Trying to compile a block of code with a "return" statement but no function decl (to create a code object suitable for a method) fails at compile time. Like-sands-through-the-hourglass ly, Mark. From pedroni at inf.ethz.ch Thu Feb 22 00:25:15 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Thu, 22 Feb 2001 00:25:15 +0100 (MET) Subject: [Python-Dev] again on nested scopes and Backwards Incompatibility Message-ID: <200102212325.AAA20597@core.inf.ethz.ch> Hi. This my last effort for today ;). [Thomas Wouters] > On Wed, Feb 21, 2001 at 05:41:41PM -0500, Andrew Kuchling wrote: > > > Compatibility... ay, there's the rub! > > If you include 'ways of thinking' in 'compatibility', I'll agree. Many > people are used to being able to use exec/from-foo-import-*, and consider it > part of Python's wonderful flexibility and straightforwardness (I know I do, > and all my python-proficient and python-learning colleagues do.) > 1) I'm convinced that on the long run that both: - import * - exec without in should be deprecated, so we could start issueing warning with 2.1 or 2.2 and make them errors when people get annoyed by the warnings enough ;) This has nothing to do with nested scopes. So people have time to change their mind. 2) The actual implementation of nested scopes (with or without compatibilty hacks) is based on the assumption that - one can detect lexically scoped variables as up 2.0 python was able to detect local vars (without the need of explicit declarations) -, and this is pythonic and neat, so let's do it. But this thread and the matter of fact that with the implementation some old code is not more valid or behave in a different way shows that maybe (I say maybe) this assumption is not completely valid. It is clear too that this difference between reality and theory has not that big predictable consequences, it's just annoying for some among us. But a survey among users to detect the extent of this has started. But from the theoretical (and maybe PR?) viewpoint the difference exists. On the other hand the (potential) solution (wich I'm aware open some other subtle issues to discuss but keep old code working as it was) of using some kind of explicit declarations is a no-go, no-story. Yes is not that much pythonic... Is'nt it possible to be all happy? I'm wondering if we have not transformed in an holy war a problem that offer at least some space for a technical discussion. regards, Samuele Pedroni. PS: sorry for my abuse of we given that I'm jython devel not a python one, but it is already difficult so... I feel I'm missing something about this group dynamics. 
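For readers skimming the thread, the two constructs proposed above for eventual deprecation, and the spellings that remain unambiguous under nested scopes, look roughly like this (a sketch of the idea only, not an agreed plan; the names are illustrative):

    # Forms that interact badly with nested scopes:
    #     from somemodule import *     (inside a function body)
    #     exec "x = 1"                 (no explicit namespace)

    # Unambiguous spellings:
    import string

    def g():
        upper = string.upper      # explicit attribute access instead of import *
        ns = {}
        exec "x = 1" in ns        # any bindings are confined to ns
        return upper("spam"), ns["x"]

    # g() == ('SPAM', 1)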
From paulp at ActiveState.com Thu Feb 22 00:40:12 2001 From: paulp at ActiveState.com (Paul Prescod) Date: Wed, 21 Feb 2001 15:40:12 -0800 Subject: [Python-Dev] Backwards Incompatibility References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <3A9437AC.4B2C77E7@ActiveState.com> <14996.14545.932710.305181@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <3A9451DC.143C5FCC@ActiveState.com> Jeremy Hylton wrote: > >... > > Why didn't you say it was irresponsible then? <0.5 wink> If you're > just repeating your earlier arguments, I apologize for the rhetoric > :-). I haven't followed this PEP at all. I think the feature is neat and I would like it. But to the average person, this is a pretty esoteric issue. But I do think that we should have a general principle that we do not knowingly break code without warning. It doesn't matter what the particular PEP is. It doesn't matter whether I like it. The reason I wrote the backwards compatibility PEP as not to restrict change but to enable it. If people trust us (they do not yet) then we can discuss long-term migration paths that may break code but they will be comfortable that they will have plenty of opportunity to move into the new world. So we could decide to change the keyword "def" to "define" and people would know that the change over would take a couple of years and they would be able to get from here to there. -- Vote for Your Favorite Python & Perl Programming Accomplishments in the first Active Awards! http://www.ActiveState.com/Awards From skip at mojam.com Thu Feb 22 00:13:46 2001 From: skip at mojam.com (Skip Montanaro) Date: Wed, 21 Feb 2001 17:13:46 -0600 (CST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <14996.11789.246222.237752@w221.z064000254.bwi-md.dsl.cnc.net> References: 
                              <019001c09bda$ffb6f4d0$e46940d5@hagrid> <14995.55347.92892.762336@w221.z064000254.bwi-md.dsl.cnc.net> <02a701c09c1b$40441e70$0900a8c0@SPIFF> <14995.60235.591304.213358@w221.z064000254.bwi-md.dsl.cnc.net> <14996.10912.667104.603750@beluga.mojam.com> <14996.11789.246222.237752@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <14996.19370.133024.802787@beluga.mojam.com> Jeremy> The question, then, is whether some amount of incompatible Jeremy> change is acceptable in the 2.1 release. I think of 2.1 as a minor release. Minor releases generally equate in my mind with bug fixes, not significant functionality changes or potential compatibility problems. I think many other people feel the same way. Earlier this month I suggested that adopting a release numbering scheme similar to that used for the Linux kernel would be appropriate. Perhaps it's not so much the details of the numbering as the up-front statement of something like, "version numbers like x.y where y is even represent stable releases" or, "backwards incompatibility will only be introduced when the major version number is incremented". It's more that there is a statement about stability vs new features that serves as a published committment the user community can rely on. After all the changes that made it into 2.0, I don't think anyone to have to address compatibility problems with 2.1. Skip From greg at cosc.canterbury.ac.nz Thu Feb 22 01:04:53 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Thu, 22 Feb 2001 13:04:53 +1300 (NZDT) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: 
                              Message-ID: <200102220004.NAA01374@s454.cosc.canterbury.ac.nz> > Trying to compile a > block of code with a "return" statement but no function decl (to create a > code object suitable for a method) fails at compile time. Maybe you could add a dummy function header, compile that, and extract the code object from the resulting function object? Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg at cosc.canterbury.ac.nz +--------------------------------------+ From guido at digicool.com Thu Feb 22 01:11:07 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 21 Feb 2001 19:11:07 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: Your message of "Wed, 21 Feb 2001 19:01:05 +0100." <002301c09c30$46a89330$e46940d5@hagrid> References: <200102211446.PAA07183@core.inf.ethz.ch><20010221095625.A29605@ute.cnri.reston.va.us><00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <002301c09c30$46a89330$e46940d5@hagrid> Message-ID: <200102220011.TAA12030@cj20424-a.reston1.va.home.com> > Is it time to shut down python-dev? (yes, I'm serious) I've been out in meetings all day, and just now checking my email. I'm a bit surprised by this sudden uprising. From the complaints so far, I don't really believe it's so bad. The embargo on not breaking code has never been absolute in my view. I do want to minimize breakage, but in the end my goal is to make people happy -- trying not to break code is only a means to that goal. It so happens that nested scopes will make many people happy too (if only because it allows references to surrounding locals from nested lambdas). I also don't mind as much breaking code that I consider ugly. I find import * inside a function very ugly (because I happen to know how much time it wastes). I find exec (without the ``in dict1, dict2'' clause) also pretty ugly, and usually being misused. I don't want to roll back nested scopes unless there's a lot more evidence that they are evil. Go through the PythonWare code base and look for code that would break -- and report back in the same style that Jeremy used. (Jeremy, it would help if you provided the tool you used for this analysis.) I remember you complained loudly about requiring list.append((x, y)) and socket.connect((host, port)) too -- but once you had fixed your code I didn't hear from you again, and I haven't had much feedback that this is a problem for the general population either. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Thu Feb 22 01:12:11 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 21 Feb 2001 19:12:11 -0500 Subject: [Python-Dev] RE: Update to PEP 232 In-Reply-To: Your message of "Wed, 21 Feb 2001 10:06:34 GMT." <000901c09bed$f861d750$f05aa8c0@lslp7o.int.lsl.co.uk> References: <000901c09bed$f861d750$f05aa8c0@lslp7o.int.lsl.co.uk> Message-ID: <200102220012.TAA12047@cj20424-a.reston1.va.home.com> > Small pedantry (there's another sort?) > > I note that: > > > - __doc__ is the only function attribute that currently has > > syntactic support for conveniently setting. It may be > > worthwhile to eventually enhance the language for supporting > > easy function attribute setting. Here are some syntaxes > > suggested by PEP reviewers: > [...elided to save space!...] 
> > It isn't currently clear if special syntax is necessary or > > desirable. > > has not been changed since the last version of the PEP. I suggest that > it be updated in two ways: > > 1. Clarify the final statement - I seem to have the impression (sorry, > can't find a message to back it up) that either the BDFL or Tim Peters > is very against anything other than the "simple" #f.a = 1# sort of > thing - unless I'm mischannelling (?) again. Agreed. > 2. Reference the thread/idea a little while back that ended with #def > f(a,b) having (publish=1)# - it's certainly no *worse* than the > proposals in the PEP! (Michael Hudson got as far as a patch, I think). Sure, reference it. It will never be added while I'm in charge though. --Guido van Rossum (home page: http://www.python.org/~guido/) From jeremy at alum.mit.edu Wed Feb 21 23:30:54 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 17:30:54 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: 
                              References: <20010221232938.O26620@xs4all.nl> 
                              Message-ID: <14996.16798.393875.480264@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "MH" == Mark Hammond 
                              writes: MH> [Thomas W] >> appologize) to reduce the impact of the incompatibility. I do not >> believe the ability to leave out the default-argument-hack (if >> you don't use import-*/exec in the same function) is worth all >> that. MH> Ironically, I _fixed_ my original problem by _adding_ a MH> default-argument-hack. This meant my lambda no longer used a MH> global name but a local one. MH> Well, I think it ironic anyway :) I think it's ironic, too! I laughed when I read your message. MH> For the record, the only reason I had to use exec in that case MH> was because the "new" module is not capable creating a new MH> method. Trying to compile a block of code with a "return" MH> statement but no function decl (to create a code object suitable MH> for a method) fails at compile time. For the record, I realize that there is no reason for the compiler to complain about the code you wrote. If exec supplies an explicit namespace, then everything is hunky-dory. Assuming Guido agrees, I'll fix this ASAP. Jeremy From jeremy at alum.mit.edu Wed Feb 21 23:32:59 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 17:32:59 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <14996.19370.133024.802787@beluga.mojam.com> References: 
                              
                              <019001c09bda$ffb6f4d0$e46940d5@hagrid> <14995.55347.92892.762336@w221.z064000254.bwi-md.dsl.cnc.net> <02a701c09c1b$40441e70$0900a8c0@SPIFF> <14995.60235.591304.213358@w221.z064000254.bwi-md.dsl.cnc.net> <14996.10912.667104.603750@beluga.mojam.com> <14996.11789.246222.237752@w221.z064000254.bwi-md.dsl.cnc.net> <14996.19370.133024.802787@beluga.mojam.com> Message-ID: <14996.16923.805683.428420@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "SM" == Skip Montanaro 
                              writes: Jeremy> The question, then, is whether some amount of incompatible Jeremy> change is acceptable in the 2.1 release. SM> I think of 2.1 as a minor release. Minor releases generally SM> equate in my mind with bug fixes, not significant functionality SM> changes or potential compatibility problems. I think many other SM> people feel the same way. Fair enough. It sounds like you are concerned, on general grounds, about incompatible changes and the specific exec/import issues aren't any more or less important than the other compatibility issues. I don't think I agree with you, but I'll sit on it for a few days and see what real problem reports there are. thinking-there-will-be-lots-to-talk-about-at-the-conference-ly y'rs, Jeremy From tim.one at home.com Thu Feb 22 01:58:34 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 21 Feb 2001 19:58:34 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <002301c09c30$46a89330$e46940d5@hagrid> Message-ID: 
                              
                              [/F] > Is it time to shut down python-dev? (yes, I'm serious) I can't imagine that it would be possible to have such a vigorous and focused debate about Python development in the absence of Python-Dev. That is, this is exactly the kind of thing for which Python-Dev is *most* needed! People disagreeing isn't exactly a new phenomenon ... From tim.one at home.com Thu Feb 22 02:02:37 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 21 Feb 2001 20:02:37 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <200102211854.TAA12664@core.inf.ethz.ch> Message-ID: 
                              
                              BTW, are people similarly opposed to that comparisons can now raise exceptions? It's been mentioned a few times on c.l.py this week, but apparently not (yet) by people who bumped into it in practice. From guido at digicool.com Thu Feb 22 02:28:31 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 21 Feb 2001 20:28:31 -0500 Subject: [Python-Dev] Re: dl module In-Reply-To: Your message of "Wed, 21 Feb 2001 15:02:33 EST." 
                              
                              References: 
                              
                              Message-ID: <200102220128.UAA12546@cj20424-a.reston1.va.home.com> > On 10 Feb, GvR quoted and wrote: > >> Skip Montanaro writes: > >> > MAL> The same could be done for e.g. soundex ... > >> > >> Fred Drake wrote: > >> Given that Skip has published this module and that the C version can > >> always be retrieved from CVS if anyone really wants it, and that > >> soundex has been listed in the "Obsolete Modules" section in the > >> documentation for quite some time, this is probably a good time to > >> remove it from the source distribution. > > > >Yes, go ahead. > > Guido, did you mean go ahead and remove soundex, or the dl module, or > both? Soundex. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Thu Feb 22 02:30:37 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 21 Feb 2001 20:30:37 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: Your message of "Wed, 21 Feb 2001 21:17:56 +0100." <00cf01c09c43$60e360f0$e000a8c0@thomasnotebook> References: <200102211446.PAA07183@core.inf.ethz.ch><20010221095625.A29605@ute.cnri.reston.va.us><00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <002301c09c30$46a89330$e46940d5@hagrid> <20010221191317.A26647@xs4all.nl> <20010221143323.B1441@ute.cnri.reston.va.us> <00cf01c09c43$60e360f0$e000a8c0@thomasnotebook> Message-ID: <200102220130.UAA12562@cj20424-a.reston1.va.home.com> > I would consider the type/class split, making something > like ExtensionClass neccessary, much more annoying for > the advanced programmer. IMHO more efforts should go > into this issue _even before_ p3000. Yes, indeed. This will be on the agenda for Python 2.2. Digital Creations really wants PythonLabs to work on this issue! --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Thu Feb 22 02:36:29 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 21 Feb 2001 20:36:29 -0500 Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: Your message of "Wed, 21 Feb 2001 13:48:28 PST." <3A9437AC.4B2C77E7@ActiveState.com> References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <3A9437AC.4B2C77E7@ActiveState.com> Message-ID: <200102220136.UAA12628@cj20424-a.reston1.va.home.com> > We can have the discussion now, then. In my opinion it is irresponsible > to knowingly unleash backwards incompatibilities on the world with no > warning. If people think Python is unstable it will negatively impact > its growth much more than the delay of some esoteric features. Let me > put the ball back in your court: You should be talking, Mr. 8-bit-strings-should-always-be-considered- Latin-1. ;-) > Is the benefit provided by having nested scopes this year rather than > next year worth the pain of howls of outrage in Python-land. If we give > people a year to upgrade (with warning messages) they will (rightly) > grumble but not scream. But people *do* have a year's warning. Most people probably wait that much before they upgrade. (Half jokingly, half annoyed. :-) --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Thu Feb 22 02:42:11 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 21 Feb 2001 20:42:11 -0500 Subject: [Python-Dev] Those import related syntax errors again... 
In-Reply-To: Your message of "Wed, 21 Feb 2001 23:29:38 +0100." <20010221232938.O26620@xs4all.nl> References: <14995.8522.253084.230222@beluga.mojam.com> 
                              
                              <20010220222936.A2477@newcnri.cnri.reston.va.us> <20010221074710.E13911@xs4all.nl> <14995.55080.928806.56317@w221.z064000254.bwi-md.dsl.cnc.net> <20010221232938.O26620@xs4all.nl> Message-ID: <200102220142.UAA12670@cj20424-a.reston1.va.home.com> > On Wed, Feb 21, 2001 at 09:56:40AM -0500, Jeremy Hylton wrote: > > > A note of clarification seems important here: The restrictions are > > not being introduced to simplify the implementation. They're being > > introduced because there is no sensible meaning for code that uses > > import * and nested scopes with free variables. There are two > > possible meanings, each plausible and neither satisfying. > > I disagree. There are several ways to work around them, or the BDFL could > just make a decision on what it should mean. Since import * is already illegal according to the reference manual, that's an easy call: I pronounce that it's illegal. For b/w compatibility we'll try to allow it in as many situations as possible where it's not ambiguous. > I don't *just* object to the backwards incompatibility, but also to the > added complexity and the strange special cases, most of which were > introduced (at my urging, I'll readily admit and for which I should and do > appologize) to reduce the impact of the incompatibility. I do not believe > the ability to leave out the default-argument-hack (if you don't use > import-*/exec in the same function) is worth all that. The strange special cases should not remain a permanent wart in the language; rather, import * in functions should be considered deprecated. In 2.2 we should issue a warning for this in most cases. (Is there as much as a hassle with exec? IMO exec without an in-clause should also be deprecated.) --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Thu Feb 22 02:45:10 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 21 Feb 2001 20:45:10 -0500 Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: Your message of "Wed, 21 Feb 2001 23:47:22 +0100." <20010221234722.C26647@xs4all.nl> References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <3A9437AC.4B2C77E7@ActiveState.com> <14996.14545.932710.305181@w221.z064000254.bwi-md.dsl.cnc.net> <20010221233334.B26647@xs4all.nl> <20010221174141.B25792@ute.cnri.reston.va.us> <20010221234722.C26647@xs4all.nl> Message-ID: <200102220145.UAA12690@cj20424-a.reston1.va.home.com> > On Wed, Feb 21, 2001 at 05:41:41PM -0500, Andrew Kuchling wrote: > > > Compatibility... ay, there's the rub! > > If you include 'ways of thinking' in 'compatibility', I'll agree. Many > people are used to being able to use exec/from-foo-import-*, and consider it > part of Python's wonderful flexibility and straightforwardness (I know I do, > and all my python-proficient and python-learning colleagues do.) Actually, I've always considered 'exec' mostly one of those must-have- because-the-competition-has-it features. Language theorists love it. In practice, bare exec not that useful; a more restricted form (e.g. one that always requires the caller to explicitly pass in an environment) makes much more sense. As for import *, we all know that it's an abomination... 
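To make the "explicit environment" form concrete, here is a minimal sketch (the variable names are just for illustration); the executed string sees only the dictionaries the caller passes in:

    code = "x = 6 * 7"
    env = {}
    exec code in env        # the string runs against an explicit namespace
    print env["x"]          # prints 42; nothing leaks into the caller's namespace

    # The two-dictionary form mentioned earlier, ``exec ... in globals, locals'':
    glb = {"offset": 100}
    loc = {}
    exec "y = offset + 1" in glb, loc
    print loc["y"]          # prints 101; the assignment landed in loc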
--Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Thu Feb 22 02:46:35 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 21 Feb 2001 20:46:35 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: Your message of "Thu, 22 Feb 2001 09:55:34 +1100." 
                              
                              References: 
                              
                              Message-ID: <200102220146.UAA12705@cj20424-a.reston1.va.home.com> > For the record, the only reason I had to use exec in that case was because > the "new" module is not capable creating a new method. Trying to compile a > block of code with a "return" statement but no function decl (to create a > code object suitable for a method) fails at compile time. I don't understand. Methods do have a function declaration: class C: def meth(self): pass Or am I misunderstanding? --Guido van Rossum (home page: http://www.python.org/~guido/) From MarkH at ActiveState.com Thu Feb 22 03:02:28 2001 From: MarkH at ActiveState.com (Mark Hammond) Date: Thu, 22 Feb 2001 13:02:28 +1100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <200102220146.UAA12705@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              [Guido] > I don't understand. Methods do have a function declaration: > > class C: > > def meth(self): > pass > > Or am I misunderstanding? The problem is I have a class object, and the source-code for the method body as a string, generated at runtime based on runtime info from the reflection capabilities of the system we are interfacing to. The simplest example is for method code of "return None". I dont know how to get a code object for this snippet so I can use the new module to get a new method object. Attempting to compile this string gives a syntax error. There was some discussion a few years ago that adding "function" as a "compile type" may be an option, but I never progressed it. So my solution is to create a larger string that includes the method declaration, like: """def foo(self): return None """ exec that, get the function object out of the exec'd namespace and inject it into the class. Mark. From guido at digicool.com Thu Feb 22 03:07:49 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 21 Feb 2001 21:07:49 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: Your message of "Thu, 22 Feb 2001 13:02:28 +1100." 
                              
                              References: 
                              
Message-ID: <200102220207.VAA12996@cj20424-a.reston1.va.home.com>

> [Guido]
> > I don't understand. Methods do have a function declaration:
> >
> >     class C:
> >         def meth(self):
> >             pass
> >
> > Or am I misunderstanding?

[Mark]
> The problem is I have a class object, and the source-code for the method body as a string, generated at runtime based on runtime info from the reflection capabilities of the system we are interfacing to. The simplest example is for method code of "return None".
>
> I don't know how to get a code object for this snippet so I can use the new module to get a new method object. Attempting to compile this string gives a syntax error. There was some discussion a few years ago that adding "function" as a "compile type" may be an option, but I never progressed it.
>
> So my solution is to create a larger string that includes the method declaration, like:
>
>     """def foo(self):
>         return None
>     """
>
> exec that, get the function object out of the exec'd namespace and inject it into the class.

Aha, I see. That's how I would have done it too. I admit that it's attractive to exec this in the local namespace and then simply use the local variable 'foo', but that doesn't quite work, so 'exec...in...' is the right thing to do anyway.

--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at digicool.com Thu Feb 22 03:11:51 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 21 Feb 2001 21:11:51 -0500
Subject: [Python-Dev] Those import related syntax errors again...
In-Reply-To: Your message of "Wed, 21 Feb 2001 19:54:06 +0100." <200102211854.TAA12664@core.inf.ethz.ch>
References: <200102211854.TAA12664@core.inf.ethz.ch>
Message-ID: <200102220211.VAA13014@cj20424-a.reston1.va.home.com>

> I should admit that I like the idea of nested scopes, because I like functional programming style, but I don't know whether this returning 3 is nice ;)?
>
>     def f():
>         def g():
>             return y
>         # put as many innocent code lines as you like
>         y=3
>         return g()

This is a red herring; I don't see how this differs from the confusion in

    def f():
        print y
        # lots of code
        y = 3

and I don't see how nested scopes add a new twist to this known issue.

> It really seems that there's not been enough discussion about the change,

Maybe,

> and I think that it is also ok to honestly be worried about what users will feel about this? (and we can only think about this because the feedback is not that much)

FUD.

> Will this code breakage "scare" them and slow down migration to new versions of python? They are already afraid of going 2.0(?). It is maybe just a PR matter but ...

More FUD.

> The *point* is that we are not going from version 0.8 to version 0.9 of our toy research lisp dialect, passing from dynamic scoping to lexical scoping. (Yes, I think that changing semantics behind the scenes is not a polite move.)

Well, I'm actually glad to hear this -- Python now has such a large user base that language changes are deemed impractical.

> We really need the BDFL proposing the right thing.

We'll discuss this more at the PythonLabs group meeting. For now, I prefer to move forward with nested scopes, breaking code and all.
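For reference, a bare-bones version of the wrap-it-in-a-def trick Mark and Guido discuss above, spelled with 'exec ... in ...' so it stays within the blessed form; the helper, class and method names are invented for the example:

    def make_method(klass, name, body):
        # Wrap the runtime-generated body in a dummy "def" so it compiles,
        # then exec it in an explicit, private namespace.
        src = "def %s(self):\n    %s\n" % (name, body)
        ns = {}
        exec src in ns
        # Attach the plain function; a classic class turns it into a method
        # on attribute access.  (The new module's instancemethod() is the
        # other route mentioned in the thread.)
        setattr(klass, name, ns[name])

    class C:
        pass

    make_method(C, "spam", "return None")
    print C().spam()        # prints None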
--Guido van Rossum (home page: http://www.python.org/~guido/)

From guido at digicool.com Thu Feb 22 03:24:31 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 21 Feb 2001 21:24:31 -0500
Subject: [Python-Dev] Strange import behaviour, recently introduced
In-Reply-To: Your message of "Wed, 21 Feb 2001 17:39:09 +0100." <036b01c09c24$d0aa20a0$e000a8c0@thomasnotebook>
References: <20010221150634.AB6ED371690@snelboot.oratrix.nl> <036b01c09c24$d0aa20a0$e000a8c0@thomasnotebook>
Message-ID: <200102220224.VAA13210@cj20424-a.reston1.va.home.com>

> Jack Jansen wrote:
> > This week I noticed that these resource imports have suddenly become very very slow. Whereas startup time of my application used to be around 2 seconds (where the non-frozen version took 6 seconds) it now takes almost 20 times as long. The non-frozen version still takes 6 seconds.

[Thomas Heller]
> The most recent version calls PyImport_ImportModuleEx() for '__builtin__' for every import of __builtin__ without caching the result in a static variable.
>
> Can this be the cause?

Would this help?

*** import.c	2001/02/20 21:43:24	2.162
--- import.c	2001/02/22 02:24:55
***************
*** 1873,1878 ****
--- 1873,1879 ----
  {
  	static PyObject *silly_list = NULL;
  	static PyObject *builtins_str = NULL;
+ 	static PyObject *builtin_str = NULL;
  	static PyObject *import_str = NULL;
  	PyObject *globals = NULL;
  	PyObject *import = NULL;
***************
*** 1887,1892 ****
--- 1888,1896 ----
  	builtins_str = PyString_InternFromString("__builtins__");
  	if (builtins_str == NULL)
  		return NULL;
+ 	builtin_str = PyString_InternFromString("__builtin__");
+ 	if (builtin_str == NULL)
+ 		return NULL;
  	silly_list = Py_BuildValue("[s]", "__doc__");
  	if (silly_list == NULL)
  		return NULL;
***************
*** 1902,1913 ****
  	}
  	else {
  		/* No globals -- use standard builtins, and fake globals */
  		PyErr_Clear();
! 		builtins = PyImport_ImportModuleEx("__builtin__",
! 						   NULL, NULL, NULL);
  		if (builtins == NULL)
  			return NULL;
  		globals = Py_BuildValue("{OO}", builtins_str, builtins);
  		if (globals == NULL)
  			goto err;
--- 1906,1918 ----
  	}
  	else {
  		/* No globals -- use standard builtins, and fake globals */
+ 		PyInterpreterState *interp = PyThreadState_Get()->interp;
  		PyErr_Clear();
! 		builtins = PyDict_GetItem(interp->modules, builtin_str);
  		if (builtins == NULL)
  			return NULL;
+ 		Py_INCREF(builtins);
  		globals = Py_BuildValue("{OO}", builtins_str, builtins);
  		if (globals == NULL)
  			goto err;

--Guido van Rossum (home page: http://www.python.org/~guido/)

From thomas at xs4all.net Thu Feb 22 09:00:47 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 22 Feb 2001 09:00:47 +0100
Subject: [Python-Dev] Backwards Incompatibility
In-Reply-To: <200102220145.UAA12690@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Wed, Feb 21, 2001 at 08:45:10PM -0500
References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <3A9437AC.4B2C77E7@ActiveState.com> <14996.14545.932710.305181@w221.z064000254.bwi-md.dsl.cnc.net> <20010221233334.B26647@xs4all.nl> <20010221174141.B25792@ute.cnri.reston.va.us> <20010221234722.C26647@xs4all.nl> <200102220145.UAA12690@cj20424-a.reston1.va.home.com>
Message-ID: <20010222090047.P26620@xs4all.nl>

On Wed, Feb 21, 2001 at 08:45:10PM -0500, Guido van Rossum wrote:
> > On Wed, Feb 21, 2001 at 05:41:41PM -0500, Andrew Kuchling wrote:

> Actually, I've always considered 'exec' mostly one of those must-have-
> because-the-competition-has-it features. Language theorists love it.
> In practice, bare exec not that useful; a more restricted form
> (e.g. one that always requires the caller to explicitly pass in an
> environment) makes much more sense.

> As for import *, we all know that it's an abomination...

Okay, I can live with that, but can we please have at least one release between "these are cool features and we use them in the std. library ourselves" and "no no you bad boy!" ? Or fork Python 3.0, move nested scopes to that, and release it parallel to 2.1 ?

-- Thomas Wouters
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From tony at lsl.co.uk Thu Feb 22 10:02:51 2001 From: tony at lsl.co.uk (Tony J Ibbs (Tibs)) Date: Thu, 22 Feb 2001 09:02:51 -0000 Subject: [Python-Dev] RE: Update to PEP 232 In-Reply-To: <200102220012.TAA12047@cj20424-a.reston1.va.home.com> Message-ID: <001b01c09cae$3c3fa360$f05aa8c0@lslp7o.int.lsl.co.uk> Guido responded to my points thus: > > 1. Clarify the final statement - I seem to have the > > impression (sorry, can't find a message to back it up) > > that either the BDFL or Tim Peters is very against > > anything other than the "simple" #f.a = 1# sort of > > thing - unless I'm mischannelling (?) again. > > Agreed. That's a relief - I obviously had "heard" right! > > 2. Reference the thread/idea a little while back that ended > > with #def > f(a,b) having (publish=1)# ... > > Sure, reference it. It will never be added while I'm in charge > though. Well, I'd kind of assumed that, given my "memory" of the first point. But of the schemes that won't be adopted, that's the one *I* preferred. (my own sense of "locality" means that I would prefer to be placing function attributes near the declaration of the function, especially given my penchant for long docstrings which move the end of the function off-screen. But then I haven't *used* them yet, and I assume this sort of point has been taken into account. And anyway I definitely prefer your sense of language design to mine). Keep on trying not to get run over by buses, and thanks again for the neat language, Tibs -- Tony J Ibbs (Tibs) http://www.tibsnjoan.co.uk/ "Bounce with the bunny. Strut with the duck. Spin with the chickens now - CLUCK CLUCK CLUCK!" BARNYARD DANCE! by Sandra Boynton My views! Mine! Mine! (Unless Laser-Scan ask nicely to borrow them.) From fredrik at effbot.org Thu Feb 22 11:18:21 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Thu, 22 Feb 2001 11:18:21 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: 
                              
Message-ID: <029c01c09cbb$a3e7f8c0$e46940d5@hagrid>

Tim wrote:
> [/F]
> > Is it time to shut down python-dev? (yes, I'm serious)
>
> I can't imagine that it would be possible to have such a vigorous and focused debate about Python development in the absence of Python-Dev.

If a debate doesn't lead anywhere, it's just a waste of time. Code monkey contributions can be handled via sourceforge, and general whining works just as well on comp.lang.python.

:::

Donning my devil's advocate suit, here are some recent observations:

- Important decisions are made on internal PythonLabs meetings (unit testing, the scope issue, etc), not by an organized python-dev process. Does anyone care about -1 and +1's anymore?

- The PEP process isn't working ("I updated the PEP and checked in the code", "but *that* PEP doesn't apply to *me*", etc).

- Impressive hacks are more important than concerns from people who make their living selling Python technology (rather than a specific application). Codewise, nested scopes are amazing. From a marketing perspective, it's a disaster.

(even more absurd allegations snipped)

Am I entirely wrong?

Cheers /F
                              
                              Message-ID: <029901c09cbb$a31cb980$e46940d5@hagrid> > BTW, are people similarly opposed to that comparisons can now raise > exceptions? It's been mentioned a few times on c.l.py this week, but > apparently not (yet) by people who bumped into it in practice. but that's not a new thing in 2.1, is it? Python 1.5.2 (#0, May 9 2000, 14:04:03) [MSC 32 bit (Intel)] on win32 Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam >>> class spam: ... def __cmp__(self, other): ... raise "Hi tim!" ... >>> a = [spam(), spam(), spam()] >>> a.sort() Traceback (innermost last): File "
<stdin>
", line 1, in ? File "
<stdin>
                              ", line 3, in __cmp__ Hi tim! Cheers /F From fredrik at effbot.org Thu Feb 22 11:38:45 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Thu, 22 Feb 2001 11:38:45 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: <200102211854.TAA12664@core.inf.ethz.ch> <200102220211.VAA13014@cj20424-a.reston1.va.home.com> Message-ID: <029d01c09cbb$a44fe250$e46940d5@hagrid> Guido van Rossum wrote: > > and I think that is also ok to honestely be worried about what user > > will feel about this? (and we can only think about this beacuse > > the feedback is not that much) > > FUD. > > > Will this code breakage "scare" them and slow down migration to new versions > > of python? They are already afraid of going 2.0(?). It is maybe just PR matter > > but ... > > More FUD. but FUD is what we have to deal with on the market. I know from my 2.0 experiences that lots of people are concerned about even small changes (more ways to do it isn't always what a large organization wants). Pointing out that "hey, it's a major release" or "you can ignore the new features, and pretend it's just a better 1.5.2" helps a little bit, but the scepticism is still there. And here we have something that breaks code, breaks tools, breaks training material, and breaks books. "Everything you know about Python scoping is wrong. Get over it". The more I think about it, the less I think it belongs in any version before 3.0. Cheers /F From fredrik at effbot.org Thu Feb 22 11:40:29 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Thu, 22 Feb 2001 11:40:29 +0100 Subject: [Python-Dev] Backwards Incompatibility References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <3A9437AC.4B2C77E7@ActiveState.com> <14996.14545.932710.305181@w221.z064000254.bwi-md.dsl.cnc.net> <20010221233334.B26647@xs4all.nl> <20010221174141.B25792@ute.cnri.reston.va.us> <20010221234722.C26647@xs4all.nl> <200102220145.UAA12690@cj20424-a.reston1.va.home.com> <20010222090047.P26620@xs4all.nl> Message-ID: <02b201c09cbc$2a266d40$e46940d5@hagrid> Thomas wrote: > Okay, I can live with that, but can we please have at least one release > between "these are cool features and we use them in the std. library > ourselves" and "no no you bad boy!" ? Or fork Python 3.0, move nested > scopes to that, and release it parallel to 2.1 ? hey, that would mean that we can once again release two versions on the same day! (or why not three: 1.6.1, 2.1, and 3.0! ;-) Cheers /F From mal at lemburg.com Thu Feb 22 12:21:33 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Thu, 22 Feb 2001 12:21:33 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: 
                              
                              <029c01c09cbb$a3e7f8c0$e46940d5@hagrid> Message-ID: <3A94F63D.25FF8595@lemburg.com> Fredrik Lundh wrote: > > Tim wrote: > > > [/F] > > > Is it time to shut down python-dev? (yes, I'm serious) > > > > I can't imagine that it would be possible to have such a vigorous and > > focused debate about Python development in the absence of Python-Dev. > > If a debate doesn't lead anywhere, it's just a waste of time. > > Code monkey contributions can be handled via sourceforge, > and general whining works just as well on comp.lang.python. Na, Fredrik, we wouldn't want to lose our nice little chat room -- it's way too much fun around here :-) > ::: > > Donning my devil's advocate suite, here are some recent observations: > > - Important decisions are made on internal PythonLabs meetings > (unit testing, the scope issue, etc), not by an organized python- > dev process. Does anyone care about -1 and +1's anymore? Well, being one of the first opponents of nested scopes (nobody else seemed to care back then...) and seeing how many of those other obscure PEPs made their way into the core, I have similar feelings. Still, I see the voting system as being a democratic method of reaching consensus: if there only one -1 and half a dozen +1s then I am overruled. > - The PEP process isn't working ("I updated the PEP and checked > in the code", "but *that* PEP doesn't apply to *me*", etc). Aren't PEPs meant to store information gathered in ongoing discussions rather than being an official statement of consent ? > - Impressive hacks are more important than concerns from people > who make their living selling Python technology (rather than a > specific application). Codewise, nested scopes are amazing. > From a marketing perspective, it's a disaster. Agreed and I have never understood why getting lambdas to work without keyword hacks is motivation enough to break code in all kinds of places. The nested scopes thingie started out as simple idea, but has in time grown so many special cases that I think the idea has already proven all by itself that it is the wrong approach to the problem (if there ever was a problem -- lambdas are certainly not newbie style gadgets). -- Marc-Andre Lemburg ______________________________________________________________________ Company & Consulting: http://www.egenix.com/ Python Pages: http://www.lemburg.com/python/ From guido at digicool.com Thu Feb 22 14:13:00 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 22 Feb 2001 08:13:00 -0500 Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: Your message of "Thu, 22 Feb 2001 09:00:47 +0100." <20010222090047.P26620@xs4all.nl> References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <3A9437AC.4B2C77E7@ActiveState.com> <14996.14545.932710.305181@w221.z064000254.bwi-md.dsl.cnc.net> <20010221233334.B26647@xs4all.nl> <20010221174141.B25792@ute.cnri.reston.va.us> <20010221234722.C26647@xs4all.nl> <200102220145.UAA12690@cj20424-a.reston1.va.home.com> <20010222090047.P26620@xs4all.nl> Message-ID: <200102221313.IAA15384@cj20424-a.reston1.va.home.com> > > As for import *, we all know that it's an abomination... > > Okay, I can live with that, but can we please have at least one release > between "these are cool features and we use them in the std. library > ourselves" and "no no you bad boy!" ? 
Or fork Python 3.0, move nested scopes > to that, and release it parallel to 2.1 ? Of course. We're not making it illegal yet, except in some highly specific circumstances where IMO the goal justifies the means. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Thu Feb 22 14:15:36 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 22 Feb 2001 08:15:36 -0500 Subject: [Python-Dev] again on nested scopes and Backwards Incompatibility In-Reply-To: Your message of "Thu, 22 Feb 2001 00:25:15 +0100." <200102212325.AAA20597@core.inf.ethz.ch> References: <200102212325.AAA20597@core.inf.ethz.ch> Message-ID: <200102221315.IAA15405@cj20424-a.reston1.va.home.com> > PS: sorry for my abuse of we given that I'm jython devel not a python one, > but it is already difficult so... I feel I'm missing something about > this group dynamics. Hey Samuele, don't worry about the group dynamics. You're doing fine, and the group will survive. We've had heated debates before, and we've always come out for the better. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Thu Feb 22 14:20:01 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 22 Feb 2001 08:20:01 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: Your message of "Wed, 21 Feb 2001 20:02:37 EST." 
                              
                              References: 
                              
                              Message-ID: <200102221320.IAA15469@cj20424-a.reston1.va.home.com> > BTW, are people similarly opposed to that comparisons can now raise > exceptions? It's been mentioned a few times on c.l.py this week, but > apparently not (yet) by people who bumped into it in practice. That's not exactly news though, is it? Comparisons have been raising exceptions since, oh, Python 1.4 at least. --Guido van Rossum (home page: http://www.python.org/~guido/) From pedroni at inf.ethz.ch Thu Feb 22 14:22:25 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Thu, 22 Feb 2001 14:22:25 +0100 (MET) Subject: [Python-Dev] Those import related syntax errors again... Message-ID: <200102221322.OAA07627@core.inf.ethz.ch> Hi. I have learned that I should not play diplomacy between people that make money out of software. I partecipated to the discussion for two reasons: - I want to avoid an ugly to implement solution (I'm the guy that should code nested scopes in jython) - I got annoyed by Jeremy using his "position" and (your) BDFL decisions and the fact that code is already in, in order to avoid to be completely intellectually honest wrt to his creature. (But CLEARLY this was just my feeling, and getting annoyed is a feeling too) > > > I should admit that I like the idea of nested scopes, because I like functional > > programming style, but I don't know whether this returning 3 is nice ;)? > > > > def f(): > > def g(): > > return y > > # put as many innoncent code lines as you like > > y=3 > > return g() > This works. > This is a red herring; I don't see how this differs from the confusion > in > > def f(): > print y > # lots of code > y = 3 > > and I don't see how nested scopes add a new twist to this known issue. > This raises an error (at least at runtime). But yes it is just matter of taste and readability, mostly personal stuff. And on the long run maybe the second should raise a compile-time error (your choice). > > and I think that is also ok to honestely be worried about what user > > will feel about this? (and we can only think about this beacuse > > the feedback is not that much) > > FUD. > > > Will this code breakage "scare" them and slow down migration to new versions > > of python? They are already afraid of going 2.0(?). It is maybe just PR matter > > but ... > > More FUD. > Hey, I don't make money out of python or jython. I not invoked FUD, I was just pointing out what - I thought - was behind the discussion. FUD is already among us but you and the others make money with python, this is not the case for me. > > The *point* is that we are not going from version 0.8 to version 0.9 > > of our toy research lisp dialect, passing from dynamic scoping to lexical > > scoping. (Yes, I think, that changing semantic behind the scene is not > > a polite move.) > > Well, I'm actually glad to hear this -- Python now has such a large > user base that language changes are deemed impractical. > I'm just a newbie, I always read in books and e-articles: "python is a simple, elegant, consistent language, developed (slowly) with extremal care". It's all about being intellectually honest (yes this is my personal holy war): e.g. [GvR] > > I would consider the type/class split, making something > > like ExtensionClass neccessary, much more annoying for > > the advanced programmer. IMHO more efforts should go > > into this issue _even before_ p3000. > > Yes, indeed. This will be on the agenda for Python 2.2. Digital > Creations really wants PythonLabs to work on this issue! 
this is an honest statement. Things have changed (people are becoming aware of this). With nested scopes there were two possibilities: given the code

(I)
    y=1
    def f():
        y=666
        def g():
            return y

one could go the way we are going, which breaks this unless people fix it to

(II)
    y=1
    def f():
        y=666
        def g():
            global y
            return y

or one could require some explicit syntax for the new behaviour:

(III)
    y=1
    def f():
        nest y
        y=666
        def g():
            return y

I agree that designing solution (III) would be no simpler, and in the long run it is just inelegant lossage (I can agree with this), up to other orthogonal issues (see above). Python is not closed source; it's your language and your user base, and you make money indirectly out of it: you are the BDFL and you can choose (if you made money directly out of Python maybe you would have to choose (III), or you would be MS or Sun...). But I think it's clear that you should accept people (for their business reasons) saying "please can we go slower". And you can reply FUD... regards, Samuele Pedroni. PS: Yes I will not play this anymore. Lesson learned ;) From guido at digicool.com Thu Feb 22 14:28:27 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 22 Feb 2001 08:28:27 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: Your message of "Wed, 21 Feb 2001 17:13:46 CST." <14996.19370.133024.802787@beluga.mojam.com> References:
                              
                              <019001c09bda$ffb6f4d0$e46940d5@hagrid> <14995.55347.92892.762336@w221.z064000254.bwi-md.dsl.cnc.net> <02a701c09c1b$40441e70$0900a8c0@SPIFF> <14995.60235.591304.213358@w221.z064000254.bwi-md.dsl.cnc.net> <14996.10912.667104.603750@beluga.mojam.com> <14996.11789.246222.237752@w221.z064000254.bwi-md.dsl.cnc.net> <14996.19370.133024.802787@beluga.mojam.com> Message-ID: <200102221328.IAA15503@cj20424-a.reston1.va.home.com> > Jeremy> The question, then, is whether some amount of incompatible > Jeremy> change is acceptable in the 2.1 release. > > I think of 2.1 as a minor release. Minor releases generally equate in my > mind with bug fixes, not significant functionality changes or potential > compatibility problems. I think many other people feel the same way. Hm, I disagree. Remember, back in the days of Python 1.x, we introduced new stuff even with micro releases (1.5.2 had a lot of stuff that 1.5.1 didn't). My "feel" for Python version numbers these days is that the major number only needs to be bumped for very serious reasons. We switched to 2.0 mostly for PR reasons, and I hope we can stay at 2.x for a while. Pure bugfix releases will have a 3rd numbering level; in fact there will eventually be a 2.0.1 release that fixes bugs only (including the GPL incompatibility bug in the license!). 2.x versions can introduce new things. We'll do our best to keep old code from breaking unnecessarily, but I don't want our success to stand in the way of progress, and I will allow some things to break occasionally if it serves a valid purpose. You may consider this a break with tradition -- so be it. If 2.1 really breaks too much code, we will fix the policy for 2.2, and do our darndest to fix the code in 2.1.1. > Earlier this month I suggested that adopting a release numbering scheme > similar to that used for the Linux kernel would be appropriate. Please no! Unless you make a living hacking Linux kernels, it's too hard to remember which is odd and which is even, because it's too arbitrary. > Perhaps it's not so much the details of the numbering as the > up-front statement of something like, "version numbers like x.y > where y is even represent stable releases" or, "backwards > incompatibility will only be introduced when the major version > number is incremented". It's more that there is a statement about > stability vs new features that serves as a published committment the > user community can rely on. After all the changes that made it into > 2.0, I don't think anyone to have to address compatibility problems > with 2.1. I don't want to slide into version number inflation. There's not enough new in 2.1 to call it 3.0. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Thu Feb 22 14:51:03 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 22 Feb 2001 08:51:03 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: Your message of "Thu, 22 Feb 2001 11:18:21 +0100." <029c01c09cbb$a3e7f8c0$e46940d5@hagrid> References: 
                              
                              <029c01c09cbb$a3e7f8c0$e46940d5@hagrid> Message-ID: <200102221351.IAA15568@cj20424-a.reston1.va.home.com> > Donning my devil's advocate suite, here are some recent observations: > > - Important decisions are made on internal PythonLabs meetings > (unit testing, the scope issue, etc), not by an organized python- > dev process. Does anyone care about -1 and +1's anymore? Python-dev is as organized as its participants want it to be. It appeared that very few people (apart from you) were interested in unit testing, so we looked elsewhere. We found that others inside Digital Creations had lots of experience with PyUnit and really liked it. Without arguments, +1 and -1's indeed don't have that much weight. With the right argument, a single +1 or -1 can be sufficient. Python is (still) not a democracy. > - The PEP process isn't working ("I updated the PEP and checked > in the code", "but *that* PEP doesn't apply to *me*", etc). I wouldn't say it isn't working. I believe it's very helpful to have a working document checked in somewhere to augment the discussion, and the PEPs make progress possible where in the past we went around in circles in the list without ever coming to a conclusion. Forcing the proposer of a new feature to write a PEP is a good way to think through more of the consequences of a new idea. Referring to a PEP when arguments are repeated can cut short discussion. Note that the PEP work flow document (PEP 1) explicitly states that the BDFL has the final word. But of course sometimes the realities of software development catch up with us -- we can't possibly hope to do all design ahead of all implementation, and during testing we may discover important new facts that must affect the design. > - Impressive hacks are more important than concerns from people > who make their living selling Python technology (rather than a > specific application). Codewise, nested scopes are amazing. > From a marketing perspective, it's a disaster. Aha, now we're talking. Python is growing up, and more and more people are making money by supporting it. Obviously, businesspeople have to be more conservative than software developers. But do you *really* think that breaking the occasional exec-without-in-clause or from-import-* will affect a large enough portion of the user population to make a difference? People with a lot at stake tend to be slow in upgrading anyway. So we're releasing 2.1 mostly for the bleeding edge consumers -- e.g. Paul Barret recently announced that his institute is upgrading to 2.0 and doesn't plan to switch to 2.1 any time soon. That's fine with me. Hey, here's an idea. We could add the warning API to 2.0.1 (it's backwards compatible AFAIK), and you can release PY201 with warnings added for things that your customers will need to change before they switch to PY21. --Guido van Rossum (home page: http://www.python.org/~guido/) From fredrik at effbot.org Thu Feb 22 15:55:33 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Thu, 22 Feb 2001 15:55:33 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: 
                              
                              <019001c09bda$ffb6f4d0$e46940d5@hagrid> <14995.55347.92892.762336@w221.z064000254.bwi-md.dsl.cnc.net> <02a701c09c1b$40441e70$0900a8c0@SPIFF> <14995.60235.591304.213358@w221.z064000254.bwi-md.dsl.cnc.net> <14996.10912.667104.603750@beluga.mojam.com> <14996.11789.246222.237752@w221.z064000254.bwi-md.dsl.cnc.net> <14996.19370.133024.802787@beluga.mojam.com> <200102221328.IAA15503@cj20424-a.reston1.va.home.com> Message-ID: <04bb01c09cdf$85152750$e46940d5@hagrid> Guido wrote: > Hm, I disagree. Remember, back in the days of Python 1.x, we > introduced new stuff even with micro releases (1.5.2 had a lot of > stuff that 1.5.1 didn't). Last year, we upgraded a complex system from 1.2 to 1.5.2. Two modules broke; one didn't expect exceptions to be instances, and one messed up under the improved module cleanup model. We recently upgraded another major system from 1.5.2 to 2.0. It was a much larger undertaking; about 50 modules were affected. And six months after 2.0, we'll end up with yet another incompatible version... As a result, we end up with a lot more versions in active use, more support overhead, maintenance hell for extension writers (tried shipping a binary extension lately?), training headaches ("it works this way in 1.5.2 and 2.0 but this way in 2.1, but this works this way in 1.5.2 but this way in 2.0 and 2.1, and this works..."), and all our base are belong to cats. > 2.x versions can introduce new things. We'll do our best to keep > old code from breaking unnecessarily, but I don't want our success > to stand in the way of progress, and I will allow some things to > break occasionally if it serves a valid purpose. But nested scopes breaks everything: books (2.1 will appear at about the same time as the first batch of 2.0 books), training materials, gurus, tools, and as we've seen, some code. > I don't want to slide into version number inflation. There's not > enough new in 2.1 to call it 3.0. Besides nested scopes, that is. I'm just an FL, but I'd leave them out of a release that follows only 6 months after a major release, no matter what version number we're talking about. Leave the new compiler in, and use it to warn about import/exec (can it detect shadowing too?), but don't make the switch until everyone's ready. Cheers /F From nas at arctrix.com Thu Feb 22 16:14:37 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Thu, 22 Feb 2001 07:14:37 -0800 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <200102221351.IAA15568@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Thu, Feb 22, 2001 at 08:51:03AM -0500 References: 
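For what it's worth, warnings of that sort are cheap to prototype even outside the compiler. Here is a rough sketch (plain string matching only; a real checker would walk the parse tree, and the function name is made up) using the warnings module that 2.1 adds, flagging 'import *' and bare 'exec' at function level:

    import warnings

    def flag_legacy_constructs(filename):
        # Purely illustrative: flag "from ... import *" and bare "exec"
        # statements that are indented, i.e. probably inside a function.
        f = open(filename)
        lines = f.readlines()
        f.close()
        for i in range(len(lines)):
            line = lines[i]
            stripped = line.lstrip()
            indented = stripped and line[0] in " \t"
            if not indented:
                continue
            if stripped[:5] == "from " and stripped.find("import *") >= 0:
                warnings.warn("%s:%d: 'from ... import *' inside a function"
                              % (filename, i + 1))
            elif stripped[:5] == "exec " and stripped.find(" in ") < 0:
                warnings.warn("%s:%d: bare 'exec' inside a function"
                              % (filename, i + 1))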
                              
                              <029c01c09cbb$a3e7f8c0$e46940d5@hagrid> <200102221351.IAA15568@cj20424-a.reston1.va.home.com> Message-ID: <20010222071437.A21075@glacier.fnational.com> On Thu, Feb 22, 2001 at 08:51:03AM -0500, Guido van Rossum wrote: > Hey, here's an idea. We could add the warning API to 2.0.1 (it's > backwards compatible AFAIK), and you can release PY201 with warnings > added for things that your customers will need to change before they > switch to PY21. This is a wonderful idea. Neil From thomas at xs4all.net Thu Feb 22 16:27:25 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Thu, 22 Feb 2001 16:27:25 +0100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <200102221351.IAA15568@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Thu, Feb 22, 2001 at 08:51:03AM -0500 References: 
                              
                              <029c01c09cbb$a3e7f8c0$e46940d5@hagrid> <200102221351.IAA15568@cj20424-a.reston1.va.home.com> Message-ID: <20010222162725.A7486@xs4all.nl> On Thu, Feb 22, 2001 at 08:51:03AM -0500, Guido van Rossum wrote: > Hey, here's an idea. We could add the warning API to 2.0.1 (it's > backwards compatible AFAIK), and you can release PY201 with warnings > added for things that your customers will need to change before they > switch to PY21. Definately +1 on that. While on the subject: will all of 'from module import *' be deprecated, even at module level ? How should code like Mailman's mm_cfg.py/Defaults.py construct be rewritten to provide similar functionality ? Much as I dislike 'from module import *', it really does have its uses. -- Thomas Wouters 
                              
Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From pedroni at inf.ethz.ch Thu Feb 22 17:57:44 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Thu, 22 Feb 2001 17:57:44 +0100 (MET) Subject: [Python-Dev] a doc bug Message-ID: <200102221657.RAA13265@core.inf.ethz.ch> I don't know if anyone was already aware of this, but the tutorial in the development version of the docs still refers to the old scoping rules and to the old hack trick: http://python.sourceforge.net/devel-docs/tut/node6.html#SECTION006740000000000000000 Something to fix, in any case. regards. From loewis at informatik.hu-berlin.de Thu Feb 22 18:57:49 2001 From: loewis at informatik.hu-berlin.de (Martin von Loewis) Date: Thu, 22 Feb 2001 18:57:49 +0100 (MET) Subject: [Python-Dev] compile leaks memory. lots of memory. Message-ID: <200102221757.SAA17087@pandora> > It would be helpful to get some analysis on this known problem > before the beta release. It looks like there is a leak of symtable entries. In particular, symtable_enter_scope has

    if (st->st_cur) {
        prev = st->st_cur;
        if (PyList_Append(st->st_stack, (PyObject *)st->st_cur) < 0) {
            Py_DECREF(st->st_cur);
            st->st_errors++;
            return;
        }
    }
    st->st_cur = (PySymtableEntryObject *)\
        PySymtableEntry_New(st, name, type, lineno);
    if (strcmp(name, TOP) == 0)
        st->st_global = st->st_cur->ste_symbols;
    if (prev)
        if (PyList_Append(prev->ste_children, (PyObject *)st->st_cur) < 0)
            st->st_errors++;

First, it seems that Py_XDECREF(prev); is missing. That alone does not fix the leak, though, since prev is always null in the test case. The real problem comes from st_cur never being released, AFAICT. There is a DECREF in symtable_exit_scope, but that function is not called in the test case - symtable_enter_scope is. For symmetry reasons, it appears that there should be a call to symtable_exit_scope for the global scope somewhere (which apparently is built in symtable_build). I can't figure out what the correct place for that call would be, though. Regards, Martin From guido at digicool.com Thu Feb 22 21:46:03 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 22 Feb 2001 15:46:03 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: Your message of "Thu, 22 Feb 2001 16:27:25 +0100." <20010222162725.A7486@xs4all.nl> References:
                              
                              <029c01c09cbb$a3e7f8c0$e46940d5@hagrid> <200102221351.IAA15568@cj20424-a.reston1.va.home.com> <20010222162725.A7486@xs4all.nl> Message-ID: <200102222046.PAA16702@cj20424-a.reston1.va.home.com> > On Thu, Feb 22, 2001 at 08:51:03AM -0500, Guido van Rossum wrote: > > > Hey, here's an idea. We could add the warning API to 2.0.1 (it's > > backwards compatible AFAIK), and you can release PY201 with warnings > > added for things that your customers will need to change before they > > switch to PY21. > > Definately +1 on that. Hold on. Jeremy has an announcement to make. But he's probably still struggling home -- about 3-4 inches of snow (so far) were dumped on the DC area this afternoon. > While on the subject: will all of 'from module import *' be deprecated, even > at module level ? No, not at the module level. (There it is only frowned upon. :-) > How should code like Mailman's mm_cfg.py/Defaults.py > construct be rewritten to provide similar functionality ? Much as I dislike > 'from module import *', it really does have its uses. I have no idea what mm_cfg.py/Defaults.py is, but yes, import * has its uses! --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Thu Feb 22 22:01:02 2001 From: tim.one at home.com (Tim Peters) Date: Thu, 22 Feb 2001 16:01:02 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <029901c09cbb$a31cb980$e46940d5@hagrid> Message-ID: 
                              
                              [tim] > BTW, are people similarly opposed to that comparisons can now raise > exceptions? It's been mentioned a few times on c.l.py this week, but > apparently not (yet) by people who bumped into it in practice. [/F] > but that's not a new thing in 2.1, is it? No, but each release raises cmp exceptions in cases it didn't the release before. If we were dead serious about "no backward incompatibility ever, no way no how", I'd expect arguments just as intense about that. So I conclude we're not dead serious about that. Which is good! But in a world without absolutes, there are no killer arguments either. From barry at digicool.com Thu Feb 22 22:24:32 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Thu, 22 Feb 2001 16:24:32 -0500 Subject: [Python-Dev] Those import related syntax errors again... References: 
                              
                              <029c01c09cbb$a3e7f8c0$e46940d5@hagrid> <200102221351.IAA15568@cj20424-a.reston1.va.home.com> <20010222162725.A7486@xs4all.nl> <200102222046.PAA16702@cj20424-a.reston1.va.home.com> Message-ID: <14997.33680.580927.514329@anthem.wooz.org> >>>>> "GvR" == Guido van Rossum 
                              writes: >> How should code like Mailman's mm_cfg.py/Defaults.py construct >> be rewritten to provide similar functionality ? Much as I >> dislike 'from module import *', it really does have its uses. GvR> I have no idea what mm_cfg.py/Defaults.py is, but yes, import GvR> * has its uses! Not that it's really that important to the discussion, but the way Mailman lets users override its defaults is by putting all the (autoconf and hardcoded) system defaults in Defaults.py, which the user is never supposed to touch. Then mm_cfg.py does a "from Defaults import *" -- at module level of course -- and users put any overridden values in mm_cfg.py. All Mailman modules that have to reference a system default do so by importing and using mm_cfg. This was Ken's idea, and a darn good one! It's got a wart or two, but they are quite minor. -Barry From fredrik at pythonware.com Thu Feb 22 22:40:09 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Thu, 22 Feb 2001 22:40:09 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: 
                              
<029c01c09cbb$a3e7f8c0$e46940d5@hagrid> <200102221351.IAA15568@cj20424-a.reston1.va.home.com> <20010222162725.A7486@xs4all.nl>
Message-ID: <070101c09d18$093c5a20$e46940d5@hagrid>

Thomas wrote:
> While on the subject: will all of 'from module import *' be deprecated, even at module level ?

hopefully not -- that would break tons of code, instead of just some...

> How should code like Mailman's mm_cfg.py/Defaults.py construct be rewritten to provide similar functionality ? Much as I dislike 'from module import *', it really does have its uses.

how about:

    #
    # mm_config.py

    class config:
        # defaults goes here
        spam = "spam"
        egg = "egg"

    # load user overrides
    import mm_cfg
    config.update(vars(mm_cfg))

    #
    # some_module.py

    from mm_config import config

    print "breakfast:", config.spam, config.egg

Cheers /F
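One wrinkle in the sketch above: a classic class object has no update() method, so the config.update(vars(mm_cfg)) line would raise an AttributeError as written; the attributes have to be set individually (or via the class __dict__), and the module's double-underscore entries should be skipped. A corrected variant under the same assumed module names (mm_config.py holding the defaults, mm_cfg.py holding user overrides):

    #
    # mm_config.py

    class config:
        # defaults go here
        spam = "spam"
        egg = "egg"

    # load user overrides: copy the public names from mm_cfg onto the class
    # (a classic class object has no update() method of its own)
    import mm_cfg
    for _name, _value in vars(mm_cfg).items():
        if not _name.startswith("__"):
            setattr(config, _name, _value)

some_module.py can then do "from mm_config import config" exactly as in the original sketch.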
                              
                              [/F] > If a debate doesn't lead anywhere, it's just a waste of time. If you end up being on the winning side, is it still a waste of time? If you end up being on the losing side of a debate, perhaps, sometimes. But I can't predict the future well enough to know the outcome in advance. > Donning my devil's advocate suite, here are some recent observations: > > - Important decisions are made on internal PythonLabs meetings > (unit testing, the scope issue, etc), not by an organized python- > dev process. Decisions are-- and were --made in Guido's head. Python-Dev was established to give him easier access to higher-quality input than was possible on c.l.py at the time, and to give Python developers a higher S/N place to hang out when discussing Python development. Internal PythonLabs meetings are really much the same, just on a smaller scale and with a higher-still S/N ratio. Both work for those purposes. It isn't-- and wasn't --the purpose of either to strip Guido of the last word. > Does anyone care about -1 and +1's anymore? Did anyone ever <0.5 wink>? A scattering of two-character arguments is interesting to get a quick feel, but even I wouldn't *decide* anything on that basis. If this were an ANSI/ISO committee, a single -1 would have absolute power -- and then we'd still be using Python 0.9.6 (ANSI/ISO committees need soul-crushingly boring and budget-bustingly expensive meetings regularly else consensus would never be reached on anything -- if people get to veto in their spare time while sitting at home, and without opponents blowing spit right in their face for the 18th time in 6 years, there's insufficient pressure *to* compromise). > - The PEP process isn't working ("I updated the PEP and checked > in the code", "but *that* PEP doesn't apply to *me*", etc). Need to define "working". I don't think it's what it should be yet, but is making progress. > - Impressive hacks are more important than concerns from people > who make their living selling Python technology (rather than a > specific application). Codewise, nested scopes are amazing. > From a marketing perspective, it's a disaster. Any marketing droid would believe that Python's current market is a fraction of its potential market, and so welcome any "new feature" that makes new sales easier. c.l.py is a microcosm of this battlefield, and the cry for nested scopes has continued unabated since the day lambda was introduced. I've never met a marketing type (and I've met more than my share ...) who wouldn't seize this as an opportunity to *expand* market share. Sales droids servicing existing accounts *may* grumble -- or the more inventive may take it as an opportunity to drive home the importance of their relationship to their customers ("it's us against them, and boy aren't you glad you've got Amalgamated Pythonistries on your side!"). > (even more absurd allegations snipped) With gratitude, and I'll skip even more absurd rationalizations 
                              
                              . > Am I entirely wrong? Of course not. The world isn't that simple. indeed-the-world-is-heavily-nested
                              
                              -ly y'rs - tim PS: At the internal PythonLabs mtg today, I voted against nested scopes. But also for them. Leaving that to Jeremy to explain. From greg at cosc.canterbury.ac.nz Fri Feb 23 00:21:58 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Fri, 23 Feb 2001 12:21:58 +1300 (NZDT) Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: <200102220145.UAA12690@cj20424-a.reston1.va.home.com> Message-ID: <200102222321.MAA01483@s454.cosc.canterbury.ac.nz> Guido: > Language theorists love [exec]. Really? I'd have thought language theorists would be the ones who hate it, given all the problems it causes... Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg at cosc.canterbury.ac.nz +--------------------------------------+ From guido at digicool.com Fri Feb 23 00:26:05 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 22 Feb 2001 18:26:05 -0500 Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: Your message of "Fri, 23 Feb 2001 12:21:58 +1300." <200102222321.MAA01483@s454.cosc.canterbury.ac.nz> References: <200102222321.MAA01483@s454.cosc.canterbury.ac.nz> Message-ID: <200102222326.SAA18443@cj20424-a.reston1.va.home.com> > Guido: > > > Language theorists love [exec]. > > Really? I'd have thought language theorists would be the ones > who hate it, given all the problems it causes... Depends on where they're coming from. Or maybe I should have said Lisp folks... --Guido van Rossum (home page: http://www.python.org/~guido/) From esr at thyrsus.com Fri Feb 23 01:14:50 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Thu, 22 Feb 2001 19:14:50 -0500 Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: <200102222326.SAA18443@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Thu, Feb 22, 2001 at 06:26:05PM -0500 References: <200102222321.MAA01483@s454.cosc.canterbury.ac.nz> <200102222326.SAA18443@cj20424-a.reston1.va.home.com> Message-ID: <20010222191450.B15506@thyrsus.com> Guido van Rossum 
<guido at digicool.com>
                              : > > > Language theorists love [exec]. > > > > Really? I'd have thought language theorists would be the ones > > who hate it, given all the problems it causes... > > Depends on where they're coming from. Or maybe I should have said > Lisp folks... You are *so* right, Guido! :-) I almost commented about this in reply to Greg's post earlier. Crusty old LISP hackers like me tend to be really attached to being able to (a) lash up S-expressions that happen to be LISP function calls on the fly, and then (b) hand them to eval. "No separation between code and data" is one of the central dogmas of our old-time religion. In languages like Python that are sufficiently benighted to have a distinction between expression and statement syntax, we demand exec as well as eval and are likely to get seriously snotty about the language's completeness if exec is missing. Awkwardly, in such languages exec turns out to be much less useful in practice than it is in theory. In fact, Python has rather forced me to question whether "No separation between code and data" was as important a component of LISP's supernal wonderfulness as I believed when I was a fully fervent member of the cult. Anonymous lambdas are still key, though. ;-) And much cooler now that we have real lexical scoping. -- 
                              Eric S. Raymond I cannot undertake to lay my finger on that article of the Constitution which grant[s] a right to Congress of expending, on objects of benevolence, the money of their constituents. -- James Madison, 1794 From ping at lfw.org Fri Feb 23 03:37:05 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Thu, 22 Feb 2001 18:37:05 -0800 (PST) Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? Message-ID: 
                              
                              Hi all -- i've been reading the enormous thread on nested scopes with some concern, since i would very much like Python to support "proper" lexical scoping, yet i also care about compatibility. There is something missing from my understanding here: - The model is, each environment has a pointer to the enclosing environment, right? - Whenever you can't find what you're looking for, you go up to the next level and keep looking, right? - So what's the issue with not being able to determine which variable binds in which scope? With the model just described, it's perfectly clear. Is all this breakage only caused by the particular optimizations for lookup in the implementation (fast locals, etc.)? Or have i missed something obvious? I could probably go examine the source code of the nested scoping changes to find the answer to my own question, but in case others share this confusion with me, i thought it would be worth asking. * * * Consider for a moment the following simple model of lookup: 1. A scope maps names to objects. 2. Each scope except the topmost also points to a parent scope. 3. To look up a name, first ask the current scope. 4. When lookup fails, go up to the parent scope and keep looking. I believe the above rules are common among many languages and are commonly understood. The only Python-specific parts are then: 5. The current scope is determined by the nearest enclosing 'def'. 6. These statements put a binding into the current scope: assignment (=), def, class, for, except, import And that's all. * * * Given this model, all of the scoping questions that have been raised have completely clear answers: Example I >>> y = 3 >>> def f(): ... print y ... >>> f() 3 Example II >>> y = 3 >>> def f(): ... print y ... y = 1 ... print y ... >>> f() 3 1 >>> y 3 Example III >>> y = 3 >>> def f(): ... exec "y = 2" ... def g(): ... return y ... return g() ... >>> f() 2 Example IV >>> m = open('foo.py', 'w') >>> m.write('x = 1') >>> m.close() >>> def f(): ... x = 3 ... from foo import * ... def g(): ... print x ... g() ... >>> f() 1 In Example II, the model addresses even the current situation that sometimes surprises new users of Python. Examples III and IV are the current issues of contention about nested scopes. * * * It's good to start with a simple model for the user to understand; the implementation can then do funky optimizations under the covers so long as the model is preserved. So for example, if the compiler sees that there is no "import *" or "exec" in a particular scope it can short-circuit the lookup of local variables using fast locals. But the ability of the compiler to make this optimization should only affect performance, not affect the Python language model. The model described above is the approximately the one available in Scheme. It exactly reflects the environment-diagram model of scoping as taught to most Scheme students and i would argue that it is the easiest to explain. Some implementations of Scheme, such as STk, do what is described above. UMB scheme does what Python does now: the use-before-binding of 'y' in Example II would cause an error. I was surprised that these gave different behaviours; it turns out that the Scheme standard actually forbids the use of internal defines not at the beginning of a function body, thus sidestepping the issue. But we can't do this in Python; assignment must be allowed anywhere. Given that internal assignment has to have some meaning, the above meaning makes the most sense to me. 
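For concreteness, rules 1-4 boil down to something like the tiny
sketch below.  It is purely illustrative -- the Scope class and its
method names are made up for this message, not a proposal for how the
interpreter should actually implement lookup:

    class Scope:
        # rule 1: a scope maps names to objects
        # rule 2: every scope except the topmost points to a parent
        def __init__(self, parent=None):
            self.bindings = {}
            self.parent = parent

        def bind(self, name, value):
            # rule 6: assignment, def, class, for, except and import
            # all bind the name in the *current* scope
            self.bindings[name] = value

        def lookup(self, name):
            # rule 3: ask the current scope first
            try:
                return self.bindings[name]
            except KeyError:
                # rule 4: on failure, keep looking in the parent
                if self.parent is not None:
                    return self.parent.lookup(name)
                raise NameError(name)

    module = Scope()                  # rule 5 decides which scope is current
    module.bind('y', 3)
    f_scope = Scope(parent=module)
    f_scope.lookup('y')               # -> 3, found in the enclosing scope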
-- ?!ng From guido at digicool.com Fri Feb 23 03:59:26 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 22 Feb 2001 21:59:26 -0500 Subject: [Python-Dev] Nested scopes resolution -- you can breathe again! In-Reply-To: Your message of "Thu, 22 Feb 2001 16:45:00 EST." 
                              
                              References: 
                              
                              Message-ID: <200102230259.VAA19238@cj20424-a.reston1.va.home.com> We (PythonLabs) have received a lot of flak over our plan to introduce nested scopes despite the fact that it appears to break a small but significant amount of working code. We discussed this at an PythonLabs group meeting today. After the meeting, Tim posted this teaser: > PS: At the internal PythonLabs mtg today, I voted against nested > scopes. But also for them. Leaving that to Jeremy to explain. After the meeting Jeremy had a four hour commute home due to bad weather, so let me do the honors for him. (Jeremy will update the PEP, implement the feature, and update the documentation, in that order.) We have clearly underestimated how much code the nested scopes would break, but more importantly we have underestimated how much value our community places on stability. At the same time we really like nested scopes, and we would like to see the feature introduced at some point. So here's the deal: we'll make nested scopes an optional feature in 2.1, default off, selectable on a per-module basis using a mechanism that's slightly hackish but is guaranteed to be safe. (See below.) At the same time, we'll augment the compiler to detect all situations that will break when nested scopes are introduced in the future, and issue warnings for those situations. The idea here is that warnings don't break code, but encourage folks to fix their code so we can introduce nested scopes in 2.2. Given our current pace of releases that should be about 6 months warning. These warnings are *not* optional -- they are issued regardless of whether you select to use nested scopes. However there is a command line option (crudest form: -Wi) to disable warnings; there are also ways to disable them programmatically. If you want to make sure that you don't ignore the warnings, there's also a way to turn warnings into errors (-We from the command line). How do you select nested scopes? Tim suggested a mechanism that is used by the ANSI C committee to enable language features that are backwards incompatible: they trigger on the import of a specific previously non-existant header file. (E.g. after #include 
<complex.h>
                              , "imaginary" becomes a reserved word.) The Python equivalent of this is a magical import that is recognized by the compiler; this was also proposed by David Scherer for making integer division yield a float. (See http://mail.python.org/pipermail/edu-sig/2000-May/000499.html) You could say that Perl's "use" statement is similar. We haven't decided yet which magical import; two proposals are: import __nested_scopes__ from __future__ import nested_scopes The magical import only affects the source file in which it occurs. It is recognized by the compiler as it is scanning the source code. It must appear at the top-level (no "if" or "try" or "def" or anything else around it) and before any code that could be affected. We realize that PEP 5 specifies a one-year transition period. We believe that that is excessive in this case, and would like to change the PEP to be more flexible. (The PEP has questionable status -- it was never formally discussed.) We also believe that the magical import mechanism is useful enough to be reused for other situations like this; Tim will draft a PEP to describe in excruciating detail. I thank everybody who gave feedback on this issue. And thanks to Jeremy for implementing nested scopes! --Guido van Rossum (home page: http://www.python.org/~guido/) From ping at lfw.org Fri Feb 23 04:16:57 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Thu, 22 Feb 2001 19:16:57 -0800 (PST) Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: 
                              
                              Message-ID: 
                              
                              On Thu, 22 Feb 2001, Ka-Ping Yee wrote: > - So what's the issue with not being able to determine > which variable binds in which scope? With the model > just described, it's perfectly clear. Is all this > breakage only caused by the particular optimizations > for lookup in the implementation (fast locals, etc.)? > Or have i missed something obvious? That was poorly phrased. To clarify, i am making the assumption that the compiler wants each name to be associated with exactly one scope per block in which it appears. 1. Is the assumption true? 2. If so, is this constraint motivated only by lookup optimization? 3. Why enforce this constraint when it would be inconsistent with behaviour that we already have at the top level? If foo.py contains "x = 1", then this works at the top level: >>> if 1: # top level ... x = 3 ... print x ... from foo import * ... def g(): print x ... g() ... 3 1 I am suggesting that it should do exactly the same thing in a function: >>> def f(): # x = 3 inside, no g() ... x = 3 ... print x ... from foo import * ... print x ... >>> f() 3 1 >>> def f(): # x = 3 inside, nested g() ... x = 3 ... print x ... from foo import * ... def g(): print x ... g() ... >>> f() 3 1 >>> x = 3 >>> def f(): # x = 3 outside, nested g() ... print x ... from foo import * ... def g(): print x ... g() ... >>> f() 3 1 (Replacing "from foo import *" above with "x = 1" or "exec('x = 1')" should make no difference. So this isn't just about internal-import-* and exec-without-in, even if we do eventually deprecate internal-import-* and exec-without-in -- which i would tend to support.) Here is a summary of the behaviour i observe and propose. 1.5.2 2.1a1 suggested top level from foo import * 3,1 3,1 3,1 exec('x = 1') 3,1 3,1 3,1 x = 1 3,1 3,1 3,1 x = 3 outside, no g() from foo import * 3,1 3,1 3,1 exec('x = 1') 3,1 3,1 3,1 x = 1 x UnboundLocal 3,1 x = 3 inside, no g() from foo import * 3,1 3,1 3,1 exec('x = 1') 3,1 3,1 3,1 x = 1 x UnboundLocal 3,1 x = 3 outside, nested g() from foo import * 3,3 SyntaxError 3,1 exec('x = 1') 3,3 SyntaxError 3,1 x = 1 x UnboundLocal 3,1 x = 3 inside, nested g() from foo import * 3,x SyntaxError 3,1 exec('x = 1') 3,x SyntaxError 3,1 x = 1 3,x 3,1 3,1 (I don't know what the heck is going on in Python 1.5.2 in the cases where it prints 'x'.) My postulates are: 1. "exec('x = 1')" should behave exactly the same as "x = 1" 2. "from foo import *" should do the same as "x = 1" 3. "def g(): print x" should behave the same as "print x" The testing script is attached. -- ?!ng -------------- next part -------------- import sys file = open('foo.py', 'w') file.write('x = 1') file.close() toplevel = """ x = 3 print x %s def g(): print x g() """ outside = """ x = 3 def f(): print x %s print x f() """ inside = """ x = 3 def f(): print x %s print x f() """ nestedoutside = """ x = 3 def f(): print x %s def g(): print x g() f() """ nestedinside = """ def f(): x = 3 print x %s def g(): print x g() f() """ for template in [toplevel, outside, inside, nestedoutside, nestedinside]: for statement in ["from foo import *", "exec('x = 1')", "x = 1"]: code = template % statement try: exec code in {} except: print sys.exc_value print From tim.one at home.com Fri Feb 23 04:22:54 2001 From: tim.one at home.com (Tim Peters) Date: Thu, 22 Feb 2001 22:22:54 -0500 Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: 
                              
                              Message-ID: 
                              
                              [Ka-Ping Yee] > Hi all -- i've been reading the enormous thread on nested scopes > with some concern, since i would very much like Python to support > "proper" lexical scoping, yet i also care about compatibility. > > There is something missing from my understanding here: > > - The model is, each environment has a pointer to the > enclosing environment, right? The conceptual model, yes, but the implementation isn't like that. > - Whenever you can't find what you're looking for, you > go up to the next level and keep looking, right? Conceptually, yes. No such looping search occurs at runtime, though. > - So what's the issue with not being able to determine > which variable binds in which scope? That determination is done at compile-time, not runtime. In the presence of "exec" and "import *" in some contexts, compile-time determination is stymied and there is no runtime support for a "slow" lookup. Note that the restrictions are *not* against lexical nesting, they're against particular uses of "exec" and "import *" (the latter of which is so muddy the Ref Man said it was undefined a long, long time ago). > ... > It's good to start with a simple model for the user to understand; > the implementation can then do funky optimizations under the covers > so long as the model is preserved. Even locals used to be resolved by dict searches. The entire model there wasn't preserved by the old switch to fast locals either. For example, >>> def f(): ... global i ... exec "i=42\n" ... >>> i = 666 >>> f() >>> i 666 >>> IIRC, in the old days that would print 42. Who cares <0.1 wink>? This is nonsense either way. There are tradeoffs here among: conceptual clarity runtime efficiency implementation complexity rate of cyclic garbage creation Your message favors "conceptual clarity" over all else; the implementation doesn't. Python also limits strings to the size of a platform int <0.9 wink>. > ... > The model described above is the approximately the one available in > Scheme. But note that eval() didn't make it into the Scheme std: they couldn't agree on its semantics or implementation. eval() is *suggested* in the fifth Revised Report, but there has no access to its lexical environment; instead it acts "as if" its argument had appeared at top level "or in some other implementation-dependent environment" (Dybvig; "The Scheme Programming Language"). Dybvig gives an example of one of the competing Scheme eval() proposals gaining access to a local vrbl via using macros to interpolate the local's value into the argument's body before calling eval(). And that's where refusing to compromise leads. utterly-correct-and-virtually-useless-ly y'rs - tim From guido at digicool.com Fri Feb 23 04:31:36 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 22 Feb 2001 22:31:36 -0500 Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: Your message of "Thu, 22 Feb 2001 18:37:05 PST." 
                              
                              References: 
                              
                              Message-ID: <200102230331.WAA21467@cj20424-a.reston1.va.home.com> > Hi all -- i've been reading the enormous thread on nested scopes > with some concern, since i would very much like Python to support > "proper" lexical scoping, yet i also care about compatibility. Note that this is moot now -- see my previous post about how we've decided to resolve this using a magical import to enable nested scopes (in 2.1). > There is something missing from my understanding here: > > - The model is, each environment has a pointer to the > enclosing environment, right? Actually, no. > - Whenever you can't find what you're looking for, you > go up to the next level and keep looking, right? That depends. Our model is inspired by the semantics of locals in Python 2.0 and before, and this all happens at compile time. That means that we must be able to know which names are defined in each scope at compile time. > - So what's the issue with not being able to determine > which variable binds in which scope? With the model > just described, it's perfectly clear. Is all this > breakage only caused by the particular optimizations > for lookup in the implementation (fast locals, etc.)? > Or have i missed something obvious? You call it an optimization, and that's how it started. But since it clearly affects the semantics of the language, it's not really an optimization -- it's a particular semantics that lends itself to more and easy compile-time analysis and hence can be implemented more efficiently, but the corner cases are different, and the language semantics define what should happen, optimization or not. In particular: x = 1 def f(): print x x = 2 raises an UnboundLocalError error at the point of the print statement. Likewise, in the official semantics of nested scopes: x = 1 def f(): def g(): print x g() x = 2 also raises an UnboundLocalError at the print statement. > I could probably go examine the source code of the nested scoping > changes to find the answer to my own question, but in case others > share this confusion with me, i thought it would be worth asking. No need to go to the source -- this is all clearly explained in the PEP (http://python.sourceforge.net/peps/pep-0227.html). > * * * > > Consider for a moment the following simple model of lookup: > > 1. A scope maps names to objects. > > 2. Each scope except the topmost also points to a parent scope. > > 3. To look up a name, first ask the current scope. > > 4. When lookup fails, go up to the parent scope and keep looking. > > I believe the above rules are common among many languages and are > commonly understood. Actually, most languages do all this at compile time. Very early Python versions did do all this at run time, but by the time 1.0 was released, the "locals are locals" rule was firmly in place. You may like the purely dynamic version better, but it's been outlawed long ago. > The only Python-specific parts are then: > > 5. The current scope is determined by the nearest enclosing 'def'. For most purposes, 'class' also creates a scope. > 6. These statements put a binding into the current scope: > assignment (=), def, class, for, except, import > > And that's all. Sure. > * * * > > Given this model, all of the scoping questions that have been > raised have completely clear answers: > > Example I > > >>> y = 3 > >>> def f(): > ... print y > ... > >>> f() > 3 Sure. > Example II > > >>> y = 3 > >>> def f(): > ... print y > ... y = 1 > ... print y > ... > >>> f() > 3 > 1 > >>> y > 3 You didn't try this, did you? 
or do you intend to say that it "should" print this? In fact it raises UnboundLocalError: local variable 'y' referenced before assignment. (Before 2.0 it would raise NameError.) > Example III > > >>> y = 3 > >>> def f(): > ... exec "y = 2" > ... def g(): > ... return y > ... return g() > ... > >>> f() > 2 Wrong again. This prints 3, both without and with nested scopes as defined in 2.1a2. However it raises an exception with the current CVS version: SyntaxError: f: exec or 'import *' makes names ambiguous in nested scope. > Example IV > > >>> m = open('foo.py', 'w') > >>> m.write('x = 1') > >>> m.close() > >>> def f(): > ... x = 3 > ... from foo import * > ... def g(): > ... print x > ... g() > ... > >>> f() > 1 I didn't try this one, but I'm sure that it prints 3 in 2.1a1 and raises the same SyntaxError as above with the current CVS version. > In Example II, the model addresses even the current situation > that sometimes surprises new users of Python. Examples III and IV > are the current issues of contention about nested scopes. > > * * * > > It's good to start with a simple model for the user to understand; > the implementation can then do funky optimizations under the covers > so long as the model is preserved. So for example, if the compiler > sees that there is no "import *" or "exec" in a particular scope it > can short-circuit the lookup of local variables using fast locals. > But the ability of the compiler to make this optimization should only > affect performance, not affect the Python language model. Too late. The semantics have been bent since 1.0 or before. The flow analysis needed to optimize this in such a way that the user can't tell whether this is optimized or not is too hard for the current compiler. The fully dynamic model also allows the user to play all sorts of stupid tricks. And the unoptimized code is so much slower that it's well worth to hve the optimization. > The model described above is the approximately the one available in > Scheme. It exactly reflects the environment-diagram model of scoping > as taught to most Scheme students and i would argue that it is the > easiest to explain. I don't know Scheme, but isn't it supposed to be a compiled language? > Some implementations of Scheme, such as STk, do what is described > above. UMB scheme does what Python does now: the use-before-binding > of 'y' in Example II would cause an error. I was surprised that > these gave different behaviours; it turns out that the Scheme > standard actually forbids the use of internal defines not at the > beginning of a function body, thus sidestepping the issue. I'm not sure how you can say that Scheme sidesteps the issue when you just quote an example where Scheme implementations differ? > But we > can't do this in Python; assignment must be allowed anywhere. > > Given that internal assignment has to have some meaning, the above > meaning makes the most sense to me. Sorry. Sometimes, reality bites. :-) Note that I want to take more of the dynamicism out of function bodies. The reference manual has for a long time outlawed import * inside functions (but the implementation didn't enforce this). I see no good reason to allow this (it's causing a lot of work to happen each time the function is called), and the needs of being able to clearly define what happens with nested scopes make it necessary to outlaw it. I also want to eventually completely outlaw exec without an 'in' clause inside a class or function, and access to local variables through locals() or vars(). 
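For what it's worth, the explicitly-scoped form remains fine, and the
rewrite is usually mechanical.  A minimal sketch, in 2.x exec-statement
syntax (the function and names here are made up purely for illustration):

    def f(x):
        ns = {'x': x}              # pass values in explicitly
        exec "y = x * 2" in ns     # instead of a bare: exec "y = x * 2"
        return ns['y']             # and read results back out explicitly

With the 'in' clause the executed string can only bind names in ns, so
the compiler (and the reader) can still see exactly which names are
local to f.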
I'm not sure yet about exec without an 'in' clause at the global level, but I'm tempted to think that even there it's not much use. We'll start with warnings for some of these cases in 2.1. I see that Tim posted another rebuttal, explaining better than I do here *why* Ping's "simple" model is not good for Python, so I'll stop now. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Fri Feb 23 04:36:08 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 22 Feb 2001 22:36:08 -0500 Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: Your message of "Thu, 22 Feb 2001 19:16:57 PST." 
                              
                              References: 
                              
                              Message-ID: <200102230336.WAA21493@cj20424-a.reston1.va.home.com> > 1. "exec('x = 1')" should behave exactly the same as "x = 1" Sorry, no go. This just isn't a useful feature. > 2. "from foo import *" should do the same as "x = 1" But it is limiting because it hides information from the compiler, and hence it is outlawed when it gets in the way of the compiler. > 3. "def g(): print x" should behave the same as "print x" Huh? again. Defining a function does't call it. Python has always adhered to the principle that the context where a function is defined determines its context, not where it is called. --Guido van Rossum (home page: http://www.python.org/~guido/) From jeremy at alum.mit.edu Fri Feb 23 04:00:07 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Thu, 22 Feb 2001 22:00:07 -0500 (EST) Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: 
                              
                              References: 
                              
                              
                              Message-ID: <14997.53815.769191.239591@w221.z064000254.bwi-md.dsl.cnc.net> I think the issue that you didn't address is that lexical scoping is a compile-time issue, and that in most languages that variable names that a program uses are a static property of the code. Off the top of my head, I can't think of another lexically scoped language that allows an exec or eval to create a new variable binding that can later be used via a plain-old reference. One of the reasons I am strongly in favor of making import * and exec errors is that it stymies the efforts of a reader to understand the code. Lexical scoping is fairly clear because you can figure out what binding a reference will use by reading the text. (As opposed to dynamic scoping where you have to think about all the possible call stacks in which the function might end up.) With bare exec and import *, the reader of the code doesn't have any obvious indicator of what names are being bound. This is why I consider it bad form and presumably part of the reason that the language references outlaws it. (But not at the module scope, since practicality beats purity.) If we look at your examples: >>> def f(): # x = 3 inside, no g() ... x = 3 ... print x ... from foo import * ... print x ... >>> f() 3 1 >>> def f(): # x = 3 inside, nested g() ... x = 3 ... print x ... from foo import * ... def g(): print x ... g() ... >>> f() 3 1 >>> x = 3 >>> def f(): # x = 3 outside, nested g() ... print x ... from foo import * ... def g(): print x ... g() ... >>> f() 3 1 In these examples, it isn't at all obvious to the reader of the code whether the module foo contains a binding for x or whether the programmer intended to import that name and stomp on the exist definition. Another key difference between Scheme and Python is that in Scheme, each binding operation creates a new scope. The Scheme equivalent of this Python code -- def f(x): y = x + 1 ... y = x + 2 ... -- would presumably be something like this -- (define (f x) (let ((y (+ x 1))) ... (let (y (+ x 2))) ... )) Python is a whole different beast because it supports multiple assignments to a name within a single scope. In Scheme, every binding of a name via lambda introduces a new scope. This is the reason that the example -- x = 3 def f(): print x x = 2 print x -- raises an error rather than printing '3\n2\n'. Jeremy From jeremy at alum.mit.edu Fri Feb 23 04:15:39 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Thu, 22 Feb 2001 22:15:39 -0500 (EST) Subject: [Python-Dev] Nested scopes resolution -- you can breathe again! In-Reply-To: <200102230259.VAA19238@cj20424-a.reston1.va.home.com> References: 
                              
                              <200102230259.VAA19238@cj20424-a.reston1.va.home.com> Message-ID: <14997.54747.56767.641188@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "GvR" == Guido van Rossum 
<guido at digicool.com>
                              writes: GvR> The Python equivalent of this is a magical import that is GvR> recognized by the compiler; this was also proposed by David GvR> Scherer for making integer division yield a float. (See GvR> http://mail.python.org/pipermail/edu-sig/2000-May/000499.html) GvR> You could say that Perl's "use" statement is similar. GvR> We haven't decided yet which magical import; two proposals are: GvR> import __nested_scopes__ from __future__ import GvR> nested_scopes GvR> The magical import only affects the source file in which it GvR> occurs. It is recognized by the compiler as it is scanning the GvR> source code. It must appear at the top-level (no "if" or "try" GvR> or "def" or anything else around it) and before any code that GvR> could be affected. We'll need to write a short PEP describing this approach and offering some guidance about how frequently we intend to use it. I think few of us would be interested in making frequent use of it to add all sorts of variant language features. Rather, I imagine it would be used only -- or primarily -- to introduce new features that will become standard at some point. GvR> We also believe that the magical import mechanism is useful GvR> enough to be reused for other situations like this; Tim will GvR> draft a PEP to describe in excruciating detail. I'm happy to hear that Tim will draft this PEP. He didn't mention it at lunch today or I would have given him a big hug (or bought him a Coke). As Tim knows, I think the PEP needs to say something about whether these magic imports create name bindings and what objects are bound to the names. Will we need an __nested_scopes__.py in the Lib directory? Jeremy From barry at digicool.com Fri Feb 23 06:04:32 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 23 Feb 2001 00:04:32 -0500 Subject: [Python-Dev] compile leaks memory. lots of memory. References: <200102221757.SAA17087@pandora> Message-ID: <14997.61280.57003.582965@anthem.wooz.org> >>>>> "MvL" == Martin von Loewis 
                              
                              writes: MvL> The real problem comes from st_cur never being released, MvL> AFAICT. There is a DECREF in symtable_exit_scope, but that MvL> function is not called in the test case - MvL> symtable_enter_scope is called. For symmetry reasons, it MvL> appears that there should be a call to symtable_exit_scope of MvL> the global scope somewhere (which apparently is build in MvL> symtable_build). I can't figure how what the correct place MvL> for that call would be, though. Martin, I believe you've hit the nail on the head. My latest Insure run backs this theory up. It even claims that st_cur is lost by the de-allocation of st in PySymtable_Free(). I'm betting that Jeremy will be able to quickly figure out where the missing frees are when I send him the Insure report. -Barry From tim.one at home.com Fri Feb 23 06:30:27 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 00:30:27 -0500 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: <14997.54747.56767.641188@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: 
                              
                              [Guido] > We also believe that the magical import mechanism is useful > enough to be reused for other situations like this; Tim will > draft a PEP to describe in excruciating detail. [Jeremy Hylton] > ... > I'm happy to hear that Tim will draft this PEP. He didn't mention it > at lunch today or I would have given him a big hug (or bought him a > Coke). Guido's msg was the first I heard of it too. I think this is the same process by which I got assigned to change Windows imports: the issue came up, and I opened my mouth <-0.9 wink>. > As Tim knows, I think the PEP needs to say something about whether > these magic imports create name bindings and what objects are > bound to the names. > > Will we need an __nested_scopes__.py in the Lib directory? Offhand, I suggest to create a real Lib/__future__.py, and let import code get generated as always. The purpose of __future__.py is to record release info in an *obvious* place to look for it (BTW, best I can tell, sys.version isn't documented anywhere, so this serves that purpose too 
                              
                              ): ------------------------------------------------------------------ """__future__: Record of phased-in incompatible language changes. Each line is of the form: FeatureName = ReleaseInfo ReleaseInfo is a pair of the form: (OptionalRelease, MandatoryRelease) where, normally, OptionalRelease <= MandatoryRelease, and both are 5-tuples of the same form as sys.version_info: (PY_MAJOR_VERSION, # the 2 in 2.1.0a3; an int PY_MINOR_VERSION, # the 1; an int PY_MICRO_VERSION, # the 0; an int PY_RELEASE_LEVEL, # "alpha", "beta", "candidate" or "final"; string PY_RELEASE_SERIAL # the 3; an int ) In the case of MandatoryReleases that have not yet occurred, MandatoryRelease predicts the release in which the feature will become a permanent part of the language. Else MandatoryRelease records when the feature became a permanent part of the language; in releases at or after that, modules no longer need from __future__ import FeatureName to use the feature in question, but may continue to use such imports. In releases before OptionalRelease, an import from __future__ of FeatureName will raise an exception. MandatoryRelease may also be None, meaning that a planned feature got dropped. No line is ever to be deleted from this file. """ nested_scopes = (2, 1, 0, "beta", 1), (2, 2, 0, "final", 0) ----------------------------------------------------------------- While this is 100% intended to serve a documentation purpose, I also intend to use it in my own code, like so (none of which is special to the compiler except for the first line): from __future__ import nested_scopes import sys assert sys.version_info < nested_scopes[1], "delete this section!" # Note that the assert above also triggers if MandatoryRelease is None, # i.e. if the feature got dropped (under 2.1 rules, None is smaller than # anything else 
                              
                              ). del sys, nested_scopes Other rules: # Legal only at module scope, before all non-comment occurrences of # name, and only when name is known to the compiler. from __future__ import name # Ditto. name2 has no special meaning. from __future__ import name as name2 The purpose of the next two is to allow programmatic manipulation of the info in __future__.py (generate help msgs, build a histogram of adoption dates for incompatible changes by decade over the previous two centuries, whatever). # Legal anywhere, but no special meaning. import __future__ import __future__ as name From tim.one at home.com Fri Feb 23 06:34:19 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 00:34:19 -0500 Subject: [Python-Dev] Nested scopes resolution -- you can breathe again! In-Reply-To: <14997.54747.56767.641188@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: 
                              
                              [Jeremy] > ... > I think few of us would be interested in making frequent use of it > to add all sorts of variant language features. Rather, I imagine > it would be used only -- or primarily -- to introduce new features > that will become standard at some point. In my view, __future__ is *only* for the latter. Somebody who wants to write a PEP for an analogous scheme keying off, say, __jerking_off__, is welcome to do so, but anything else would be a 2.2 PEP at best. from-__jerking_off__-import-curly_braces-ly y'rs - tim From tim.one at home.com Fri Feb 23 06:37:32 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 00:37:32 -0500 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: 
                              
                              Message-ID: 
                              
                              [TIm] >(BTW, best I can tell, sys.version isn't documented anywhere, so > this serves that purpose too 
                              
                              ). Wow. Averaging two errors per line! I meant sys.version_info, and it's documented in the obvious place. error-free-at-laat!-ly y'rs - itm From pf at artcom-gmbh.de Fri Feb 23 08:27:28 2001 From: pf at artcom-gmbh.de (Peter Funk) Date: Fri, 23 Feb 2001 08:27:28 +0100 (MET) Subject: [Python-Dev] Please use __progress__ instead of __future__ (was Other situations like this) In-Reply-To: 
                              
                              from Tim Peters at "Feb 23, 2001 0:30:27 am" Message-ID: 
                              
                              Hi, Tim Peters: [...] > Offhand, I suggest to create a real Lib/__future__.py, and let import code > get generated as always. The purpose of __future__.py is to record release > info in an *obvious* place to look for it [...] I believe __future__ is a bad name. What appears today as the bright shining future will be the distant dusty past of tomorrow. But the name of the module is not going to change anytime soon. right? Please call it __progress__ or __history__ or even __python_history__ or come up with some other name. What about __python_bloat__ ? 
                              
                              . In my experience of computing it is a really bad idea to call anything 'new', 'old', 'future', '2000' or some such because those names last much longer than you would have believed at the time the name was choosen. Regards, Peter -- Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260 office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen) From tim.one at home.com Fri Feb 23 09:24:48 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 03:24:48 -0500 Subject: [Python-Dev] RE: Please use __progress__ instead of __future__ (was Other situations like this) In-Reply-To: 
                              
                              Message-ID: 
                              
                              [Peter Funk] > I believe __future__ is a bad name. What appears today as the bright > shining future will be the distant dusty past of tomorrow. But the > name of the module is not going to change anytime soon. right? The name of what module? Any statement of the form from __future__ import shiny becomes unnecessary as soon as shiny's future arrives, at which point the statement can be removed. The statement is necessary only so long as shiny *is* in the future. So the name is thoroughly appropriate. > Please call it __progress__ or __history__ or even __python_history__ > or come up with some other name. Sorry, but none of those make any sense given the intended use. It's not a part of Python 2.1 "history" that nested scopes won't be the default before 2.2! > What about __python_bloat__ ? > 
                              
                              . *That* one makes some sense. > In my experience of computing it is a really bad idea to call anything > 'new', 'old', 'future', '2000' or some such because those names last much > longer than you would have believed at the time the name was choosen. The purpose of __future__ is to supply a means to try out future incompatible extensions before they become the default. The set of future extensions will change from release to release, but that they *are* in the future remains invariant even if Python goes on until universal heat death. Given the rules I already posted, it will be very easy to write a Python tool to identify obsolete __future__ imports and remove them (if you want). From mikael at isy.liu.se Fri Feb 23 10:41:12 2001 From: mikael at isy.liu.se (Mikael Olofsson) Date: Fri, 23 Feb 2001 10:41:12 +0100 (MET) Subject: [Python-Dev] RE: Nested scopes resolution -- you can breathe again! In-Reply-To: <200102230259.VAA19238@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              On 23-Feb-01 Guido van Rossum wrote: > from __future__ import nested_scopes There really is a time machine. So I guess I can get the full Python 3k functionality by doing from __future__ import * /Mikael ----------------------------------------------------------------------- E-Mail: Mikael Olofsson 
                              
                              WWW: http://www.dtr.isy.liu.se/dtr/staff/mikael Phone: +46 - (0)13 - 28 1343 Telefax: +46 - (0)13 - 28 1339 Date: 23-Feb-01 Time: 10:39:52 /"\ \ / ASCII Ribbon Campaign X Against HTML Mail / \ This message was sent by XF-Mail. ----------------------------------------------------------------------- From moshez at zadka.site.co.il Fri Feb 23 10:52:45 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Fri, 23 Feb 2001 11:52:45 +0200 (IST) Subject: [Python-Dev] RE: Nested scopes resolution -- you can breathe again! In-Reply-To: 
                              
                              References: 
                              
                              Message-ID: <20010223095245.A69E2A840@darjeeling.zadka.site.co.il> On Fri, 23 Feb 2001, Mikael Olofsson 
                              
                              wrote: > There really is a time machine. So I guess I can get the full Python 3k > functionality by doing > > from __future__ import * In Py3K from import * will be illegal, so this will unfortunately blow up the minute the "import_star_bad" is imported. You'll just have to try them one by one... -- "I'll be ex-DPL soon anyway so I'm |LUKE: Is Perl better than Python? looking for someplace else to grab power."|YODA: No...no... no. Quicker, -- Wichert Akkerman (on debian-private)| easier, more seductive. For public key, finger moshez at debian.org |http://www.{python,debian,gnu}.org From mikael at isy.liu.se Fri Feb 23 11:21:06 2001 From: mikael at isy.liu.se (Mikael Olofsson) Date: Fri, 23 Feb 2001 11:21:06 +0100 (MET) Subject: [Python-Dev] RE: Nested scopes resolution -- you can breathe In-Reply-To: <01c301c5198d$c6bcc3f0$0900a8c0@SPIFF> Message-ID: 
                              
                              On 23-Feb-05 Fredrik Lundh wrote: > Mikael Olofsson wrote: > > from __future__ import * > > I wouldn't do that: it imports both "warnings_are_errors" and > "from_import_star_is_evil", and we've found that it's impossible > to catch ParadoxErrors in a platform independent way. Naturally. More seriously though, I like from __future__ import something as an idiom. It gives us a clear reusable syntax to incorporate new features before they are included in the standard distribution. It is not obvious to me that the proposed alternative import __something__ is a way to incorporate something new. Perhaps Py3k should allow from __past__ import something to give us a way to keep some functionality from 2.* that has been (will be) changed in Py3k. explicit-is-better-than-implicit-ly y'rs /Mikael ----------------------------------------------------------------------- E-Mail: Mikael Olofsson 
                              
                              WWW: http://www.dtr.isy.liu.se/dtr/staff/mikael Phone: +46 - (0)13 - 28 1343 Telefax: +46 - (0)13 - 28 1339 Date: 23-Feb-01 Time: 11:07:11 /"\ \ / ASCII Ribbon Campaign X Against HTML Mail / \ This message was sent by XF-Mail. ----------------------------------------------------------------------- From guido at digicool.com Fri Feb 23 13:28:17 2001 From: guido at digicool.com (Guido van Rossum) Date: Fri, 23 Feb 2001 07:28:17 -0500 Subject: [Python-Dev] Re: Other situations like this In-Reply-To: Your message of "Fri, 23 Feb 2001 00:30:27 EST." 
                              
                              References: 
                              
                              Message-ID: <200102231228.HAA23466@cj20424-a.reston1.va.home.com> > [Guido] > > We also believe that the magical import mechanism is useful > > enough to be reused for other situations like this; Tim will > > draft a PEP to describe in excruciating detail. > > [Jeremy Hylton] > > ... > > I'm happy to hear that Tim will draft this PEP. He didn't mention it > > at lunch today or I would have given him a big hug (or bought him a > > Coke). > > Guido's msg was the first I heard of it too. I think this is the same > process by which I got assigned to change Windows imports: the issue came > up, and I opened my mouth <-0.9 wink>. Oops. I swear I heard you offer to write it. I guess all you said was that it should be written. Oh well. Somebody will write it. :-) Looks like Tim's proposed __future__.py is in good shape already. --Guido van Rossum (home page: http://www.python.org/~guido/) From pedroni at inf.ethz.ch Fri Feb 23 13:42:11 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Fri, 23 Feb 2001 13:42:11 +0100 (MET) Subject: [Python-Dev] nested scopes: I'm glad (+excuses) Message-ID: <200102231242.NAA27564@core.inf.ethz.ch> Hi. I'm really glad that the holy war has come to an end, and that a technical solution has been found. This was my first debate here and I have said few wise things, more stupid ones and some violent or unfair: my excuses go to Jeremy, Guido and the biz mind (in some of us) that make money out of software (nobody can predict how he will make his living ;)) I'm glad that we have nested scopes, a transition syntax and path and no new keyword (no irony in the latter). Cheers, Samuele. From ping at lfw.org Fri Feb 23 14:23:42 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Fri, 23 Feb 2001 05:23:42 -0800 (PST) Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: 
                              
                              Message-ID: 
                              
                              On Thu, 22 Feb 2001, Tim Peters wrote: > That determination is done at compile-time, not runtime. In the presence of > "exec" and "import *" in some contexts, compile-time determination is > stymied and there is no runtime support for a "slow" lookup. Would the existence of said runtime support hurt anybody? Don't we already do slow lookup in some situations anyway? > Note that the restrictions are *not* against lexical nesting, they're > against particular uses of "exec" and "import *" (the latter of which is so > muddy the Ref Man said it was undefined a long, long time ago). (To want to *take away* the ability to do import-* at all, in order to protect programmers from their own bad habits, is a different argument. I think we all already agree that it's bad form. But the recent clamour has shown that we can't take it away just yet.) > There are tradeoffs here among: > > conceptual clarity > runtime efficiency > implementation complexity > rate of cyclic garbage creation > > Your message favors "conceptual clarity" over all else; the implementation > doesn't. Python also limits strings to the size of a platform int <0.9 > wink>. Yes, i do think conceptual clarity is important. The way Python leans towards conceptual simplicity is a big part of its success, i believe. The less there is for someone to fit into their brain, the less time they can spend worrying about how the language will behave and the more they can focus on getting the job done. And i don't think we have to sacrifice much of the others to do it. In fact, often conceptual clarity leads to a simpler implementation, and sometimes even a faster implementation. Now i haven't actually done the implementation so i can't tell you whether it will be faster, but it seems to me that it's likely to be simpler and could stand a chance of being faster. -- ?!ng "The only `intuitive' interface is the nipple. After that, it's all learned." -- Bruce Ediger, on user interfaces From ping at lfw.org Fri Feb 23 14:15:07 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Fri, 23 Feb 2001 05:15:07 -0800 (PST) Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: <14997.53815.769191.239591@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: 
                              
                              On Thu, 22 Feb 2001, Jeremy Hylton wrote: > I can't think of another lexically scoped language that > allows an exec or eval to create a new variable binding that can later > be used via a plain-old reference. I tried STk Scheme, guile, and elisp, and they all do this. > One of the reasons I am strongly in favor of making import * and exec > errors is that it stymies the efforts of a reader to understand the > code. Yes, i look forward to the day when no one will ever use import-* any more. I can see good reasons to discourage the use of import-* and bare-exec in general anywhere. But as long as they *do* have a meaning, they had better mean the same thing at the top level as internally. > If we look at your examples: > >>> def f(): # x = 3 inside, no g() [...] > >>> def f(): # x = 3 inside, nested g() [...] > >>> def f(): # x = 3 outside, nested g() > > In these examples, it isn't at all obvious to the reader of the code > whether the module foo contains a binding for x or whether the > programmer intended to import that name and stomp on the exist > definition. It's perfectly clear -- since we expect the reader to understand what happens when we do exactly the same thing at the top level. > Another key difference between Scheme and Python is that in Scheme, > each binding operation creates a new scope. Scheme separates 'define' and 'set!', while Python only has '='. In Scheme, multiple defines rebind variables: (define a 1) (define a 2) (define a 3) just as in Python, multiple assignments rebind variables: a = 1 a = 2 a = 3 The lack of 'set!' prevents Python from rebinding variables outside of the local scope, but it doesn't prevent Python from being otherwise consistent and having "a = 2" do the same thing inside or outside of a function: it binds a name in the current scope. -- ?!ng "The only `intuitive' interface is the nipple. After that, it's all learned." -- Bruce Ediger, on user interfaces From ping at lfw.org Fri Feb 23 12:51:19 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Fri, 23 Feb 2001 03:51:19 -0800 (PST) Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: <200102230336.WAA21493@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              On Thu, 22 Feb 2001, Guido van Rossum wrote: > > 1. "exec('x = 1')" should behave exactly the same as "x = 1" > > Sorry, no go. This just isn't a useful feature. It's not a "feature" as in "something to be added to the language". It's a consistent definition of "exec" that simplifies understanding. Without it, how do you explain what "exec" does? > > 2. "from foo import *" should do the same as "x = 1" > > But it is limiting because it hides information from the compiler, and > hence it is outlawed when it gets in the way of the compiler. Again, consistency simplifies understanding. What it "gets in the way of" is a particular optimization; it doesn't make compilation impossible. The language reference says that import binds a name in the local namespace. That means "import x" has to do the same thing as "x = 1" for some value of 1. "from foo import *" binds several names in the local scope, and so if x is bound in module foo, it should do the same thing as "x = 1" for some value of 1. When "from foo import *" makes it impossible to know at compile-time what bindings will be added to the current scope, we just do normal name lookup for that scope. No big deal. It already works that way at module scope; why should this be any different? With this simplification, there can be a single scope chain: builtins <- module <- function <- nested-function <- ... and all scopes can be treated the same. The implementation could probably be both simpler and faster! Simpler, because we don't have to have separate cases for builtins, local, and global; and faster, because some of the optimizations we currently do for locals could be made to apply at all levels. Imagine "fast globals"! And imagine getting them essentially for free. > > 3. "def g(): print x" should behave the same as "print x" > > Huh? again. Defining a function does't call it. Duh, obviously i meant 3. "def g(): print x" immediately followed by "g()" should behave the same as "print x" Do you agree with this principle, at least? > Python has always > adhered to the principle that the context where a function is defined > determines its context, not where it is called. Absolutely agreed. I've never intended to contradict this. This is the foundation of lexical scoping. -- ?!ng "Don't worry about people stealing an idea. If it's original, you'll have to jam it down their throats." -- Howard Aiken From ping at lfw.org Fri Feb 23 13:32:59 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Fri, 23 Feb 2001 04:32:59 -0800 (PST) Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: <200102230331.WAA21467@cj20424-a.reston1.va.home.com> Message-ID: 
                              
On Thu, 22 Feb 2001, Guido van Rossum wrote:
> Note that this is moot now -- see my previous post about how we've
> decided to resolve this using a magical import to enable nested scopes
> (in 2.1).

Yes, yes.  It seems like a good answer for now -- indeed, some sort of
mechanism for selecting compilation options has been requested before.
But we still need to eventually have a coherent answer.  The chart in
my other message doesn't look coherent to me -- it would take too long
to explain all of the cases to someone.

I deserve a smack on the head for my confusion at seeing 'x' printed
out -- that happens to be the value of the NameError in 1.5.2.
Here is an updated chart (updated test script is attached):

                                 1.5.2         2.1a2         suggested
    toplevel
      with print x
        from foo import *        3 1           3 1           3 1
        exec('x = 1')            3 1           3 1           3 1
        x = 1                    3 1           3 1           3 1
      with g()
        from foo import *        3 1           3 1           3 1
        exec('x = 1')            3 1           3 1           3 1
        x = 1                    3 1           3 1           3 1

    x = 3 outside f()
      with print x
        from foo import *        3 1           3 1           3 1
        exec('x = 1')            3 1           3 1           3 1
        x = 1                    NameError     UnboundLocal  3 1
      with g()
        from foo import *        3 3           SyntaxError   3 1
        exec('x = 1')            3 3           SyntaxError   3 1
        x = 1                    NameError     UnboundLocal  3 1

    x = 3 inside f()
      with print x
        from foo import *        3 1           3 1           3 1
        exec('x = 1')            3 1           3 1           3 1
        x = 1                    3 1           3 1           3 1
      with g()
        from foo import *        NameError     SyntaxError   3 1
        exec('x = 1')            NameError     SyntaxError   3 1
        x = 1                    3 1           3 1           3 1

You can see that the situation in 1.5.2 is pretty messy -- and it's
precisely the inconsistent cases that have historically caused
confusion.  2.1a2 is better but it still has exceptional cases --
just the cases people seem to be complaining about now.

> > There is something missing from my understanding here:
> >
> >   - The model is, each environment has a pointer to the
> >     enclosing environment, right?
>
> Actually, no.

I'm talking about the model, not the implementation.  I'm advocating
that we think *first* about what the programmer (the Python user) has
to worry about.  I think that's a Pythonic perspective, isn't it?
Or are you really saying that this isn't even the model that the user
should be thinking about?

> >   - Whenever you can't find what you're looking for, you
> >     go up to the next level and keep looking, right?
>
> That depends.  Our model is inspired by the semantics of locals in
> Python 2.0 and before, and this all happens at compile time.

Well, can we nail down what you mean by "depends"?  What reasoning
process should the Python programmer go through to predict the
behaviour of a given program?

> In particular:
>
>     x = 1
>     def f():
>         print x
>         x = 2
>
> raises an UnboundLocalError error at the point of the print

I've been getting the impression that people consider this a language
wart (or at least a little unfortunate, as it tends to confuse people).
It's a frequently asked question, and when i've had to explain it to
people they usually grumble.  As others have pointed out, it can be
pretty surprising when the assignment happens much later in the body.

I think if you asked most people what this would do, they would
expect 1.  Why?  Because they think about programming in terms of
some simple invariants, e.g.:

  - Editing part of a block doesn't affect the behaviour of the
    block up to the point where you made the change.

  - When you move some code into a function and then call the
    function, that code still works the same.

This kind of backwards-action-at-a-distance breaks the first invariant.
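To see that first invariant in action (this only restates, in one
place, behaviour already shown above; f and y are throwaway names):

    # Top level: the earlier use of x is unaffected by the rebinding
    # two lines below it.
    x = 1
    y = x        # y gets 1
    x = 2

    # Function body: adding the assignment at the end retroactively
    # makes x local to f, so the earlier use changes meaning.
    def f():
        y = x    # now raises UnboundLocalError when f() is called
        x = 2    # this later edit broke the line above it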
Lexical scoping is good largely because it helps preserve the second invariant (the function carries the context of where it was defined). And so on. > No need to go to the source -- this is all clearly explained in the > PEP (http://python.sourceforge.net/peps/pep-0227.html). It seems not to be that simple, because i was unable to predict what situations would be problematic without understanding how the optimizations are implemented. * * * > > 5. The current scope is determined by the nearest enclosing 'def'. > > For most purposes, 'class' also creates a scope. Sorry, i should have written: 5. The parent scope is determined by the nearest enclosing 'def'. * * * > > Given this model, all of the scoping questions that have been > > raised have completely clear answers: > > > > Example I [...] > > Example II > You didn't try this, did you? [...] > > Example III > Wrong again. [...] > > Example IV > I didn't try this one, but I'm sure that it prints 3 in 2.1a1 and > raises the same SyntaxError as above with the current CVS version. I know that. I introduced these examples with "given this model..." to indicate that i'm describing what the "completely clear answers" are. The chart above tries to summarize all of the current behaviour. > > But the ability of the compiler to make this optimization should only > > affect performance, not affect the Python language model. > > Too late. The semantics have been bent since 1.0 or before. I think it's better to try to bend them as little as possible -- and if it's possible to unbend them to make the language easier to understand, all the better. Since we're changing the behaviour now, this is a good opportunity to make sure the model is simple. > > The model described above [...] > > exactly reflects the environment-diagram model of scoping > > as taught to most Scheme students and i would argue that it is the > > easiest to explain. > > I don't know Scheme, but isn't it supposed to be a compiled language? That's not the point. There is a scoping model that is straightforward and easy to understand, and regardless of whether the implementation is interpreted or compiled, you can easily predict what a given piece of code is going to do. > I'm not sure how you can say that Scheme sidesteps the issue when you > just quote an example where Scheme implementations differ? That's what i'm saying. The standard sidesteps (i.e. doesn't specify how to handle) the issue, so the implementations differ. I don't think we have the option of avoiding the issue; we should have a clear position on it. (And that position should be as simple to explain as we can make it.) > I see that Tim posted another rebuttal, explaining better than I do > here *why* Ping's "simple" model is not good for Python, so I'll stop > now. Let's get a complete specification of the model then. And can i ask you to clarify your position: did you put quotation marks around "simpler" because you disagree that the model i suggest is simpler and easier to understand; or did you agree that it was simpler but felt it was worth compromising that simplicity for other benefits? And if the latter, are the other benefits purely about enabling optimizations in the implementation, or other things as well? 
Thanks,

-- ?!ng

-------------- next part --------------
import sys

file = open('foo.py', 'w')
file.write('x = 1')
file.close()

toplevel = """
x = 3
print x,
%s
%s
%s
"""

outside = """
x = 3
def f():
    print x,
    %s
    %s
    %s
f()
"""

inside = """
def f():
    x = 3
    print x,
    %s
    %s
    %s
f()
"""

for template in [toplevel, outside, inside]:
    for print1, print2 in [('print x', ''), ('def g(): print x', 'g()')]:
        for statement in ['from foo import *', 'exec("x = 1")', 'x = 1']:
            code = template % (statement, print1, print2)
            # print code
            try:
                exec code in {}
            except:
                print sys.exc_type, sys.exc_value
            print

From guido at digicool.com  Fri Feb 23 14:58:59 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 23 Feb 2001 08:58:59 -0500
Subject: [Python-Dev] nested scopes: I'm glad (+excuses)
In-Reply-To: Your message of "Fri, 23 Feb 2001 13:42:11 +0100." <200102231242.NAA27564@core.inf.ethz.ch>
References: <200102231242.NAA27564@core.inf.ethz.ch>
Message-ID: <200102231358.IAA23816@cj20424-a.reston1.va.home.com>

> Hi.
>
> I'm really glad that the holy war has come to an end, and that a technical
> solution has been found.

Not as glad as I am, Samuele!

> This was my first debate here and I have said few wise things, more stupid
> ones and some violent or unfair: my excuses go to Jeremy, Guido
> and the biz mind (in some of us) that make money out of software
> (nobody can predict how he will make his living ;))

It wasn't my first debate (:-), but I feel the same way!

> I'm glad that we have nested scopes, a transition syntax and path
> and no new keyword (no irony in the latter).

Me too.

> Cheers, Samuele.

Hope to hear from you more, Samuele!  How's the Jython port of nested scopes coming?

--Guido van Rossum (home page: http://www.python.org/~guido/)

From nas at arctrix.com  Fri Feb 23 15:36:51 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Fri, 23 Feb 2001 06:36:51 -0800
Subject: [Python-Dev] Nested scopes resolution -- you can breathe again!
In-Reply-To: <200102230259.VAA19238@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Thu, Feb 22, 2001 at 09:59:26PM -0500
References:
                              
<200102230259.VAA19238@cj20424-a.reston1.va.home.com>
Message-ID: <20010223063651.B23270@glacier.fnational.com>

On Thu, Feb 22, 2001 at 09:59:26PM -0500, Guido van Rossum wrote:
> from __future__ import nested_scopes

I like this alternative better since there is only one "reserved" module name.  I still think releasing 2.0.1 with warnings is a good idea.  OTOH, maybe it's hard for that compiler to detect questionable code.

  Neil

From guido at digicool.com  Fri Feb 23 15:42:12 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 23 Feb 2001 09:42:12 -0500
Subject: [Python-Dev] Nested scopes resolution -- you can breathe again!
In-Reply-To: Your message of "Fri, 23 Feb 2001 06:36:51 PST." <20010223063651.B23270@glacier.fnational.com>
References:
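
A small sketch of what the transitional directive buys once it is in place (spelling as proposed in this thread; the exact rules were still being settled at this point):

    from __future__ import nested_scopes   # 2.1 transitional directive

    def make_adder(n):
        def add(x):
            return x + n    # with nested scopes, n is found in make_adder's
        return add          # scope instead of needing a default-arg trick

    add3 = make_adder(3)
    print add3(4)           # 7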
                              
                              <200102230259.VAA19238@cj20424-a.reston1.va.home.com> <20010223063651.B23270@glacier.fnational.com> Message-ID: <200102231442.JAA24227@cj20424-a.reston1.va.home.com> > > from __future__ import nested_scopes > > I this this alternative better since there is only one "reserved" > module name. Noted. > I still think releasing 2.0.1 with warnings is a > good idea. OTOH, maybe its hard for that compiler to detect > questionable code. The problem is that in order to do a decent job of compile-time warnings, not only the warnings module and API would have to be retrofitted in 2.0.1, but also the entire new compiler, which has the symbol table needed to be able to detect the situations we want to warn about. --Guido van Rossum (home page: http://www.python.org/~guido/) From barry at digicool.com Fri Feb 23 16:01:43 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 23 Feb 2001 10:01:43 -0500 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) References: <14997.54747.56767.641188@w221.z064000254.bwi-md.dsl.cnc.net> 
                              
                              Message-ID: <14998.31575.97664.422182@anthem.wooz.org> Excellent, Tim! Let's PEP this sucker. The only suggestion I was going to make was to use sys.hexversion instead of sys.version_info. Something about tuples-of-tuples kind of bugged me. But after composing the response to suggest this, I looked at it closely, and decided that sys.version_info is right after all. Both are equally comparable and sys.version_info is more "human friendly". -Barry From thomas at xs4all.net Fri Feb 23 16:04:47 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Fri, 23 Feb 2001 16:04:47 +0100 Subject: [Python-Dev] nested scopes: I'm glad (+excuses) In-Reply-To: <200102231242.NAA27564@core.inf.ethz.ch>; from pedroni@inf.ethz.ch on Fri, Feb 23, 2001 at 01:42:11PM +0100 References: <200102231242.NAA27564@core.inf.ethz.ch> Message-ID: <20010223160447.A16781@xs4all.nl> On Fri, Feb 23, 2001 at 01:42:11PM +0100, Samuele Pedroni wrote: > I'm really glad that the holy war has come to an end, and that a technical > solution has been found. Same here. I really like the suggested solution, just to show that I'm not adverse to progress per se ;) I also apologize for not thinking up something similar, despite thinking long and hard (not to mention posting long and especially hard ;) on the issue. I'll have to buy you all beer (or cola, or hard liquor, whatever's your poison) next week ;-) -- Thomas Wouters 
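
A quick illustration of the two spellings being compared here; both checks below are intended to mean "2.1 or later", and the hex constant assumes the usual packing of 2.1.0 final:

    import sys

    if sys.hexversion >= 0x020100f0:     # 2.1.0 final, packed as a hex int
        pass
    if sys.version_info >= (2, 1):       # tuple compare; reads more naturally
        pass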
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From jeremy at alum.mit.edu Fri Feb 23 16:41:47 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 23 Feb 2001 10:41:47 -0500 (EST) Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: 
                              
                              References: <14997.53815.769191.239591@w221.z064000254.bwi-md.dsl.cnc.net> 
                              
                              Message-ID: <14998.33979.566557.956297@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "KPY" == Ka-Ping Yee 
                              
                              writes: KPY> On Thu, 22 Feb 2001, Jeremy Hylton wrote: >> I can't think of another lexically scoped language that allows an >> exec or eval to create a new variable binding that can later be >> used via a plain-old reference. KPY> I tried STk Scheme, guile, and elisp, and they all do this. I guess I'm just dense then. Can you show me an example? The only way to introduce a new name in Scheme is to use lambda or define which can always be translated into an equivalent letrec. The name binding is then visible only inside the body of the lambda. As a result, I don't see how eval can introduce a new name into a scope. The Python example I was thinking of is: def f(): exec "y=2" return y >>> f() 2 What would the Scheme equivalent be? The closest analog I can think of is (define (f) (eval "(define y 2)") y) The result here is undefined because y is not bound in the body of f, regardless of the eval. Jeremy From jeremy at alum.mit.edu Fri Feb 23 16:59:24 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 23 Feb 2001 10:59:24 -0500 (EST) Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: 
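
To make the Python side of the comparison concrete, here is the same trick next to the variant where exec gets its own namespace (Python 2.x semantics; a sketch, not anyone's proposal):

    def f():
        exec "y = 2"          # bare exec drops y into f's local namespace
        return y              # so this finds y

    def g():
        ns = {}
        exec "y = 2" in ns    # an explicit namespace keeps y out of g's locals
        return ns["y"]

    print f()                 # 2
    print g()                 # 2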
                              
                              References: <14997.53815.769191.239591@w221.z064000254.bwi-md.dsl.cnc.net> 
                              
                              Message-ID: <14998.35036.311805.899392@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "KPY" == Ka-Ping Yee 
                              
                              writes: >> Another key difference between Scheme and Python is that in >> Scheme, each binding operation creates a new scope. KPY> Scheme separates 'define' and 'set!', while Python only has KPY> '='. In Scheme, multiple defines rebind variables: Really, scheme provides lambda, the let family, define, and set!, where "define" is defined in terms of letrec except at the top level. KPY> (define a 1) KPY> (define a 2) KPY> (define a 3) Scheme distinguishes between top-level definitions and internal defintions. They have different semantics. Since we're talking about what happens inside Python functions, we should only look at what define does for internal definitions. An internal defintion is only allowed at the beginning of a body, so you're example above is equivalent to: (letrec ((a 1) (a 2) (a 3)) ...) But it is an error to have duplicate name bindings in a letrec. At least it is in MzScheme. Not sure what R5RS says about this. KPY> just as in Python, multiple assignments rebind variables: KPY> a = 1 KPY> a = 2 KPY> a = 3 Python's assignment is closer to set!, since it can occur anywhere in a body not just at the beginning. But if we say that = is equivalent to set! we've got a problem, because you can't use set! on an unbound variable. I think that leaves us with two alternatives. As I mentioned in my previous message, one is to think about each assignment in Python introducing a new scope. a = 1 (let ((a 1)) a = 2 (let ((a 2)) a = 3 (let ((a 3)) ....))) or def f(): (define (f) print a (print a) a = 2 (let ((a 2)) ...)) But I don't think it's clear to read a group of equally indented statements as a series of successively nested scopes. The other alternative is to say that = is closer to set! and that the original name binding is implicit. That is: "If a name binding operation occurs anywhere within a code block, all uses of the name within the block are treated as references to the local namespace." (ref manual, sec. 4) KPY> The lack of 'set!' prevents Python from rebinding variables KPY> outside of the local scope, but it doesn't prevent Python from KPY> being otherwise consistent and having "a = 2" do the same thing KPY> inside or outside of a function: it binds a name in the current KPY> scope. Again, if we look at Scheme as an example and compare = and define, define behaves differently at the top-level than it does inside a lambda. Jeremy From akuchlin at mems-exchange.org Fri Feb 23 17:01:41 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Fri, 23 Feb 2001 11:01:41 -0500 Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: <20010222191450.B15506@thyrsus.com>; from esr@thyrsus.com on Thu, Feb 22, 2001 at 07:14:50PM -0500 References: <200102222321.MAA01483@s454.cosc.canterbury.ac.nz> <200102222326.SAA18443@cj20424-a.reston1.va.home.com> <20010222191450.B15506@thyrsus.com> Message-ID: <20010223110141.D2879@ute.cnri.reston.va.us> On Thu, Feb 22, 2001 at 07:14:50PM -0500, Eric S. Raymond wrote: >practice than it is in theory. In fact, Python has rather forced me >to question whether "No separation between code and data" was as >important a component of LISP's supernal wonderfulness as I believed >when I was a fully fervent member of the cult. I think it is. Currently I'm reading Steven Tanimoto's introductory AI book in a doomed-from-the-start attempt to learn about rule-based systems, and along the way am thinking about how I'd do similar tasks in Python. 
The problem is that, for applying pattern matching to data structures, Python has no good equivalent of Lisp's (pattern-match data '((? X) 1 2)). [1]  Perhaps this is more a benefit of Lisp's simple syntax than the "no separation between code and data" principle.  In Python you could write some sort of specialized parser, of course, but that's really a distraction from the primary AI task of writing a really bitchin' Eliza program (or whatever).

--amk

[1] Which would match any list whose 2nd and 3rd elements are (1 2), and bind the first element to X somehow.

From jeremy at alum.mit.edu  Fri Feb 23 17:09:23 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Fri, 23 Feb 2001 11:09:23 -0500 (EST)
Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue?
In-Reply-To:
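
A rough sketch of the kind of matcher meant above, just to show it can be approximated in plain Python; the '?name' convention and the function name are made up for illustration:

    def match(pattern, data, bindings=None):
        # Strings starting with '?' play the role of Lisp's (? X) variables.
        if bindings is None:
            bindings = {}
        if type(pattern) is type('') and pattern[:1] == '?':
            bindings[pattern[1:]] = data          # bind the variable
            return bindings
        if type(pattern) in (type([]), type(())):
            if type(data) not in (type([]), type(())):
                return None
            if len(pattern) != len(data):
                return None
            for i in range(len(pattern)):
                if match(pattern[i], data[i], bindings) is None:
                    return None
            return bindings
        if pattern == data:
            return bindings
        return None

    print match(['?X', 1, 2], ['hello', 1, 2])    # {'X': 'hello'}
    print match(['?X', 1, 2], ['hello', 1, 3])    # None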
                              
                              References: <200102230331.WAA21467@cj20424-a.reston1.va.home.com> 
                              
                              Message-ID: <14998.35635.32450.338318@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "KPY" == Ka-Ping Yee 
                              
                              writes: >> No need to go to the source -- this is all clearly explained in >> the PEP (http://python.sourceforge.net/peps/pep-0227.html). KPY> It seems not to be that simple, because i was unable to predict KPY> what situations would be problematic without understanding how KPY> the optimizations are implemented. The problematic cases are exactly those where name bindings are introduced implicitly, i.e. cases where an operation binds a name without the name appearing the program text for that operation. That doesn't sound like an implementation-dependent defintion. [...] KPY> That's not the point. There is a scoping model that is KPY> straightforward and easy to understand, and regardless of KPY> whether the implementation is interpreted or compiled, you can KPY> easily predict what a given piece of code is going to do. [Taking you a little out of context:] This is just what I'm advocating for import * and exec in the presence of nested fucntions. There is no easy way to predict what a piece of code is going to do without (a) knowing what names a module defines or (b) figuring out what values the argument to exec will have. On the subject of easy prediction, what should the following code do according to your model: x = 2 def f(y): ... if y > 3: x = x - 1 ... print x ... x = 3 ... I think the meaning of print x should be statically determined. That is, the programmer should be able to determine the binding environment in which x will be resolved (for print x) by inspection of the code. Jeremy From tim.one at home.com Fri Feb 23 17:34:58 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 11:34:58 -0500 Subject: [Python-Dev] RE: Nested scopes resolution -- you can breathe In-Reply-To: 
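
Leaving the elided example aside, a minimal illustration of the static rule being described (Python 2.0/2.1 behaviour; the code is made up for illustration):

    x = 2
    def f(y):
        if y > 3:
            x = y       # a binding anywhere in f makes x local throughout f
        print x         # so this never sees the global x ...
    f(10)               # prints 10
    f(1)                # ... and raises UnboundLocalError here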
                              
                              Message-ID: 
                              
                              [Mikael Olofsson] > Naturally. More seriously though, I like > > from __future__ import something > > as an idiom. It gives us a clear reusable syntax to incorporate new > features before they are included in the standard distribution. It is > not obvious to me that the proposed alternative > > import __something__ > > is a way to incorporate something new. Bingo. That's why I'm pushing for it. Also means we only have to create one artificial module (__future__.py) for this; and besides the doc value, it occurs to me we *do* have to create a real module anyway so that masses of tools don't get confused searching for things that look like modules but don't actually exist. > Perhaps Py3k should allow > > from __past__ import something > > to give us a way to keep some functionality from 2.* that has been > (will be) changed in Py3k. Actually, I thought that's something PythonWare could implement as an extension, to seize the market opportunity created by mean old Guido breaking all the code he can on a whim 
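
A purely hypothetical sketch of what such a __future__.py could hold, along the lines discussed here and later in the thread; the _Feature class and the release tuples are illustrative guesses, not the shipped file:

    class _Feature:
        def __init__(self, optional_release, mandatory_release):
            self.optional = optional_release    # first release accepting the import
            self.mandatory = mandatory_release  # release where it becomes the default

    nested_scopes = _Feature((2, 1, 0, "beta", 1), (2, 2, 0, "final", 0))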
                              
                              . Except they'll probably have to extend the syntax a bit, to make that from __past__ import not something Maybe we should add from __future__ import __past__with_not now to make that easier for them. > explicit-is-better-than-implicit-ly y'rs otoh-implicit-manages-to-hide-explicit-suckiness-a-bit-longer-ly y'rs - tim From thomas.heller at ion-tof.com Fri Feb 23 17:36:44 2001 From: thomas.heller at ion-tof.com (Thomas Heller) Date: Fri, 23 Feb 2001 17:36:44 +0100 Subject: [Python-Dev] distutils, uninstaller Message-ID: <03f201c09db6$cf201990$e000a8c0@thomasnotebook> I've uploaded the bdist_wininst uninstaller patch to sourceforge: http://sourceforge.net/patch/?func=detailpatch&patch_id=103948&group_id=5470 Just in case someone cares. Another thing: Shouldn't the distutils version number change before the beta? I suggest going from 1.0.1 to 1.0.2. Thomas Heller From tim.one at home.com Fri Feb 23 17:44:36 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 11:44:36 -0500 Subject: [Python-Dev] RE: Other situations like this In-Reply-To: <200102231228.HAA23466@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              [Guido] > Oops. I swear I heard you offer to write it. I guess all you said > was that it should be written. Oh well. Somebody will write it. :-) Na, I'll write it! I didn't volunteer, but since I've already thought about it more than anyone on Earth, I'm the natural vic^H^H^Hauthor. cementing-my-monopoly-on-retroactive-peps-ly y'rs - tim From tim.one at home.com Fri Feb 23 20:36:04 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 14:36:04 -0500 Subject: [Python-Dev] test_builtin failing on Windows Message-ID: 
                              
                              But only if run under a debug build *and* passing -O to Python: > python_d -O ../lib/test/test_builtin.py Adding parser accelerators ... Done. 4. Built-in functions test_b1 __import__ abs apply callable chr cmp coerce compile complex delattr dir divmod eval execfile filter float getattr hasattr hash hex id int isinstance issubclass len long map max min test_b2 and here it blows up with some kind of memory error. Other systems? From barry at digicool.com Fri Feb 23 20:45:43 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 23 Feb 2001 14:45:43 -0500 Subject: [Python-Dev] test_builtin failing on Windows References: 
                              
                              Message-ID: <14998.48615.952027.397301@anthem.wooz.org> >>>>> "TP" == Tim Peters 
                              
                              writes: TP> But only if run under a debug build *and* passing -O to TP> Python: I'm currently running the regrtest under insure but only on Linux and w/o -O. -Barry From tim.one at home.com Fri Feb 23 20:58:16 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 14:58:16 -0500 Subject: [Python-Dev] test_builtin failing on Windows In-Reply-To: 
                              
                              Message-ID: 
                              
                              > But only if run under a debug build *and* passing -O to Python: *And* only if the .pyc/.pyo files reachable from Lib/ are deleted before running it. Starting to smell like another of those wild memory overwrite problems for efence/Insure or whatever. From tim.one at home.com Fri Feb 23 21:25:25 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 15:25:25 -0500 Subject: [Python-Dev] test_builtin failing on Windows In-Reply-To: 
                              
                              Message-ID: 
                              
                              > But only if run under a debug build *and* passing -O to Python: > > *And* only if the .pyc/.pyo files reachable from Lib/ are deleted > before running it. The explosion is here: static int com_make_closure(struct compiling *c, PyCodeObject *co) { int i, free = PyTuple_GET_SIZE(co->co_freevars); co-> is almost entirely filled with 0xdddddddd at this point (and in particular, that's the value of co->co_freevars, which is why it blows up). That bit pattern is the MS "dead landfill" value: when the MS debug libraries free() an object, they overwrite the space with 0xdd bytes. Here's the call stack: com_make_closure(compiling * 0x0063f5c4, PyCodeObject * 0x00a1b5b0) line 2108 + 6 bytes com_test(compiling * 0x0063f5c4, _node * 0x008470d0) line 2164 + 13 bytes com_node(compiling * 0x0063f5c4, _node * 0x008470d0 line 3452 + 13 bytes com_argument(compiling * 0x0063f5c4, _node * 0x0084a900, _object * * 0x0063f3b8) line 1516 + 16 bytes com_call_function(compiling * 0x0063f5c4, _node * 0x00847124) line 1581 + 17 bytes com_apply_trailer(compiling * 0x0063f5c4, _node * 0x008471d4) line 1764 + 19 bytes com_power(compiling * 0x0063f5c4, _node * 0x008472b0) line 1792 + 24 bytes com_factor(compiling * 0x0063f5c4, _node * 0x008472f0) line 1813 + 16 bytes com_term(compiling * 0x0063f5c4, _node * 0x00847330) line 1823 + 16 bytes com_arith_expr(compiling * 0x0063f5c4, _node * 0x00847370) line 1852 + 16 bytes com_shift_expr(compiling * 0x0063f5c4, _node * 0x008473b0) line 1878 + 16 bytes com_and_expr(compiling * 0x0063f5c4, _node * 0x008473f0) line 1904 + 16 bytes com_xor_expr(compiling * 0x0063f5c4, _node * 0x00847430) line 1926 + 16 bytes com_expr(compiling * 0x0063f5c4, _node * 0x0084a480) line 1948 + 16 bytes com_comparison(compiling * 0x0063f5c4, _node * 0x008474b0) line 2002 + 16 bytes com_not_test(compiling * 0x0063f5c4, _node * 0x008474f0) line 2077 + 16 bytes com_and_test(compiling * 0x0063f5c4, _node * 0x008475e0) line 2094 + 24 bytes com_test(compiling * 0x0063f5c4, _node * 0x0084b124) line 2178 + 24 bytes com_node(compiling * 0x0063f5c4, _node * 0x0084b124) line 3452 + 13 bytes com_if_stmt(compiling * 0x0063f5c4, _node * 0x00847620) line 2817 + 13 bytes com_node(compiling * 0x0063f5c4, _node * 0x00847620) line 3431 + 13 bytes com_file_input(compiling * 0x0063f5c4, _node * 0x007d4cc0) line 3660 + 13 bytes compile_node(compiling * 0x0063f5c4, _node * 0x007d4cc0) line 3762 + 13 bytes jcompile(_node * 0x007d4cc0, char * 0x0063f84c, compiling * 0x00000000) line 3870 + 16 bytes PyNode_Compile(_node * 0x007d4cc0, char * 0x0063f84c) line 3813 + 15 bytes parse_source_module(char * 0x0063f84c, _iobuf * 0x10261888) line 611 + 13 bytes load_source_module(char * 0x0063f9a8, char * 0x0063f84c, _iobuf * 0x10261888) line 731 + 13 bytes load_module(char * 0x0063f9a8, _iobuf * 0x10261888, char * 0x0063f84c, int 0x00000001) line 1259 + 17 bytes import_submodule(_object * 0x1e1f6ca0 __Py_NoneStruct, char * 0x0063f9a8, char * 0x0063f9a8) line 1787 + 33 bytes load_next(_object * 0x1e1f6ca0 __Py_NoneStruct, _object * 0x1e1f6ca0 __Py_NoneStruct, char * * 0x0063fabc, char * 0x0063f9a8, int * 0x0063f9a4) line 1643 + 17 bytes import_module_ex(char * 0x00000000, _object * 0x00770d6c, _object * 0x00770d6c, _object * 0x1e1f6ca0 __Py_NoneStruct) line 1494 + 35 bytes PyImport_ImportModuleEx(char * 0x007ae58c, _object * 0x00770d6c, _object * 0x00770d6c, _object * 0x1e1f6ca0 __Py_NoneStruct) line 1535 + 21 bytes builtin___import__(_object * 0x00000000, _object * 0x007716ac) line 31 + 21 bytes 
call_cfunction(_object * 0x00760080, _object * 0x007716ac, _object * 0x00000000) line 2740 + 11 bytes call_object(_object * 0x00760080, _object * 0x007716ac, _object * 0x00000000) line 2703 + 17 bytes PyEval_CallObjectWithKeywords(_object * 0x00760080, _object * 0x007716ac, _object * 0x00000000) line 2673 + 17 bytes eval_code2(PyCodeObject * 0x007afe10, _object * 0x00770d6c, _object * 0x00770d6c, _object * * 0x00000000, int 0x00000000, _object * * 0x00000000, int 0x00000000, _object * * 0x00000000, int 0x00000000, _object * 0x00000000) line 1767 + 15 bytes PyEval_EvalCode(PyCodeObject * 0x007afe10, _object * 0x00770d6c, _object * 0x00770d6c) line 341 + 31 bytes run_node(_node * 0x007a8760, char * 0x00760dd0, _object * 0x00770d6c, _object * 0x00770d6c) line 935 + 17 bytes run_err_node(_node * 0x007a8760, char * 0x00760dd0, _object * 0x00770d6c, _object * 0x00770d6c) line 923 + 21 bytes PyRun_FileEx(_iobuf * 0x10261888, char * 0x00760dd0, int 0x00000101, _object * 0x00770d6c, _object * 0x00770d6c, int 0x00000001) line 915 + 21 bytes PyRun_SimpleFileEx(_iobuf * 0x10261888, char * 0x00760dd0, int 0x00000001) line 628 + 30 bytes PyRun_AnyFileEx(_iobuf * 0x10261888, char * 0x00760dd0, int 0x00000001) line 467 + 17 bytes Py_Main(int 0x00000003, char * * 0x00760d90) line 296 + 44 bytes main(int 0x00000003, char * * 0x00760d90) line 10 + 13 bytes mainCRTStartup() line 338 + 17 bytes Unsurprisingly, it's importing test_b2.py at this point. So this is enough to reproduce the problem: First, make sure test_b2.pyo doesn't exist. Then > python_d -O Adding parser accelerators ... Done. Python 2.1a2 (#10, Feb 23 2001, 14:19:33) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> import sys >>> sys.path.insert(0, "../lib/test") [5223 refs] >>> import test_b2 Boom. Best guess is that I need a debug build to fail, because in the normal build it's still referencing free()d memory anyway, but the normal MS malloc/free don't overwrite free()d memory with trash (so the problem isn't noticed). No guess as to why -O is needed. From fdrake at acm.org Fri Feb 23 21:49:08 2001 From: fdrake at acm.org (Fred L. Drake) Date: Fri, 23 Feb 2001 15:49:08 -0500 Subject: [Python-Dev] test_builtin failing on Windows In-Reply-To: 
                              
                              Message-ID: 
                              
                              "Tim Peters" 
                              
                              wrote: > Unsurprisingly, it's importing test_b2.py at this point. > So this is enough to reproduce the problem: ... > Best guess is that I need a debug build to fail, because > in the normal build > it's still referencing free()d memory anyway, but the > normal MS malloc/free > don't overwrite free()d memory with trash (so the > problem isn't noticed). > No guess as to why -O is needed. This sounds like there's a difference in when someting gets DECREFed differently when the optimizations are performed; perhaps that code hasn't kept up with the pace of change? I'm not familiar enough with that code to be able to check it quickly with any level of confidence, however. -Fred -- Fred L. Drake, Jr. 
                              
                              PythonLabs at Digital Creations From tim.one at home.com Fri Feb 23 21:49:17 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 15:49:17 -0500 Subject: [Python-Dev] test_builtin failing on Windows In-Reply-To: 
                              
                              Message-ID: 
                              
The second time we get to here (in com_test, compile.c, and when running python_d -O blah/blah/test_builtin.py, and test_b2.pyo doesn't exist):

    co = (PyObject *) icompile(CHILD(n, 0), c);
    if (co == NULL) {
        c->c_errors++;
        return;
    }
    symtable_exit_scope(c->c_symtable);
    if (co == NULL) {
        c->c_errors++;
        i = 255;
        closure = 0;
    }
    else {
        i = com_addconst(c, co);
        Py_DECREF(co);
        ************** HERE *********
        closure = com_make_closure(c, (PyCodeObject *)co);
    }

the refcount of co is 1 before we do the Py_DECREF.  Everything else follows from that.  In the failing 2nd time thru this code, com_addconst finds the thing already, so com_addconst doesn't boost the refcount above 1.

The code appears a bit confused regardless (e.g., it checks for co==NULL twice, but it looks impossible for the second test to succeed).

From jeremy at alum.mit.edu  Fri Feb 23 21:47:57 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Fri, 23 Feb 2001 15:47:57 -0500 (EST)
Subject: [Python-Dev] test_builtin failing on Windows
In-Reply-To:
                              
                              References: 
                              
                              
                              Message-ID: <14998.52349.936778.169519@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "TP" == Tim Peters 
                              
                              writes: >> But only if run under a debug build *and* passing -O to Python: >> >> *And* only if the .pyc/.pyo files reachable from Lib/ are deleted >> before running it. I do not see a problem running a debug build with -O on Linux. Is it possible that this build does not contain the updates to compile.c *and* symtable.c that were checked in this morning? The problem you are describing sounds a little like the error I had before the symtable.c patch (which added in an INCREF) -- except that I was seeing the error with all the time. Jeremy From jeremy at alum.mit.edu Fri Feb 23 21:52:49 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 23 Feb 2001 15:52:49 -0500 (EST) Subject: [Python-Dev] test_builtin failing on Windows In-Reply-To: 
                              
                              References: 
                              
                              
                              Message-ID: <14998.52641.104080.334453@w221.z064000254.bwi-md.dsl.cnc.net> Yeah. The code is obviously broken. The second co==NULL test should go and the DECREF should be below the com_make_closure() call. Do you want to fix it or should I? Jeremy From tim.one at home.com Fri Feb 23 22:44:13 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 16:44:13 -0500 Subject: [Python-Dev] test_builtin failing on Windows In-Reply-To: <14998.52641.104080.334453@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: 
                              
                              [Jeremy] > Yeah. The code is obviously broken. The second co==NULL test should > go and the DECREF should be below the com_make_closure() call. Do you > want to fix it or should I? I'll do it: a crash isn't easy to provoke without the MS debug landfill behavior, so it's easiest for me to test it. all's-well-that-ends-ly y'rs - tim From thomas at xs4all.net Fri Feb 23 22:46:26 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Fri, 23 Feb 2001 22:46:26 +0100 Subject: [Python-Dev] OS2 support ? Message-ID: <20010223224626.C16781@xs4all.nl> Is OS2 still supported at all ? I noticed this, in PC/os2vacpp/config.h: /* Provide a default library so writers of extension modules * won't have to explicitly specify it anymore */ #pragma library("Python15.lib") -- Thomas Wouters 
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From tim.one at home.com Fri Feb 23 22:56:05 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 16:56:05 -0500 Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: <14998.35635.32450.338318@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: 
                              
                              I hate to be repetitive 
                              
                              , but forget Scheme! Scheme has nothing like "import *" or Python's flavor of eval/exec. The only guidance we'll get there is that the Scheme designers were so put off by mixing lexical scoping with eval that even *referencing* non-toplevel vars inside eval's argument isn't supported. hmm-on-second-thought-let's-pay-a-lot-of-attention-to-scheme<0.6-wink>-ly y'rs - tim From guido at digicool.com Fri Feb 23 23:08:22 2001 From: guido at digicool.com (Guido van Rossum) Date: Fri, 23 Feb 2001 17:08:22 -0500 Subject: [Python-Dev] OS2 support ? In-Reply-To: Your message of "Fri, 23 Feb 2001 22:46:26 +0100." <20010223224626.C16781@xs4all.nl> References: <20010223224626.C16781@xs4all.nl> Message-ID: <200102232208.RAA32475@cj20424-a.reston1.va.home.com> > Is OS2 still supported at all ? Good question. Does anybody still care about OS/2? There's a Python for OS/2 homepage here: http://warped.cswnet.com/~jrush/python_os2/index.html but it is still at 1.5.2. I don't know of that was built with the sources in PC/os2vacpp/... Maybe you can ask Jeff Rush? --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Fri Feb 23 23:18:26 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 17:18:26 -0500 Subject: [Python-Dev] OS2 support ? In-Reply-To: <20010223224626.C16781@xs4all.nl> Message-ID: 
                              
                              [Thomas Wouters] > Is OS2 still supported at all ? Not by me, and, AFAIK, not by anyone else either. Looks like nobody touched it in 2 1/2 years, and a "Jeff Rush" is the only one who ever did. From jeremy at alum.mit.edu Fri Feb 23 23:30:11 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 23 Feb 2001 17:30:11 -0500 (EST) Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: 
                              
                              References: <14997.54747.56767.641188@w221.z064000254.bwi-md.dsl.cnc.net> 
                              
Message-ID: <14998.58483.298372.165614@w221.z064000254.bwi-md.dsl.cnc.net>

Couple of issues that come to mind about __future__:

1. Should this work?

       if x:
           from __future__ import nested_scopes

   I presume not, but the sketch of the rules you posted earlier presumably allows it.

2. How should the interactive interpreter be handled?  I presume if you type

       >>> from __future__ import nested_scopes

   That everything thereafter will be compiled with nested scopes.  This ends up being a little tricky, because the interpreter has to hang onto this information and tell the compiler about it.

Jeremy

From tim.one at home.com  Fri Feb 23 23:56:39 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 23 Feb 2001 17:56:39 -0500
Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!)
In-Reply-To: <14998.58483.298372.165614@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID:
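
A sketch of the interactive behaviour presumed in point 2; whether a session really remembers the directive was exactly the open question:

    >>> from __future__ import nested_scopes
    >>> def make_counter():
    ...     count = [0]
    ...     def bump():
    ...         count[0] = count[0] + 1   # count is found in make_counter's scope
    ...         return count[0]
    ...     return bump
    ...
    >>> bump = make_counter()
    >>> bump(), bump()
    (1, 2)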
                              
                              [Jeremy] > 1 Should this work? > > if x: > from __future__ import nested_scopes > > I presume not, but the sketch of the rules you posted earlier > presumably allow it. You have to learn to think more like tabnanny: "module scope" obviously means "indent level 0" if you're obsessed with whitespace 
                              
.

> 2. How should the interactive interpreter be handled?

You're kidding.  I thought we agreed to drop the interactive interpreter for 2.1?  (Let's *really* give 'em something to carp about ...)

> I presume if you type
> >>> from __future__ import nested_scopes
>
> That everything thereafter will be compiled with nested scopes.

That's my guess too, of course.

> This ends up being a little tricky, because the interpreter has to
> hang onto this information and tell the compiler about it.

Ditto for

    python -i some_script.py

where some_script.py contains a magical import.  OTOH, does exec-compiled (or execfile-ed) code start with a clean slate, or inherit the setting of the module from which it's exec[file]'ed?  I think the latter has to be true.  Could get messy, so it's a good thing we've got several whole days to work out the kinks ...

business-as-usual-ly y'rs  - tim

From jeremy at alum.mit.edu  Sat Feb 24 00:00:59 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Fri, 23 Feb 2001 18:00:59 -0500 (EST)
Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!)
In-Reply-To:
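
The exec case above, spelled out; this is a sketch of the intended reading, not settled behaviour at this point:

    from __future__ import nested_scopes

    code = "def outer():\n    n = 10\n    def inner():\n        return n\n    return inner()\n"

    # Under the reading argued for here, this string is compiled with nested
    # scopes too, because the enclosing module asked for them.
    exec code
    print outer()     # 10, if the directive is inherited by the exec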
                              
                              References: <14998.58483.298372.165614@w221.z064000254.bwi-md.dsl.cnc.net> 
                              
                              Message-ID: <14998.60331.444051.552208@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "TP" == Tim Peters 
                              
                              writes: TP> [Jeremy] >> 1 Should this work? >> >> if x: from __future__ import nested_scopes >> >> I presume not, but the sketch of the rules you posted earlier >> presumably allow it. TP> You have to learn to think more like tabnanny: "module scope" TP> obviously means "indent level 0" if you're obsessed with TP> whitespace 
                              
                              . Hmmmm... I'm not yet sure how to deduce indent level 0 inside the parser. Were we going to allow? try: from __future__ import curly_braces except ImportError: ... Jeremy From pf at artcom-gmbh.de Sat Feb 24 00:01:09 2001 From: pf at artcom-gmbh.de (Peter Funk) Date: Sat, 24 Feb 2001 00:01:09 +0100 (MET) Subject: [Python-Dev] RE: Please use __progress__ instead of __future__ (was Other situations like this) In-Reply-To: 
                              
                              from Tim Peters at "Feb 23, 2001 3:24:48 am" Message-ID: 
                              
                              Hi, Tim Peters: [...] > Any statement of the form > > from __future__ import shiny > > becomes unnecessary as soon as shiny's future arrives, at which point the > statement can be removed. The statement is necessary only so long as shiny > *is* in the future. So the name is thoroughly appropriate. [...] Obviously you assume, that software written in Python will be bundled only with one certain version of the Python interpreter. This might be true for Windows, where Python is no integral part of base operating system. Not so for Linux: There application developers have to support a range of versions covering at least 3 years, if they don't want to start fighting against the preinstalled Python. A while ago I decided to drop the support for Python 1.5.1 and earlier in our software. This has bitten me bad: Upgrading the Python 1.5.1 installation to 1.5.2 on SuSE Linux 6.0 machine at a customer site resulted in a nightmare. Obviously I would have saved half of the night, if I had decided to install a development system (GCC, libs ...) there and would have Python recompiled from source instead of trying to incrementally upgrade parts of the system using the precompiled binary RPMs delivered by SuSE). Now I have learned my lessons and I will not drop support for 1.5.2 until 2003. BTW: SuSE will start to ship SuSE Linux 7.1 just now in the US (it is available here since Feb 10th). AFAIK this is the first Linux distribution coming with Python 2.0 as the default Python. Every other commercially used Linux system out there probably has Python 1.5.2 or older. > Given the rules I already posted, it will be very easy to write a Python > tool to identify obsolete __future__ imports and remove them (if you want). [...] Hmmm... If my Python apps have to support for example Python from version 2.1 up to 2.5 or 2.6 in 2003, I certainly have to leave the 'from __future__ import ...'-statements alone and can't remove them without sacrifying backward compatibility to the Python interpreter which made this feature available for the first time. At this time __future__ will contain features, that are 2.5 years old. BTW: We will abstain from using string methods, augmented assignments and list compr. for at least the next two years out of similar reasons. On the other hand I would never bother with IO-Port hacking to get a 200Hz and 1.5 second long "beep" out of the PC builtin speaker... ;-) Have a nice weekend and good night, Peter From akuchlin at mems-exchange.org Sat Feb 24 00:09:37 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Fri, 23 Feb 2001 18:09:37 -0500 Subject: [Python-Dev] Re: [Distutils] distutils, uninstaller In-Reply-To: <03f201c09db6$cf201990$e000a8c0@thomasnotebook>; from thomas.heller@ion-tof.com on Fri, Feb 23, 2001 at 05:36:44PM +0100 References: <03f201c09db6$cf201990$e000a8c0@thomasnotebook> Message-ID: <20010223180937.A5178@ute.cnri.reston.va.us> On Fri, Feb 23, 2001 at 05:36:44PM +0100, Thomas Heller wrote: >I've uploaded the bdist_wininst uninstaller >patch to sourceforge: >http://sourceforge.net/patch/?func=detailpatch&patch_id=103948&group_id=5470 Can anyone take a look at the patch just as a sanity check? I can't really comment on it, but if someone else gives it a look, Thomas can go ahead and check it in. >Another thing: Shouldn't the distutils version number change >before the beta? I suggest going from 1.0.1 to 1.0.2. Good point. 
It doesn't look like beta1 will be happening until late next week due to the nested scoping changes, but I'll do that before the release. --amk From pedroni at inf.ethz.ch Sat Feb 24 00:16:55 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Sat, 24 Feb 2001 00:16:55 +0100 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) References: 
                              
                              Message-ID: <005801c09dee$b7fc0ca0$f979fea9@newmexico> Hi. [Tim Peters] > > 2. How should the interactive interpreter be handled? > > You're kidding. I thought we agreed to drop the interactive interpreter for > 2.1? (Let's *really* give 'em something to carp about ...) > > > I presume if you type > > >>> from __future__ import nested_scopes > > > > That everything thereafter will be compiled with nested scopes. > > That's my guess too, of course. > > > This ends up being a little tricky, because the interpreter has to > > hang onto this information and tell the compiler about it. > > Ditto for > > python -i some_script.py This make sense but I guess people will ask for a way to disable the feature after a while in the session, even trickier. > where some_script.py contains a magical import. OTOH, does exec-compiled > (or execfile-ed) code start with a clean slate, or inherent the setting of > the module from which it's exec[file]'ed? I think the latter has to be > true. I disagree, although this reduces the number of places where one has to delete from __future__ import when _future_ is here, for some uses of execfile the original program has just little control over what is in the executed file I guess, better having people being explicit there about what they want. And this way we don't have to invent a way for forcing disabling the feature (at least not because of the inherited default problems). exec should not be that different. Or we need an even more complicated mechanismus? like your proposed import not. regards, Samuele Pedroni. From thomas at xs4all.net Sat Feb 24 00:26:51 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Sat, 24 Feb 2001 00:26:51 +0100 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: <14998.60331.444051.552208@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Fri, Feb 23, 2001 at 06:00:59PM -0500 References: <14998.58483.298372.165614@w221.z064000254.bwi-md.dsl.cnc.net> 
                              
                              <14998.60331.444051.552208@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010224002651.D16781@xs4all.nl> On Fri, Feb 23, 2001 at 06:00:59PM -0500, Jeremy Hylton wrote: > Hmmmm... I'm not yet sure how to deduce indent level 0 inside the > parser. Uhm, why are we adding that restriction anyway, if it's hard for the parser/compiler to detect it ? I think I'd like to put them in try/except or if/else clauses, for fully portable code. While on the subject, a way to distinguish between '__future__ not found' and '__future__.feature not found', other than hardcoding the minimal version might be nice. -- Thomas Wouters 
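
One runtime way to tell the two cases apart without hardcoding a version number (a sketch only; this inspects the module at run time, which is separate from the compile-time directive):

    try:
        import __future__
    except ImportError:
        print "interpreter predates __future__ entirely"
    else:
        if hasattr(__future__, "nested_scopes"):
            print "this interpreter knows about nested_scopes"
        else:
            print "__future__ exists, but not that particular feature"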
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From mwh21 at cam.ac.uk Sat Feb 24 01:10:00 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 24 Feb 2001 00:10:00 +0000 Subject: [Python-Dev] RE: Please use __progress__ instead of __future__ (was Other situations like this) In-Reply-To: "Tim Peters"'s message of "Fri, 23 Feb 2001 03:24:48 -0500" References: 
                              
                              Message-ID: 
                              
                              "Tim Peters" 
                              
                              writes: > [Peter Funk] > > I believe __future__ is a bad name. What appears today as the bright > > shining future will be the distant dusty past of tomorrow. But the > > name of the module is not going to change anytime soon. right? > > The name of what module? > > Any statement of the form > > from __future__ import shiny > > becomes unnecessary as soon as shiny's future arrives, at which point the > statement can be removed. The statement is necessary only so long as shiny > *is* in the future. So the name is thoroughly appropriate. Ever been to Warsaw? There's the Old Town, which was built around 1650. Then there's the New Town, which was built around 1700. (The dates may be wrong). I think this is what Peter was talking about. also-see-New-College-Oxford-ly y'rs M. -- MAN: How can I tell that the past isn't a fiction designed to account for the discrepancy between my immediate physical sensations and my state of mind? -- The Hitch-Hikers Guide to the Galaxy, Episode 12 From mwh21 at cam.ac.uk Sat Feb 24 01:14:52 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 24 Feb 2001 00:14:52 +0000 Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: "Eric S. Raymond"'s message of "Thu, 22 Feb 2001 19:14:50 -0500" References: <200102222321.MAA01483@s454.cosc.canterbury.ac.nz> <200102222326.SAA18443@cj20424-a.reston1.va.home.com> <20010222191450.B15506@thyrsus.com> Message-ID: 
                              
                              "Eric S. Raymond" 
                              
                              writes: > Guido van Rossum 
                              
                              : > > > > Language theorists love [exec]. > > > > > > Really? I'd have thought language theorists would be the ones > > > who hate it, given all the problems it causes... > > > > Depends on where they're coming from. Or maybe I should have said > > Lisp folks... > > You are *so* right, Guido! :-) I almost commented about this in reply > to Greg's post earlier. > > Crusty old LISP hackers like me tend to be really attached to being > able to (a) lash up S-expressions that happen to be LISP function calls on > the fly, and then (b) hand them to eval. "No separation between code > and data" is one of the central dogmas of our old-time religion. Really? I thought the "no separation between code and data" thing more referred to macros than anything else. Having the full language around at compile time is one of the things that really separates Common Lisp from anything else. I don't think I've ever used #'eval in CL code - it tends to bugger up efficiency even more than the Python version does, for one thing! (eval-when (:compile-toplevel))-ly y'rs M. -- In many ways, it's a dull language, borrowing solid old concepts from many other languages & styles: boring syntax, unsurprising semantics, few automatic coercions, etc etc. But that's one of the things I like about it. -- Tim Peters, 16 Sep 93 From esr at thyrsus.com Sat Feb 24 01:21:39 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Fri, 23 Feb 2001 19:21:39 -0500 Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: 
                              
                              ; from mwh21@cam.ac.uk on Sat, Feb 24, 2001 at 12:14:52AM +0000 References: <200102222321.MAA01483@s454.cosc.canterbury.ac.nz> <200102222326.SAA18443@cj20424-a.reston1.va.home.com> <20010222191450.B15506@thyrsus.com> 
                              
                              Message-ID: <20010223192139.A10945@thyrsus.com> Michael Hudson 
                              
                              : > > Crusty old LISP hackers like me tend to be really attached to being > > able to (a) lash up S-expressions that happen to be LISP function calls on > > the fly, and then (b) hand them to eval. "No separation between code > > and data" is one of the central dogmas of our old-time religion. > > Really? I thought the "no separation between code and data" thing > more referred to macros than anything else. Another implication; and, as you say, more often actually useful. -- 
                              Eric S. Raymond Gun Control: The theory that a woman found dead in an alley, raped and strangled with her panty hose, is somehow morally superior to a woman explaining to police how her attacker got that fatal bullet wound. -- L. Neil Smith From tim.one at home.com Sat Feb 24 01:48:50 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 19:48:50 -0500 Subject: [Python-Dev] RE: Please use __progress__ instead of __future__ (was Other situations like this) In-Reply-To: 
                              
                              Message-ID: 
                              
                              [Tim] > Any statement of the form > > from __future__ import shiny > > becomes unnecessary as soon as shiny's future arrives, at which > point the statement can be removed. The statement is necessary > only so long as shiny *is* in the future. So the name is > thoroughly appropriate. [Peter Funk] > Obviously you assume, that software written in Python will be bundled > only with one certain version of the Python interpreter. Not really. I think it's more the case that you're viewing this gimmick through the eyes of your particular problems, and criticizing because it don't solve them. However, it wasn't intended to solve them. > This might be rue for Windows, where Python is no integral part of > base operating system. Not so for Linux: There application > developers have to support a range of versions covering at least > 3 years, if they don't want to start fighting against the preinstalled > Python. It's not true that Windows is devoid of compatibility problems. But Windows Python takes a different approach: we even rename the Windows Python DLLs with each release. That way any number of incompatible Pythons can coexist peacefully (this isn't trivial under Windows, because we have to install the core DLL in a specific magic directory). A serious Python app developed for Windows generally ships with the specific Python it wants, too (not unique to Python, of course, serious apps of all kinds ship with the support softare they need on Windows, up to and sometimes even including the basic MS C runtime libs). How people on other OSes choose to deal with this is up to them. If you find the Linux approach lacking, I believe you, but the "magical import" mechanism is too feeble a base on which to pin your hopes. Get serious about this! Write a PEP that will truly address your problems. This one does not; I don't even see that it's *related* to your problems. > ... > BTW: SuSE will start to ship SuSE Linux 7.1 just now in the US (it > is available here since Feb 10th). AFAIK this is the first Linux > distribution coming with Python 2.0 as the default Python. Every other > commercially used Linux system out there probably has Python 1.5.2 > or older. Yet another reason to prefer Windows 
                              
                              . > ... > Hmmm... If my Python apps have to support for example Python from > version 2.1 up to 2.5 or 2.6 in 2003, I certainly have to leave the > 'from __future__ import ...'-statements alone and can't remove them > without sacrifying backward compatibility to the Python interpreter > which made this feature available for the first time. The only way to write a piece of code that runs under all of 2.1 thru 2.6 is to avoid any behavior whatsoever that's specific to some proper subset of those versions. That's hard, and I don't think "from __future__" even *helps* with that. But it wasn't meant to. It was meant to make life easier for people who *do* upgrade in a timely fashion, in accord with at least the spirit of the existing PEPs on the topic. > At this time __future__ will contain features, that are 2.5 years > old. And ...? That is, what of it? In 1000 years, it will contain features that are 1000 years old. So? Else code written now and never purged of obsolete __future__s would break 1000 years from now. You can fault the scheme on many bases, but not on the basis that it creates new incompatibility problems. Leaving the old __future__s in will help a little in the other direction: code that announces it relies on a __future__ F will reliably fail at compile-time if run under a release less than F's OptionalRelease value. > BTW: We will abstain from using string methods, augmented assignments > and list compr. for at least the next two years out of similar reasons. If that's the best you think can you do, so it goes. It would be nice to think of a better way. But this isn't the right gimmick, and that it doesn't solve your problems doesn't mean it fails to solve anyone's problems. > On the other hand I would never bother with IO-Port hacking to get a > 200Hz and 1.5 second long "beep" out of the PC builtin speaker... ;-) That's compatibility: it worked before under NT and 2000, but not under Win9X, and it has high newbie appeal (I dove it into after making excuses about Win9X Beep() for the umpteenth time on the Tutor list). If you want to make Linux attractive to newbies, implementing Beep() for it too would be an excellent step. If you like, I'll reserve from __future__ import MakeLinuxBearableForNewbies right now 
                              
                              . From pedroni at inf.ethz.ch Sat Feb 24 02:02:53 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Sat, 24 Feb 2001 02:02:53 +0100 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) References: 
                              
Message-ID: <004501c09dfd$9c926360$f979fea9@newmexico>

After maybe too short thinking, here's an idea along the line "keep it simple":

1) from __future__ import foofeature

   * I imagine this is more for semantic and syntax changes, so it is better if it sits near the code assuming or needing them.  So there should be no defaults: each compilation unit (module, exec string, ...) that needs the feature should explicitly contain the from-import.  (At least for hard-coded execs I see little need to require nested scopes in them, so that's not a big problem; for other future features I don't know.)

   * It should be allowed only at module scope, indent 0.  All post-2.1 compilers will be able to deal with __future__, so putting a try around the import makes little sense; a compile-time error will be issued if the feature is not supported.  For pre-2.1 compilers I see few possibilities of writing backward compatible code using the from __future__ import, unless one wants the following to work:

         try:
             from __future__ import foofeature
             # code needing new syntax or assuming new semantics
         except ImportError:
             # old style code

     If the change does not involve syntax this code will work with a pre-2.1 compiler, but >2.1 compilers should be able to recognize the idiom or use some kind of compile-time evaluation; both IMO will require a bunch of special rules and are not that easy to implement.  Backward-compatible and more compiler-friendly code can be written using package or module wrappers:

         try:
             import __future__                      # check if the feature is there
             from module_using_feature import *     # this will contain from __future__ import feature
         except ImportError:
             from module_not_using_feature import *

2) interactive mode:

   * Respecting the above rules,

         >>> from __future__ import foofeature

     will activate the feature only in the one-line compilation unit => it has no effect.  This can be confusing, but it's a coherent behaviour; the other way people will be tempted to ask why importing a feature in a file does not influence the others...  At the moment I see two solutions:

     - supporting the following idiom (I mean everywhere), at top-level indent 0:

           if 1:
               from __future__ import foofeature
               ....

     - having a cmd-line switch that says what futures are on for the compilation units entered at top-level in an interactive session.

This is just a sketch and material for further reflection.  OTOH the implicit other proposal is that if code X will end up being executed with its global namespace containing a feature cookie coming from __future__ (because of an explicit "from import", or because so is the global namespace passed to exec, etc.), then X should be compiled with the feature on.

regards, Samuele Pedroni

From jeremy at alum.mit.edu  Sat Feb 24 00:30:32 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Fri, 23 Feb 2001 18:30:32 -0500 (EST)
Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!)
In-Reply-To: <20010224002651.D16781@xs4all.nl>
References: <14998.58483.298372.165614@w221.z064000254.bwi-md.dsl.cnc.net>
                              
                              <14998.60331.444051.552208@w221.z064000254.bwi-md.dsl.cnc.net> <20010224002651.D16781@xs4all.nl> Message-ID: <14998.62104.55786.683789@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "TW" == Thomas Wouters 
                              
                              writes: TW> On Fri, Feb 23, 2001 at 06:00:59PM -0500, Jeremy Hylton wrote: >> Hmmmm... I'm not yet sure how to deduce indent level 0 inside >> the parser. TW> Uhm, why are we adding that restriction anyway, if it's hard for TW> the parser/compiler to detect it ? I think I'd like to put them TW> in try/except or if/else clauses, for fully portable code. We want this to be a simple compiler directive, rather than something that can be turned on or off at runtime. If it were allowed inside an if/else statement, the compiler, it would become something more like a runtime flag. It sounds like you want the feature to be enabled only if the import is actually executed. But that can't work for compile-time directives, because the code has got to be compiled before we find out if the statement is executed. The restriction eliminates weird cases where it makes no sense to use this feature. Why try to invent a meaning for the nonsense code: if 0: from __future__ import nested_scopes TW> While TW> on the subject, a way to distinguish between '__future__ not TW> found' and '__future__.feature not found', other than hardcoding TW> the minimal version might be nice. There will definitely be a difference! Presumably all versions of Python after and including 2.1 will know about __future__. In those cases, the compiler will complain if feature is no defined. The complaint can be fairly specific: "__future__ feature curly_braces is not defined." In Python 2.0 and earlier, you'll just get an ImportError: No module named __future__. I'm assuming the compiler won't need to know anything about the values that are bound in __future__. It will just check to see whether the name is defined. Jeremy From tim.one at home.com Sat Feb 24 02:18:09 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 20:18:09 -0500 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: <005801c09dee$b7fc0ca0$f979fea9@newmexico> Message-ID: 
                              
>> Ditto for >> >> python -i some_script.py [Samuele Pedroni] > This makes sense but I guess people will ask for a way to disable > the feature after a while in the session, even trickier. The purpose is to let interested people use new features "early", not to let people jerk off. That is, they can ask all they want
                              
                              . >> [Tim sez exec and execfile should inherit the module's setting] > I disagree, although this reduces the number of places where one > has to delete from __future__ import when _future_ is here, That isn't the intent. The intent is that a module containing from __future__ import f is announcing it *wants* future semantics for f. Therefore the module should act, in *all* respects (incl. exec and execfile), as if the release were already the future one in which f is no longer optional. If exec, eval or execfile continue to act like the older release, the module isn't getting the semantics it specifically asked for, and the user isn't getting a correct test of future functionality. > for some uses of execfile the original program has just little > control over what is in the executed file I guess, Then they may have deeper problems than this gimmick can address, but they're not going to find out whether the files they're execfile'ing *will* have a problem in the future unless the module asking for future semantics gets future semantics. > better having people being explicit there about what they want. They already are being explicit: they get future semantics when and only when they include a from__future__ thingie. > And this way we don't have to invent a way for forcing disabling > the feature (at least not because of the inherited > default problems). There is *no* intent here that a single module be able to pick and choose different behaviors in different contexts. The purpose is to allow early testing and development of code to ensure it will work correctly in a future release. That's all. > ... > Or we need an even more complicated mechanismus? like your > proposed import not. I doubt core Python will ever support "moving back in time" (a heavily conditionalized code base is much harder to maintain -- ask Jeremy how much fun he's having trying to make this optional *now*). May (or may not) be an interesting idea for repackagers to consider, though. From tim.one at home.com Sat Feb 24 02:23:19 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 20:23:19 -0500 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: <14998.60331.444051.552208@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: 
                              
[Jeremy] > Hmmmm... I'm not yet sure how to deduce indent level 0 inside the > parser. > > Were we going to allow? > > try: > from __future__ import curly_braces > except ImportError: > ... Sounds like that's easier to implement <0.5 wink>. Sure. So let's take the human view of "module-level" instead of the tabnanny view after all. That way I don't have to change the words in the proto-PEP either
                              
                              . That means: if x: from __future__ import nested_scopes should work too. Does it also mean exec "from __future__ import nested_scopes\n" should work? No. From tim.one at home.com Sat Feb 24 03:07:32 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 21:07:32 -0500 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: <20010224002651.D16781@xs4all.nl> Message-ID: 
                              
                              [Jeremy Hylton] > Hmmmm... I'm not yet sure how to deduce indent level 0 inside the > parser. [Thomas Wouters] > Uhm, why are we adding that restriction anyway, if it's hard for the > parser/compiler to detect it ? I talked with Jeremy, and turns out it's not. > I think I'd like to put them in try/except or if/else clauses, for > fully portable code. And, sorry, but I take back saying that we should allow that. We shouldn't. Despite that it looks like an import statement (and actually *is* one, for that matter), the key info is extracted at compile time. So in stuff like if x: from __future__ import alabaster_weenoblobs whether or not alabaster_weenoblobs is in effect has nothing to do with the runtime value of x. So it's plain Bad to allow it to look as if it did. The only stuff that can textually precede: from __future__ import f is: + The module docstring (if any). + Comments. + Blank lines. + Other instances of from __future__. This also makes clear that one of these things applies to the entire module. Again, the thrust of this is *not* to help in writing portable code. It's to help users upgrade to the next release, in two ways: (1) by not breaking their code before the next release; and, (2) to let them migrate their code to next-release semantics incrementally. Note: "next release" means whatever MandatoryRelease is associated with the feature of interest. "Cross version portable code" is a more pressing problem for some, but is outside the scope of this gimmick. *This* gimmick is something we can actually do <0.5 wink>. From thomas at xs4all.net Sat Feb 24 04:34:23 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Sat, 24 Feb 2001 04:34:23 +0100 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: <14998.62104.55786.683789@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Fri, Feb 23, 2001 at 06:30:32PM -0500 References: <14998.58483.298372.165614@w221.z064000254.bwi-md.dsl.cnc.net> 
                              
                              <14998.60331.444051.552208@w221.z064000254.bwi-md.dsl.cnc.net> <20010224002651.D16781@xs4all.nl> <14998.62104.55786.683789@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010224043423.F16781@xs4all.nl> On Fri, Feb 23, 2001 at 06:30:32PM -0500, Jeremy Hylton wrote: > >>>>> "TW" == Thomas Wouters 
                              
                              writes: > TW> On Fri, Feb 23, 2001 at 06:00:59PM -0500, Jeremy Hylton wrote: > >> Hmmmm... I'm not yet sure how to deduce indent level 0 inside > >> the parser. > TW> Uhm, why are we adding that restriction anyway, if it's hard for > TW> the parser/compiler to detect it ? I think I'd like to put them > TW> in try/except or if/else clauses, for fully portable code. > If it were allowed inside an if/else statement, the compiler, it would > become something more like a runtime flag. It sounds like you want the > feature to be enabled only if the import is actually executed. But that > can't work for compile-time directives, because the code has got to be > compiled before we find out if the statement is executed. Right, I don't really want them in if/else blocks, you're right. Try/except would be nice, though. > TW> While > TW> on the subject, a way to distinguish between '__future__ not > TW> found' and '__future__.feature not found', other than hardcoding > TW> the minimal version might be nice. > There will definitely be a difference! > Presumably all versions of Python after and including 2.1 will know > about __future__. In those cases, the compiler will complain if > feature is no defined. The complaint can be fairly specific: > "__future__ feature curly_braces is not defined." Will this be a warning, or an error/exception ? Must-stop-working-sleep-is-calling-ly y'rs, ;) -- Thomas Wouters 
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From tim.one at home.com Sat Feb 24 06:51:57 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 24 Feb 2001 00:51:57 -0500 Subject: Draft PEP (RE: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!)) In-Reply-To: <14998.31575.97664.422182@anthem.wooz.org> Message-ID: 
                              
                              Gimme a PEP number, and I'll post this to the real users too 
                              
                              . PEP: ? Title: Back to the __future__ Version: $Revision: 1.0 $ Author: Tim Peters 
                              
                              Python-Version: 2.1 Status: ? Type: Standards Track Post-History: Motivation From time to time, Python makes an incompatible change to the advertised semantics of core language constructs, or changes their accidental (implementation-dependent) behavior in some way. While this is never done capriciously, and is always done with the aim of improving the language over the long term, over the short term it's contentious and disrupting. The "Guidelines for Language Evolution" PEP [1] suggests ways to ease the pain, and this PEP introduces some machinery in support of that. The "Statically Nested Scopes" PEP [2] is the first application, and will be used as an example here. Intent When an incompatible change to core language syntax or semantics is being made: 1. The release C that introduces the change does not change the syntax or semantics by default. 2. A future release R is identified in which the new syntax or semantics will be enforced. 3. The mechanisms described in the "Warning Framework" PEP [3] are used to generate warnings, whenever possible, about constructs or operations whose meaning may[4] change in release R. 4. The new future_statement (see below) can be explicitly included in a module M to request that the code in module M use the new syntax or semantics in the current release C. So old code continues to work by default, for at least one release, although it may start to generate new warning messages. Migration to the new syntax or semantics can proceed during that time, using the future_statement to make modules containing it act as if the new syntax or semantics were already being enforced. Syntax A future_statement is simply a from/import statement using the reserved module name __future__: future_statement: "from" "__future__" "import" feature ["as" name] ("," feature ["as" name])* feature: identifier In addition, all future_statments must appear near the top of the module. The only lines that can appear before a future_statement are: + The module docstring (if any). + Comments. + Blank lines. + Other future_statements. Example: """This is a module docstring.""" # This is a comment, preceded by a blank line and followed by # a future_statement. from __future__ import nested_scopes from math import sin from __future__ import alabaster_weenoblobs # compile-time error! # That was an error because preceded by a non-future_statement. Semantics A future_statement is recognized and treated specially at compile time: changes to the semantics of core constructs are often implemented by generating different code. It may even be the case that a new feature introduces new incompatible syntax (such as a new reserved word), in which case the compiler may need to parse the module differently. Such decisions cannot be pushed off until runtime. For any given release, the compiler knows which feature names have been defined, and raises a compile-time error if a future_statement contains a feature not known to it[5]. The direct runtime semantics are the same as for any import statement: there is a standard module __future__.py, described later, and it will be imported in the usual way at the time the future_statement is executed. The *interesting* runtime semantics depend on the feature(s) "imported" by the future_statement(s) appearing in the module. 
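(As a quick illustration of the "direct runtime semantics" above -- a sketch only, with the printed value taken from the __future__.py layout described further down; the shipped module may show something else:)

    from __future__ import nested_scopes
    # Bound exactly as an ordinary "from ... import" would bind it:
    print nested_scopes    # e.g. ((2, 1, 0, "beta", 1), (2, 2, 0, "final", 0))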
Since a module M containing a future_statement naming feature F explicitly requests that the current release act like a future release with respect to F, any code interpreted dynamically from an eval, exec or execfile executed by M will also use the new syntax or semantics associated with F. A future_statement appearing "near the top" (see Syntax above) of code interpreted dynamically by an exec or execfile applies to the code block executed by the exec or execfile, but has no further effect on the module that executed the exec or execfile. Note that there is nothing special about the statement: import __future__ [as name] That is not a future_statement; it's an ordinary import statement, with no special syntax restrictions or special semantics. Interactive shells may pose special problems. The intent is that a future_statement typed at an interactive shell prompt affect all code typed to that shell for the remaining life of the shell session. It's not clear how to achieve that. Example Consider this code, in file scope.py: x = 42 def f(): x = 666 def g(): print "x is", x g() f() Under 2.0, it prints: x is 42 Nested scopes[2] are being introduced in 2.1. But under 2.1, it still prints x is 42 and also generates a warning. In 2.2, and also in 2.1 *if* "from __future__ import nested_scopes" is included at the top of scope.py, it prints x is 666 Standard Module __future__.py Lib/__future__.py is a real module, and serves three purposes: 1. To avoid confusing existing tools that analyze import statements and expect to find the modules they're importing. 2. To ensure that future_statements run under releases prior to 2.1 at least yield runtime exceptions (the import of __future__ will fail, because there was no module of that name prior to 2.1). 3. To document when incompatible changes were introduced, and when they will be-- or were --made mandatory. This is a form of executable documentation, and can be inspected programatically via importing __future__ and examining its contents. Each statment in __future__.py is of the form: FeatureName = ReleaseInfo ReleaseInfo is a pair of the form: (OptionalRelease, MandatoryRelease) where, normally, OptionalRelease < MandatoryRelease, and both are 5-tuples of the same form as sys.version_info: (PY_MAJOR_VERSION, # the 2 in 2.1.0a3; an int PY_MINOR_VERSION, # the 1; an int PY_MICRO_VERSION, # the 0; an int PY_RELEASE_LEVEL, # "alpha", "beta", "candidate" or "final"; string PY_RELEASE_SERIAL # the 3; an int ) OptionalRelease records the first release in which from __future__ import FeatureName was accepted. In the case of MandatoryReleases that have not yet occurred, MandatoryRelease predicts the release in which the feature will become part of the language. Else MandatoryRelease records when the feature became part of the language; in releases at or after that, modules no longer need from __future__ import FeatureName to use the feature in question, but may continue to use such imports. MandatoryRelease may also be None, meaning that a planned feature got dropped. No line will ever be deleted from __future__.py. Example line: nested_scopes = (2, 1, 0, "beta", 1), (2, 2, 0, "final", 0) This means that from __future__ import nested_scopes will work in all releases at or after 2.1b1, and that nested_scopes are intended to be enforced starting in release 2.2. Questions and Answers Q: What about a "from __past__" version, to get back *old* behavior? A: Outside the scope of this PEP. Seems unlikely to the author, though. Write a PEP if you want to pursue it. 
Q: What about incompatibilites due to changes in the Python virtual machine? A: Outside the scope of this PEP, although PEP 5[1] suggests a grace period there too, and the future_statement may also have a role to play there. Q: What about incompatibilites due to changes in Python's C API? A: Outside the scope of this PEP. Q: I want to wrap future_statements in try/except blocks, so I can use different code depending on which version of Python I'm running. Why can't I? A: Sorry! try/except is a runtime feature; future_statements are primarily compile-time gimmicks, and your try/except happens long after the compiler is done. That is, by the time you do try/except, the semantics in effect for the module are already a done deal. Since the try/except wouldn't accomplish what it *looks* like it should accomplish, it's simply not allowed. We also want to keep these special statements very easy to find and to recognize. Note that you *can* import __future__ directly, and use the information in it, along with sys.version_info, to figure out where the release you're running under stands in relation to a given feature's status. Q: Going back to the nested_scopes example, what if release 2.2 comes along and I still haven't changed my code? How can I keep the 2.1 behavior then? A: By continuing to use 2.1, and not moving to 2.2 until you do change your code. The purpose of future_statement is to make life easier for people who keep keep current with the latest release in a timely fashion. We don't hate you if you don't, but your problems are much harder to solve, and somebody with those problems will need to write a PEP addressing them. future_statement is aimed at a different audience. Copyright This document has been placed in the public domain. References and Footnotes [1] http://python.sourceforge.net/peps/pep-0005.html [2] http://python.sourceforge.net/peps/pep-0227.html [3] http://python.sourceforge.net/peps/pep-0230.html [4] Note that this is "may" and not "will": better safe than sorry. Of course spurious warnings won't be generated when avoidable with reasonable cost. [5] This ensures that a future_statement run under a release prior to the first one in which a given feature is known (but >= 2.1) will raise a compile-time error rather than silently do a wrong thing. If transported to a release prior to 2.1, a runtime error will be raised because of the failure to import __future__ (no such module existed in the standard distribution before the 2.1 release, and the double underscores make it a reserved name). Local Variables: mode: indented-text indent-tabs-mode: nil End: From tim.one at home.com Sat Feb 24 07:06:30 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 24 Feb 2001 01:06:30 -0500 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: <20010224043423.F16781@xs4all.nl> Message-ID: 
                              
                              [Thomas Wouters] > ... > Right, I don't really want them in if/else blocks, you're right. > Try/except would be nice, though. Can you give a specific example of why it would be nice? Since this is a compile-time gimmick, I can't imagine that it would do anything but confuse the essential nature of this gimmick. Note that you *can* do excuciating stuff like: try: import __future__ except: import real_old_fangled_code as guacamole else: if hasattr(__future__, "nested_scopes"): import new_fangled_code as guacamole else: import old_fangled_code as guacamole but in such a case I expect I'd be much happier just keying off sys.hexversion, or, even better, running a tiny inline test case to *see* what the semantics are. [Jeremy] >> Presumably all versions of Python after and including 2.1 will know >> about __future__. In those cases, the compiler will complain if >> feature is no defined. The complaint can be fairly specific: >> "__future__ feature curly_braces is not defined." [back to Thomas] > Will this be a warning, or an error/exception ? A compile-time exception: when you're asking for semantics the compiler can't give you, the presumption has to favor that you're in big trouble. You can't catch such an exception directly in the same module (because it occurs at compile time), but can catch it if you import the module from elsewhere. But I *suspect* you're trying to solve a problem this stuff isn't intended to address, which is why a specific example would really help. From tim.one at home.com Sat Feb 24 08:54:40 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 24 Feb 2001 02:54:40 -0500 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: 
                              
                              Message-ID: 
                              
                              [Tim] > ... > A compile-time exception: when you're asking for semantics the compiler > can't give you, the presumption has to favor that you're in big trouble. > You can't catch such an exception directly in the same module (because it > occurs at compile time), but can catch it if you import the module from > elsewhere. Relatedly, you could do: try: compile("from __future__ import whatever", "", "exec") except whatever2: whatever3 else: whatever4 Then the future_stmt's compile-time is your module's runtime. still-looks-pretty-useless-to-me-though-ly y'rs - tim From guido at digicool.com Sat Feb 24 17:44:54 2001 From: guido at digicool.com (Guido van Rossum) Date: Sat, 24 Feb 2001 11:44:54 -0500 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: Your message of "Sat, 24 Feb 2001 04:34:23 +0100." <20010224043423.F16781@xs4all.nl> References: <14998.58483.298372.165614@w221.z064000254.bwi-md.dsl.cnc.net> 
                              
                              <14998.60331.444051.552208@w221.z064000254.bwi-md.dsl.cnc.net> <20010224002651.D16781@xs4all.nl> <14998.62104.55786.683789@w221.z064000254.bwi-md.dsl.cnc.net> <20010224043423.F16781@xs4all.nl> Message-ID: <200102241644.LAA03659@cj20424-a.reston1.va.home.com> > Right, I don't really want them in if/else blocks, you're right. Try/except > would be nice, though. Can't allow that. See Tim's draft PEP; allowing tis makes the meaning too muddy. I suppose you want this because you think you may have code that wants to use a new feature when it exists, but which should still work when it doesn't. The solution, given the constraints on the placement of the __future__ import, is to place the code that uses the new feature in a separate module and have another separate module that does not use the new feature; then a parent module can try to import the first one and if that fails, import the second one. But I bet that in most cases you'll be better off coding without dependence on the new feature if your code needs to be backwards compatible! --Guido van Rossum (home page: http://www.python.org/~guido/) > > Presumably all versions of Python after and including 2.1 will know > > about __future__. In those cases, the compiler will complain if > > feature is no defined. The complaint can be fairly specific: > > "__future__ feature curly_braces is not defined." > > Will this be a warning, or an error/exception ? Error of course. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Sat Feb 24 17:54:27 2001 From: guido at digicool.com (Guido van Rossum) Date: Sat, 24 Feb 2001 11:54:27 -0500 Subject: Draft PEP (RE: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!)) In-Reply-To: Your message of "Sat, 24 Feb 2001 00:51:57 EST." 
                              
                              References: 
                              
                              Message-ID: <200102241654.LAA03687@cj20424-a.reston1.va.home.com> > Since a module M containing a future_statement naming feature F > explicitly requests that the current release act like a future release > with respect to F, any code interpreted dynamically from an eval, exec > or execfile executed by M will also use the new syntax or semantics > associated with F. This means that a run-time flag must be available for inspection by eval() and execfile(), at least. I'm not sure that I agree with this for execfile() though -- that's often used by mechanisms that emulate module imports, and there it would be better if it started off with all features reset to their default. I'm also not sure about exec and eval() -- it all depends on the reason why exec is being invoked. Plus, exec and eval() also take a compiled code object, and there it's too late to change the future. Which leads to the question: should compile() also inherit the future settings? It's certainly a lot easier to implement if exec c.s. are *not* affected by the future selection of the invoking environment. And if you *want* it, at least for exec, you can insert the future_statement in front of the executed code string. > Interactive shells may pose special problems. The intent is that a > future_statement typed at an interactive shell prompt affect all code > typed to that shell for the remaining life of the shell session. It's > not clear how to achieve that. The same flag that I mentioned above can be used here -- basically, we can treat each interactive command as an "exec". Except that this time, the command that is the future_statement *does* export its flag to the invoking environment. Plus, I've made a good case against the flag. :-( --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Sun Feb 25 23:44:09 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 25 Feb 2001 17:44:09 -0500 Subject: Draft PEP (RE: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!)) In-Reply-To: <200102241654.LAA03687@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              [Tim] > Since a module M containing a future_statement naming feature F > explicitly requests that the current release act like a > future release with respect to F, any code interpreted dynamically > from an eval, exec or execfile executed by M will also use the > new syntax or semantics associated with F. [Guido] > This means that a run-time flag must be available for inspection by > eval() and execfile(), at least. eval(), compile() and input() too. Others? > I'm not sure that I agree with this for execfile() though -- that's > often used by mechanisms that emulate module imports, and there it > would be better if it started off with all features reset to their > default. Code emulating module imports is rare. People writing such mechanisms had better be experts! I don't want to warp the normal case to cater to a handful of deep-magic propeller-heads (they can take care of themselves). > I'm also not sure about exec and eval() -- it all depends on the > reason why exec is being invoked. We're not mind-readers, though. Best to give a simple (to understand) rule that caters to normal cases and let the experts worm around the cases where they didn't mean what they said; e.g., if for some reason they want their entire module to use nested scopes *except* for execfile, they can move the execfile into another module and not ask for nested scopes at the top of the latter, then call the latter from the original module. But then they're no longer getting a test of what's going to happen when nested scopes become The Rule, either. Note too that this mechanism is intended to be used for more than just the particular case of nested scopes. For example, consider changing the meaning of integer division. If someone asks for that, then of course they want exec "i = 1/2\n" or eval("1/2") within the module not to compute 0. There is no mechanism in the PEP now to make life easier for people who don't really want what they asked for. Perhaps there should be. But if you believe (as I intended) that the PEP is aimed at making it easier to prepare code for a future release, all-or-nothing for a module is really the right behavior. > Plus, exec and eval() also take a compiled code object, and there it's > too late to change the future. That's OK; the PEP *intended* to restrict this to cases where the gimmicks in question also compile the code from strings. I'll change that. > Which leads to the question: should compile() also inherit the future > settings? If it doesn't, the module containing it is not going to act like it will in the MandatoryRelease associated with the __future__ requested. And in that case, I don't believe __future__ would be doing its primary job: it's not helping me find out how the module *will* act. > It's certainly a lot easier to implement if exec c.s. are *not* > affected by the future selection of the invoking environment. And if > you *want* it, at least for exec, you can insert the future_statement > in front of the executed code string. But not for eval() (see above), or input(). >> Interactive shells may pose special problems. The intent is that a >> future_statement typed at an interactive shell prompt affect all code >> typed to that shell for the remaining life of the shell session. It's >> not clear how to achieve that. > The same flag that I mentioned above can be used here -- basically, we > can treat each interactive command as an "exec". 
Except that this > time, the command that is the future_statement *does* export its flag > to the invoking environment. Plus, I've made a good case against the > flag. :-( I think you've pointed out that *sometimes* people may not want what it does, and that implementing it is harder than not implementing it. I favor making the rules as easy as possible for people who want to know how their module will behave after the feature is mandatory, and believe that all-or-nothing is clearly a better default. In either case, changing the default on a pick-or-choose basis within a single module would require additional gimmicks not in the current PEP (e.g., maybe more optional flags to eval() etc; or maybe some new builtin function to toggle it; or maybe more pseudo-imports; or ...). I'm not convinced more gimmicks are *needed*, though, and don't want to see this PEP bloat beyond my original intent for it. it's-a-feeble-mechanism-aimed-at-a-specific-goal-ly y'rs - tim From guido at digicool.com Mon Feb 26 04:14:13 2001 From: guido at digicool.com (Guido van Rossum) Date: Sun, 25 Feb 2001 22:14:13 -0500 Subject: Draft PEP (RE: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!)) In-Reply-To: Your message of "Sun, 25 Feb 2001 17:44:09 EST." 
                              
                              References: 
                              
                              Message-ID: <200102260314.WAA16873@cj20424-a.reston1.va.home.com> > Code emulating module imports is rare. People writing such mechanisms had > better be experts! I don't want to warp the normal case to cater to a > handful of deep-magic propeller-heads (they can take care of themselves). OK. I'm not completely convinced, but at least 60%, and that's enough. --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Mon Feb 26 08:01:26 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 26 Feb 2001 02:01:26 -0500 Subject: Draft PEP (RE: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!)) In-Reply-To: <200102260314.WAA16873@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              [Tim] >> Code emulating module imports is rare. People writing such >> mechanisms had better be experts! I don't want to warp the >> normal case to cater to a handful of deep-magic propeller-heads >> (they can take care of themselves). [Guido] > OK. I'm not completely convinced, but at least 60%, and that's > enough. Oh, I'm not convinced either. But eval/exec/compile/input/execfile are rare operations (in frequency of occurrence per Kline of code), and I don't want that very tangled tail wagging this dog. I don't think either of us will be wholly convinced in either direction without feedback from the beta. I *have* convinced myself tabnanny will work 
                              
                              . But not doctest. doctest basically simulates an interactive shell session one statement at a time, and a new shell session for each docstring (not stmt). My mind simply boggles at imagining all the extra machinery that would need to be in place to make that "work" in all conceivable cases. The __future__ choices doctest itself makes should have no effects on the code it's simulating, but the code it's simulating *should* be affected by the __future__ choices of the module passed to doctest.testmod(); so, at a minimum, it would appear to require a standard way to query a module object for its set of __future__ choices, and an additional argument to compile() allowing to force that set of choices, *and* a way for doctest to tell compile() "oh, ya, if you happen to compile a __future__ statement, and I later execute the code you compiled, make that persist until I tell you to stop" (else simulated __future__ statements won't work as expected). Perhaps those are widespread needs too, but, I doubt it, and I don't think we need to solve the entire problem today regardless. From nas at arctrix.com Mon Feb 26 16:42:34 2001 From: nas at arctrix.com (nas at arctrix.com) Date: Mon, 26 Feb 2001 07:42:34 -0800 Subject: [Python-Dev] GC and Vladimir's obmalloc Message-ID: <20010226074234.A31518@glacier.fnational.com> Executive Summary: obmalloc will allow more efficient GC and we should try hard to get it into 2.1. I've finally spent some time looking at obmalloc and thinking about how to iterate the GC. The advantage would be that objects managed by obmalloc would no longer have to kept in a linked list. That saves both time and memory. I think the right way to do this is to have obmalloc kept track of two separate heaps. One would be for "atomic" objects, the other for "container" objects. I haven't yet figured out how to implement this yet. A lower level malloc interface that takes a heap structure as an argument is an obvious solution. When the GC runs it needs to find container objects. Since obmalloc only deals with blocks of 256 bytes or less, large containers would still have to be stored in a linked list. The rest can be found by searching the obmalloc container heap. Searching the heap is fairly easy. The heap is an array of pointers to pool lists. The only trick is figuring out which parts of the pools are allocated. I think adding the invariant ob_type = NULL means object not allocated is a good solution. That pointer could be set to NULL when the object is deallocated which would also be good for catching bugs. If we pay attention to pool->ref.count we don't even have to set those pointers for a newly allocated pool. Some type of GC locking will probably have to be added (we don't want the collector running when objects are in inconsistent states). I think the GC state (an int for each object) for obmalloc objects should be stored separately. Each pool header could have a pointer to an array of ints. This array could be allocated lazily when the GC runs. The advantages would be better cache behavior and less memory use if GC is disabled. Crude generational collection could be done by doing something like treating the first partially used pool in each size class as generation 0, other partially used pools and the first used pool as generation 1, and all other non-free pools as generation 2. Is the only issue with obmalloc treading? If so, what do we do to resolve this? 
Neil From guido at digicool.com Mon Feb 26 16:46:59 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 26 Feb 2001 10:46:59 -0500 Subject: [Python-Dev] GC and Vladimir's obmalloc In-Reply-To: Your message of "Mon, 26 Feb 2001 07:42:34 PST." <20010226074234.A31518@glacier.fnational.com> References: <20010226074234.A31518@glacier.fnational.com> Message-ID: <200102261546.KAA19326@cj20424-a.reston1.va.home.com> > Executive Summary: obmalloc will allow more efficient GC and we > should try hard to get it into 2.1. Can you do it before the 2.1b1 release? We're planning that for this Thursday, May 1st. Three days! > Is the only issue with obmalloc treading? If so, what do we do to > resolve this? 1. Yes, I think so. 2. It currently relies on the global interpreter lock. That's why we want to make it an opt-in configuration option (for now). Does that work with your proposed GC integration? --Guido van Rossum (home page: http://www.python.org/~guido/) From nas at arctrix.com Mon Feb 26 17:32:17 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Mon, 26 Feb 2001 08:32:17 -0800 Subject: [Python-Dev] GC and Vladimir's obmalloc In-Reply-To: <200102261546.KAA19326@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Feb 26, 2001 at 10:46:59AM -0500 References: <20010226074234.A31518@glacier.fnational.com> <200102261546.KAA19326@cj20424-a.reston1.va.home.com> Message-ID: <20010226083217.A31643@glacier.fnational.com> On Mon, Feb 26, 2001 at 10:46:59AM -0500, Guido van Rossum wrote: > > Executive Summary: obmalloc will allow more efficient GC and we > > should try hard to get it into 2.1. > > Can you do it before the 2.1b1 release? We're planning that for this > Thursday, May 1st. Three days! What has to be done besides applying the patch and adding a configure option? I can do that tonight if you give the green light. > > Is the only issue with obmalloc treading? If so, what do we do to > > resolve this? > > 1. Yes, I think so. 2. It currently relies on the global interpreter > lock. That's why we want to make it an opt-in configuration option > (for now). Does that work with your proposed GC integration? Opt-in is fine for now. Neil From guido at digicool.com Mon Feb 26 17:45:48 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 26 Feb 2001 11:45:48 -0500 Subject: [Python-Dev] GC and Vladimir's obmalloc In-Reply-To: Your message of "Mon, 26 Feb 2001 08:32:17 PST." <20010226083217.A31643@glacier.fnational.com> References: <20010226074234.A31518@glacier.fnational.com> <200102261546.KAA19326@cj20424-a.reston1.va.home.com> <20010226083217.A31643@glacier.fnational.com> Message-ID: <200102261645.LAA19732@cj20424-a.reston1.va.home.com> > On Mon, Feb 26, 2001 at 10:46:59AM -0500, Guido van Rossum wrote: > > > Executive Summary: obmalloc will allow more efficient GC and we > > > should try hard to get it into 2.1. > > > > Can you do it before the 2.1b1 release? We're planning that for this > > Thursday, May 1st. Three days! > > What has to be done besides applying the patch and adding a > configure option? I can do that tonight if you give the green > light. Sure. Green light is on, modulo objections from Barry (who technically has this assigned -- but I believe he'd be happy to let you do the honors). I thought that I read in your mail that you were proposing changes first for better GC integration -- but I must've misread that. > > > Is the only issue with obmalloc treading? If so, what do we do to > > > resolve this? > > > > 1. Yes, I think so. 2. 
It currently relies on the global interpreter > > lock. That's why we want to make it an opt-in configuration option > > (for now). Does that work with your proposed GC integration? > > Opt-in is fine for now. OK. So what about the optional memory profiler, on Jeremy's plate? http://sourceforge.net/tracker/index.php?func=detail&aid=401229&group_id=5470&atid=305470 I'm sure Jeremy would also love it if someone else took care of this -- he's busy with the future_statement implementation. --Guido van Rossum (home page: http://www.python.org/~guido/) From thomas at xs4all.net Mon Feb 26 17:54:53 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Mon, 26 Feb 2001 17:54:53 +0100 Subject: [Python-Dev] GC and Vladimir's obmalloc In-Reply-To: <200102261546.KAA19326@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Feb 26, 2001 at 10:46:59AM -0500 References: <20010226074234.A31518@glacier.fnational.com> <200102261546.KAA19326@cj20424-a.reston1.va.home.com> Message-ID: <20010226175453.A9678@xs4all.nl> On Mon, Feb 26, 2001 at 10:46:59AM -0500, Guido van Rossum wrote: > > Executive Summary: obmalloc will allow more efficient GC and we > > should try hard to get it into 2.1. > Can you do it before the 2.1b1 release? We're planning that for this > Thursday, May 1st. Three days! The first May 1st that falls on a Thursday is in 2003 :) I believe Moshe and I both volunteer to do the checkin should Neil not get to it for some reason. -- Thomas Wouters 
                              
                              Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From barry at digicool.com Mon Feb 26 17:58:49 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Mon, 26 Feb 2001 11:58:49 -0500 Subject: [Python-Dev] GC and Vladimir's obmalloc References: <20010226074234.A31518@glacier.fnational.com> <200102261546.KAA19326@cj20424-a.reston1.va.home.com> <20010226083217.A31643@glacier.fnational.com> <200102261645.LAA19732@cj20424-a.reston1.va.home.com> Message-ID: <15002.35657.447162.975798@anthem.wooz.org> >>>>> "GvR" == Guido van Rossum 
                              
                              writes: GvR> Sure. Green light is on, modulo objections from Barry (who GvR> technically has this assigned -- but I believe he'd be happy GvR> to let you do the honors). No objections, and I've re-assigned the patch to Neil. At least I /think/ I have (modulo initial confusion caused by SF's new issue tracker UI :). green-means-go-ly y'rs, -Barry From mwh21 at cam.ac.uk Mon Feb 26 18:19:28 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 26 Feb 2001 17:19:28 +0000 Subject: [Python-Dev] GC and Vladimir's obmalloc In-Reply-To: Guido van Rossum's message of "Mon, 26 Feb 2001 11:45:48 -0500" References: <20010226074234.A31518@glacier.fnational.com> <200102261546.KAA19326@cj20424-a.reston1.va.home.com> <20010226083217.A31643@glacier.fnational.com> <200102261645.LAA19732@cj20424-a.reston1.va.home.com> Message-ID: 
                              
                              Guido van Rossum 
                              
                              writes: > So what about the optional memory profiler, on Jeremy's plate? > > http://sourceforge.net/tracker/index.php?func=detail&aid=401229&group_id=5470&atid=305470 > > I'm sure Jeremy would also love it if someone else took care of this > -- he's busy with the future_statement implementation. In a way, I think this is less important. IMO, only people with a fair amount of wizadry are going to want to use this, and telling them to go and get the patch and apply it isn't too much of a stretch (though it would help if it applied cleanly...). OTOH, obmalloc can improve performance (esp. if Neil can do his cool GC optimizations with it), and so it becomes more important to get it into 2.1 (as a prelude to turning it on by default in 2.2, right?). Just my opinion, M. -- This is the fixed point problem again; since all some implementors do is implement the compiler and libraries for compiler writing, the language becomes good at writing compilers and not much else! -- Brian Rogoff, comp.lang.functional From nas at arctrix.com Mon Feb 26 18:37:31 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Mon, 26 Feb 2001 09:37:31 -0800 Subject: [Python-Dev] GC and Vladimir's obmalloc In-Reply-To: <200102261645.LAA19732@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Feb 26, 2001 at 11:45:48AM -0500 References: <20010226074234.A31518@glacier.fnational.com> <200102261546.KAA19326@cj20424-a.reston1.va.home.com> <20010226083217.A31643@glacier.fnational.com> <200102261645.LAA19732@cj20424-a.reston1.va.home.com> Message-ID: <20010226093731.A31918@glacier.fnational.com> On Mon, Feb 26, 2001 at 11:45:48AM -0500, Guido van Rossum wrote: > So what about the optional memory profiler, on Jeremy's plate? That's quite a bit lower priority in my opinion. People who need it could just apply it themselves. Also, I don't remember Vladimir saying he thought it was ready. Neil From nas at arctrix.com Mon Feb 26 18:43:26 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Mon, 26 Feb 2001 09:43:26 -0800 Subject: [Python-Dev] GC and Vladimir's obmalloc In-Reply-To: <15002.35657.447162.975798@anthem.wooz.org>; from barry@digicool.com on Mon, Feb 26, 2001 at 11:58:49AM -0500 References: <20010226074234.A31518@glacier.fnational.com> <200102261546.KAA19326@cj20424-a.reston1.va.home.com> <20010226083217.A31643@glacier.fnational.com> <200102261645.LAA19732@cj20424-a.reston1.va.home.com> <15002.35657.447162.975798@anthem.wooz.org> Message-ID: <20010226094326.B31918@glacier.fnational.com> On Mon, Feb 26, 2001 at 11:58:49AM -0500, Barry A. Warsaw wrote: > No objections, and I've re-assigned the patch to Neil. At least I > /think/ I have (modulo initial confusion caused by SF's new issue > tracker UI :). It worked. The new tracker looks pretty cool. I like that fact that patches show up on the personalized page as well as bugs. Neil From barry at digicool.com Mon Feb 26 18:46:31 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Mon, 26 Feb 2001 12:46:31 -0500 Subject: [Python-Dev] GC and Vladimir's obmalloc References: <20010226074234.A31518@glacier.fnational.com> <200102261546.KAA19326@cj20424-a.reston1.va.home.com> <20010226083217.A31643@glacier.fnational.com> <200102261645.LAA19732@cj20424-a.reston1.va.home.com> <15002.35657.447162.975798@anthem.wooz.org> <20010226094326.B31918@glacier.fnational.com> Message-ID: <15002.38519.223964.124773@anthem.wooz.org> >>>>> "NS" == Neil Schemenauer 
                              
                              writes: NS> It worked. The new tracker looks pretty cool. I like that NS> fact that patches show up on the personalized page as well as NS> bugs. One problem: they need to re-establish the lexical sort of `assignees' by user id. From barry at digicool.com Mon Feb 26 18:57:09 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Mon, 26 Feb 2001 12:57:09 -0500 Subject: [Python-Dev] RE: Update to PEP 232 References: <14994.53768.767065.272158@anthem.wooz.org> <000901c09bed$f861d750$f05aa8c0@lslp7o.int.lsl.co.uk> Message-ID: <15002.39157.936988.699980@anthem.wooz.org> >>>>> "TJI" == Tony J Ibbs 
                              
                              writes: TJI> 1. Clarify the final statement - I seem to have the TJI> impression (sorry, can't find a message to back it up) that TJI> either the BDFL or Tim Peters is very against anything other TJI> than the "simple" #f.a = 1# sort of thing - unless I'm TJI> mischannelling (?) again. From pedroni at inf.ethz.ch Mon Feb 26 19:44:23 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Mon, 26 Feb 2001 19:44:23 +0100 (MET) Subject: Draft PEP (RE: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!)) Message-ID: <200102261844.TAA09406@core.inf.ethz.ch> Hi. I have understood the point about making future feature inheritance automatic ;) So I imagine that the future features should at least end up being visible as a (writeable?) code attribute: co_futures or co_future_features being a list of feature name strings. or I'm wrong? regards, Samuele Pedroni From tim.one at home.com Mon Feb 26 20:02:42 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 26 Feb 2001 14:02:42 -0500 Subject: Draft PEP (RE: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!)) In-Reply-To: <200102261844.TAA09406@core.inf.ethz.ch> Message-ID: 
                              
                              [Samuele Pedroni] > I have understood the point about making future feature inheritance > automatic ;) > > So I imagine that the future features should at least end up being > visible as a (writeable?) code attribute: > > co_futures or co_future_features > > being a list of feature name strings. > > or I'm wrong? I don't know. Toward what end? I expect that for beta1, none of the automagic inheritance stuff will actually get implemented, and we're off to the Python conference next week, so there's time to flesh out what the next step *should* be. From skip at mojam.com Mon Feb 26 21:30:58 2001 From: skip at mojam.com (Skip Montanaro) Date: Mon, 26 Feb 2001 14:30:58 -0600 (CST) Subject: [Python-Dev] editing FAQ? Message-ID: <15002.48386.689975.913306@beluga.mojam.com> Seems like maybe the FAQ needs some touchup. Is it still under the control of the FAQ wizard (what's the password)? If not, is it in CVS somewhere? Skip From tim.one at home.com Mon Feb 26 21:34:27 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 26 Feb 2001 15:34:27 -0500 Subject: [Python-Dev] editing FAQ? In-Reply-To: <15002.48386.689975.913306@beluga.mojam.com> Message-ID: 
                              
                              [Skip Montanaro] > Seems like maybe the FAQ needs some touchup. Is it still under > the control of the FAQ wizard (what's the password)? The password is Spam case-sensitive-ly y'rs - tim From Greg.Wilson at baltimore.com Tue Feb 27 00:23:51 2001 From: Greg.Wilson at baltimore.com (Greg Wilson) Date: Mon, 26 Feb 2001 18:23:51 -0500 Subject: [Python-Dev] first correct explanation wins a beer... Message-ID: <930BBCA4CEBBD411BE6500508BB3328F1ABF07@nsamcanms1.ca.baltimore.com> ...or the caffeinated beverage of your choice, collectable at IPC9. I'm running on a straightforward Linux box: $ uname -a Linux akbar.nevex.com 2.2.16 #3 Mon Aug 14 14:43:46 EDT 2000 i686 unknown with Python 2.1, built fresh from today's repo: $ python Python 2.1a2 (#2, Feb 26 2001, 15:27:11) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 I have one tiny script called "tryout.py": $ cat tryout.py print "We made it!" and a small HTML file called "printing.html": $ cat printing.html 
                              
We made it!
The idea is that my little SAX handler will look for "pre" elements with "prog" attributes, re-run the appropriate script, and compare the output with what's in the HTML page (it's an example for the class). The problem is that "popen2" doesn't work as expected when called from within a SAX content handler, even though it works just fine when called from a method of another class, or on its own. The whole script is: $ cat repy #!/usr/bin/env python import sys from os import popen2 from xml.sax import parse, ContentHandler class JustAClass: def method(self, progName): shellCmd = "python " + progName print "using just a class, shell command is '" + shellCmd + "'" inp, outp = popen2(shellCmd) inp.close() print "using just a class, result is", outp.readlines() class UsingSax(ContentHandler): def startElement(self, name, attrs): if name == "pre": shellCmd = "python " + attrs["prog"] print "using SAX, shell command is '" + shellCmd + "'" inp, outp = popen2(shellCmd) inp.close() print "using SAX, result is", outp.readlines() if __name__ == "__main__": # Run it directly inp, outp = popen2("python tryout.py") inp.close() print "Running popen2 directly, result is", outp.readlines() # Use a plain old class JustAClass().method("tryout.py") # Using SAX input = open("printing.html", 'r') parse(input, UsingSax()) input.close() The output is: $ python repy Running popen2 directly, result is ['We made it!\n'] using just a class, shell command is 'python tryout.py' using just a class, result is ['We made it!\n'] using SAX, shell command is 'python tryout.py' using SAX, result is [] My system has a stock 1.5.2 in /usr/bin/python, but my path is: $ echo $PATH /home/gvwilson/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/usr/sbin:/ho me/gnats/bin so that I get the 2.1 version: $ which python /home/gvwilson/bin/python My PYTHONPATH is set up properly as well (I think): $ echo $PYTHONPATH /home/gvwilson/lib/python2.1:/home/gvwilson/lib/python2.1/lib-dynload I'm using PyXML-0.6.4, built fresh from the .tar.gz source today. So, like I said --- a beer or coffee to the first person who can explain what's up. I'm attaching the Python scripts, the HTML file, and a verbose strace output from my machine. Thanks, Greg < > < > < > < > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: repy Type: application/octet-stream Size: 1068 bytes Desc: not available URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: strace.txt URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: tryout.py Type: application/octet-stream Size: 20 bytes Desc: not available URL: From paulp at ActiveState.com Tue Feb 27 00:42:38 2001 From: paulp at ActiveState.com (Paul Prescod) Date: Mon, 26 Feb 2001 15:42:38 -0800 Subject: [Python-Dev] first correct explanation wins a beer... References: <930BBCA4CEBBD411BE6500508BB3328F1ABF07@nsamcanms1.ca.baltimore.com> Message-ID: <3A9AE9EE.EBB27F89@ActiveState.com> My guess: Unicode. Try casting to an 8-bit string and see what happens. -- Vote for Your Favorite Python & Perl Programming Accomplishments in the first Active Awards! 
http://www.ActiveState.com/Awards From tim.one at home.com Tue Feb 27 02:18:37 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 26 Feb 2001 20:18:37 -0500 Subject: [Python-Dev] PEP 236: Back to the __future__ Message-ID: The text of this PEP can also be found online, at: http://python.sourceforge.net/peps/pep-0236.html PEP: 236 Title: Back to the __future__ Version: $Revision: 1.2 $ Author: Tim Peters Python-Version: 2.1 Status: Active Type: Standards Track Created: 26-Feb-2001 Post-History: 26-Feb-2001 Motivation From time to time, Python makes an incompatible change to the advertised semantics of core language constructs, or changes their accidental (implementation-dependent) behavior in some way. While this is never done capriciously, and is always done with the aim of improving the language over the long term, over the short term it's contentious and disrupting. The "Guidelines for Language Evolution" PEP [1] suggests ways to ease the pain, and this PEP introduces some machinery in support of that. The "Statically Nested Scopes" PEP [2] is the first application, and will be used as an example here. Intent [Note: This is policy, and so should eventually move into PEP 5[1]] When an incompatible change to core language syntax or semantics is being made: 1. The release C that introduces the change does not change the syntax or semantics by default. 2. A future release R is identified in which the new syntax or semantics will be enforced. 3. The mechanisms described in the "Warning Framework" PEP [3] are used to generate warnings, whenever possible, about constructs or operations whose meaning may[4] change in release R. 4. The new future_statement (see below) can be explicitly included in a module M to request that the code in module M use the new syntax or semantics in the current release C. So old code continues to work by default, for at least one release, although it may start to generate new warning messages. Migration to the new syntax or semantics can proceed during that time, using the future_statement to make modules containing it act as if the new syntax or semantics were already being enforced. Note that there is no need to involve the future_statement machinery in new features unless they can break existing code; fully backward- compatible additions can-- and should --be introduced without a corresponding future_statement. Syntax A future_statement is simply a from/import statement using the reserved module name __future__: future_statement: "from" "__future__" "import" feature ["as" name] ("," feature ["as" name])* feature: identifier name: identifier In addition, all future_statments must appear near the top of the module. The only lines that can appear before a future_statement are: + The module docstring (if any). + Comments. + Blank lines. + Other future_statements. Example: """This is a module docstring.""" # This is a comment, preceded by a blank line and followed by # a future_statement. from __future__ import nested_scopes from math import sin from __future__ import alabaster_weenoblobs # compile-time error! # That was an error because preceded by a non-future_statement. Semantics A future_statement is recognized and treated specially at compile time: changes to the semantics of core constructs are often implemented by generating different code. It may even be the case that a new feature introduces new incompatible syntax (such as a new reserved word), in which case the compiler may need to parse the module differently. 
    Such decisions cannot be pushed off until runtime.

    For any given release, the compiler knows which feature names have
    been defined, and raises a compile-time error if a future_statement
    contains a feature not known to it [5].

    The direct runtime semantics are the same as for any import
    statement:  there is a standard module __future__.py, described
    later, and it will be imported in the usual way at the time the
    future_statement is executed.

    The *interesting* runtime semantics depend on the specific feature(s)
    "imported" by the future_statement(s) appearing in the module.

    Note that there is nothing special about the statement:

        import __future__ [as name]

    That is not a future_statement; it's an ordinary import statement,
    with no special semantics or syntax restrictions.

Example

    Consider this code, in file scope.py:

        x = 42
        def f():
            x = 666
            def g():
                print "x is", x
            g()
        f()

    Under 2.0, it prints:

        x is 42

    Nested scopes [2] are being introduced in 2.1.  But under 2.1, it
    still prints

        x is 42

    and also generates a warning.

    In 2.2, and also in 2.1 *if* "from __future__ import nested_scopes"
    is included at the top of scope.py, it prints

        x is 666

Standard Module __future__.py

    Lib/__future__.py is a real module, and serves three purposes:

    1. To avoid confusing existing tools that analyze import statements
       and expect to find the modules they're importing.

    2. To ensure that future_statements run under releases prior to 2.1
       at least yield runtime exceptions (the import of __future__ will
       fail, because there was no module of that name prior to 2.1).

    3. To document when incompatible changes were introduced, and when
       they will be-- or were --made mandatory.  This is a form of
       executable documentation, and can be inspected programmatically
       via importing __future__ and examining its contents.

    Each statement in __future__.py is of the form:

        FeatureName = ReleaseInfo

    ReleaseInfo is a pair of the form:

        (OptionalRelease, MandatoryRelease)

    where, normally, OptionalRelease < MandatoryRelease, and both are
    5-tuples of the same form as sys.version_info:

        (PY_MAJOR_VERSION,  # the 2 in 2.1.0a3; an int
         PY_MINOR_VERSION,  # the 1; an int
         PY_MICRO_VERSION,  # the 0; an int
         PY_RELEASE_LEVEL,  # "alpha", "beta", "candidate" or "final"; string
         PY_RELEASE_SERIAL  # the 3; an int
        )

    OptionalRelease records the first release in which

        from __future__ import FeatureName

    was accepted.

    In the case of MandatoryReleases that have not yet occurred,
    MandatoryRelease predicts the release in which the feature will
    become part of the language.

    Else MandatoryRelease records when the feature became part of the
    language; in releases at or after that, modules no longer need

        from __future__ import FeatureName

    to use the feature in question, but may continue to use such imports.

    MandatoryRelease may also be None, meaning that a planned feature got
    dropped.

    No line will ever be deleted from __future__.py.

    Example line:

        nested_scopes = (2, 1, 0, "beta", 1), (2, 2, 0, "final", 0)

    This means that

        from __future__ import nested_scopes

    will work in all releases at or after 2.1b1, and that nested_scopes
    are intended to be enforced starting in release 2.2.
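
    The pair layout just described can be put to work directly.  For
    example -- a sketch only, assuming __future__.py uses the plain
    tuple-pair form shown above (the shipped module may differ in
    detail) -- a program can ask whether a feature is already enforced
    in the running release:

        import __future__
        import sys

        def already_mandatory(feature_name):
            """Return true (1) if the named feature no longer needs a
            future_statement in the running release."""
            optional, mandatory = getattr(__future__, feature_name)
            if mandatory is None:        # the planned feature was dropped
                return 0
            return sys.version_info >= mandatory

        print already_mandatory("nested_scopes")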
Unresolved Problems:  Runtime Compilation

    Several Python features can compile code during a module's runtime:

    1. The exec statement.
    2. The execfile() function.
    3. The compile() function.
    4. The eval() function.
    5. The input() function.

    Since a module M containing a future_statement naming feature F
    explicitly requests that the current release act like a future
    release with respect to F, any code compiled dynamically from text
    passed to one of these from within M should probably also use the new
    syntax or semantics associated with F.

    This isn't always desired, though.  For example, doctest.testmod(M)
    compiles examples taken from strings in M, and those examples should
    use M's choices, not necessarily the doctest module's choices.  It's
    unclear what to do about this.  The initial release (2.1b1) is likely
    to ignore these issues, saying that each dynamic compilation starts
    over from scratch (i.e., as if no future_statements had been
    specified).

    In any case, a future_statement appearing "near the top" (see Syntax
    above) of text compiled dynamically by an exec, execfile() or
    compile() applies to the code block generated, but has no further
    effect on the module that executes such an exec, execfile() or
    compile().  This can't be used to affect eval() or input(), however,
    because they only allow expression input, and a future_statement is
    not an expression.

Unresolved Problems:  Interactive Shells

    An interactive shell can be seen as an extreme case of runtime
    compilation (see above):  in effect, each statement typed at an
    interactive shell prompt runs a new instance of exec, compile() or
    execfile().

    The initial release (2.1b1) is likely to be such that
    future_statements typed at an interactive shell have no effect beyond
    their runtime meaning as ordinary import statements.

    It would make more sense if a future_statement typed at an
    interactive shell applied to the rest of the shell session's life, as
    if the future_statement had appeared at the top of a module.  Again,
    it's unclear what to do about this.

Questions and Answers

    Q: What about a "from __past__" version, to get back *old* behavior?

    A: Outside the scope of this PEP.  Seems unlikely to the author,
       though.  Write a PEP if you want to pursue it.

    Q: What about incompatibilities due to changes in the Python virtual
       machine?

    A: Outside the scope of this PEP, although PEP 5 [1] suggests a grace
       period there too, and the future_statement may also have a role to
       play there.

    Q: What about incompatibilities due to changes in Python's C API?

    A: Outside the scope of this PEP.

    Q: I want to wrap future_statements in try/except blocks, so I can
       use different code depending on which version of Python I'm
       running.  Why can't I?

    A: Sorry!  try/except is a runtime feature; future_statements are
       primarily compile-time gimmicks, and your try/except happens long
       after the compiler is done.  That is, by the time you do
       try/except, the semantics in effect for the module are already a
       done deal.  Since the try/except wouldn't accomplish what it
       *looks* like it should accomplish, it's simply not allowed.  We
       also want to keep these special statements very easy to find and
       to recognize.

       Note that you *can* import __future__ directly, and use the
       information in it, along with sys.version_info, to figure out
       where the release you're running under stands in relation to a
       given feature's status.

    Q: Going back to the nested_scopes example, what if release 2.2 comes
       along and I still haven't changed my code?  How can I keep the 2.1
       behavior then?

    A: By continuing to use 2.1, and not moving to 2.2 until you do
       change your code.  The purpose of future_statement is to make life
       easier for people who keep current with the latest release in a
       timely fashion.
       We don't hate you if you don't, but your problems are much harder
       to solve, and somebody with those problems will need to write a
       PEP addressing them.  future_statement is aimed at a different
       audience.

Copyright

    This document has been placed in the public domain.

References and Footnotes

    [1] http://python.sourceforge.net/peps/pep-0005.html

    [2] http://python.sourceforge.net/peps/pep-0227.html

    [3] http://python.sourceforge.net/peps/pep-0230.html

    [4] Note that this is "may" and not "will":  better safe than sorry.
        Of course spurious warnings won't be generated when avoidable
        with reasonable cost.

    [5] This ensures that a future_statement run under a release prior to
        the first one in which a given feature is known (but >= 2.1) will
        raise a compile-time error rather than silently do a wrong thing.
        If transported to a release prior to 2.1, a runtime error will be
        raised because of the failure to import __future__ (no such
        module existed in the standard distribution before the 2.1
        release, and the double underscores make it a reserved name).


Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:

From martin at loewis.home.cs.tu-berlin.de  Tue Feb 27 07:52:27 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 27 Feb 2001 07:52:27 +0100
Subject: [Python-Dev] first correct explanation wins a beer...
Message-ID: <200102270652.f1R6qRA00896@mira.informatik.hu-berlin.de>

> My guess: Unicode. Try casting to an 8-bit string and see what happens.

Paul is right, so I guess you owe him a beer...  To see this in more
detail, compare

    popen2.Popen3("/bin/ls").fromchild.readlines()

to

    popen2.Popen3(u"/bin/ls").fromchild.readlines()

Specifically, it seems the problem is

    def _run_child(self, cmd):
        if type(cmd) == type(''):
            cmd = ['/bin/sh', '-c', cmd]

in popen2.  I still think there should be a types.isstring function, and
then this fragment should read

    def _run_child(self, cmd):
        if types.isstring(cmd):
            cmd = ['/bin/sh', '-c', cmd]

Now, if somebody would put "funny characters" into cmd, it would still
give an error, which is then almost silently ignored, due to the

    try:
        os.execvp(cmd[0], cmd)
    finally:
        os._exit(1)

fragment.  Perhaps it would be better to put

    if type(cmd) == types.UnicodeType:
        cmd = cmd.encode("ascii")

into Popen3.__init__, so you'd get an error if you pass those funny
characters.

Regards,
Martin

From ping at lfw.org  Tue Feb 27 08:52:28 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Mon, 26 Feb 2001 23:52:28 -0800 (PST)
Subject: [Python-Dev] pydoc for 2.1b1?
Message-ID: 

Hi!  It's my birthday today, and i think it would be a really awesome
present if pydoc.py were to be accepted into the distribution. :)

(Not just because it's my birthday, though.  I would hope it is worth
accepting based on its own merits.)

The most recent version of pydoc is just a single file, for the easiest
possible setup -- zero installation effort.  You only need the "inspect"
module to run it.  You can find it under the CVS tree at

    nondist/sandbox/help/pydoc.py

or download it from

    http://www.lfw.org/python/pydoc.py
    http://www.lfw.org/python/inspect.py

Among other things, it now handles a few corner cases better, the
formatting is a bit improved, and you can now tell it to write out the
documentation to files on disk if that's your fancy (it can still
display the documentation interactively in your shell or your web
browser).

-- ?!ng
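
(The kind of lookup pydoc automates can be approximated with nothing but
builtins -- a rough illustration only, not pydoc's actual interface, just
a crude stand-in that walks __doc__ strings:)

    def crude_help(obj):
        """Print the first docstring line for obj and each of its members."""
        first_line = lambda doc: (doc or "(no docstring)").split("\n")[0]
        print obj.__name__, "-", first_line(obj.__doc__)
        for name in dir(obj):
            doc = getattr(getattr(obj, name), "__doc__", None)
            print "   ", name, "-", first_line(doc)

    import pickle
    crude_help(pickle)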
From ping at lfw.org  Tue Feb 27 12:53:08 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Tue, 27 Feb 2001 03:53:08 -0800 (PST)
Subject: [Python-Dev] A few small issues
Message-ID: 

Hi.  Here are some things i noticed tonight.

1.  The error message for UnboundLocalError isn't really accurate.

        >>> def f():
        ...     x = 1
        ...     del x
        ...     print x
        ...
        >>> f()
        Traceback (most recent call last):
          File "<stdin>", line 1, in ?
          File "<stdin>", line 4, in f
        UnboundLocalError: local variable 'x' referenced before assignment
        >>>

    It's not a question of the variable being referenced "before
    assignment" -- it's just that the variable is undefined.  Better
    would be a straightforward message such as

        UnboundLocalError: local name 'x' is not defined

    This message would be consistent with the others:

        NameError: name 'x' is not defined
        NameError: global name 'x' is not defined

2.  Why does imp.find_module('') succeed?

        >>> import imp
        >>> imp.find_module('')
        (None, '/home/ping/python/', ('', '', 5))

    I think it should fail with "empty module name" or something similar.

3.  Normally when a script is run, it looks like '' gets prepended to
    sys.path so that the current directory will be searched.  But if the
    script being run is a symlink, the symlink is resolved first to an
    actual file, and the directory containing that file is prepended to
    sys.path.  This leads to strange behaviour:

        localhost[1004]% cat > spam.py
        bacon = 5
        localhost[1005]% cat > /tmp/eggs.py
        import spam
        localhost[1006]% ln -s /tmp/eggs.py .
        localhost[1007]% python eggs.py
        Traceback (most recent call last):
          File "eggs.py", line 1, in ?
            import spam
        ImportError: No module named spam
        localhost[1008]% python
        Python 2.1a2 (#23, Feb 11 2001, 16:26:17)
        [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
        Type "copyright", "credits" or "license" for more information.
        >>> import spam
        >>>

    (whereupon the confused programmer says, "Huh?  If *i* could import
    spam, why couldn't eggs?").  Was this a design decision?  Should it
    be changed to always prepend ''?

4.  As far as i can tell, the curses.wrapper package is inaccessible.
    It's obscured by a curses.wrapper() function in the curses package.

        >>> import curses.wrapper
        >>> curses.wrapper
        >>> import sys
        >>> sys.modules['curses.wrapper']

    I don't see any way around this other than renaming curses.wrapper.

-- ?!ng

"If I have not seen as far as others, it is because giants were
standing on my shoulders."
    -- Hal Abelson

From thomas at xs4all.net  Tue Feb 27 14:10:20 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 27 Feb 2001 14:10:20 +0100
Subject: [Python-Dev] pydoc for 2.1b1?
In-Reply-To: ; from ping@lfw.org on Mon, Feb 26, 2001 at 11:52:28PM -0800
References: 
Message-ID: <20010227141020.B9678@xs4all.nl>

On Mon, Feb 26, 2001 at 11:52:28PM -0800, Ka-Ping Yee wrote:

> It's my birthday today, and i think it would be a really awesome
> present if pydoc.py were to be accepted into the distribution. :)

It has my vote ;) I think pydoc serves two purposes: it's a useful tool,
especially if we can get it accepted by the larger community (get it
mentioned on python-list by non-dev'ers, get it mentioned in books, etc.)
And it serves as a great example on how to do things like introspection.

-- 
Thomas Wouters 

Hi! I'm a .signature virus! copy me into your .signature file to help me
spread!

From guido at digicool.com  Tue Feb 27 03:08:36 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 26 Feb 2001 21:08:36 -0500
Subject: [Python-Dev] pydoc for 2.1b1?
In-Reply-To: Your message of "Mon, 26 Feb 2001 23:52:28 PST."
References: Message-ID: <200102270208.VAA01410@cj20424-a.reston1.va.home.com> > It's my birthday today, and i think it would be a really awesome > present if pydoc.py were to be accepted into the distribution. :) Congratulations, Ping. > (Not just because it's my birthday, though. I would hope it is > worth accepting based on its own merits.) No, it's being accepted because your name is Ping. I just read the first few pages of the script for Monty Python's Meaning of Life, which figures a "machine that goes 'Ping'". That makes your name sufficiently Pythonic. > The most recent version of pydoc is just a single file, for the > easiest possible setup -- zero installation effort. You only need > the "inspect" module to run it. You can find it under the CVS tree > at nondist/sandbox/help/pydoc.py or download it from > > http://www.lfw.org/python/pydoc.py > http://www.lfw.org/python/inspect.py > > Among other things, it now handles a few corner cases better, the > formatting is a bit improved, and you can now tell it to write out > the documentation to files on disk if that's your fancy (it can > still display the documentation interactively in your shell or your > web browser). You can check these into the regular tree. I guess they both go into the Lib directory, right? Make sure pydoc.py is checked in with +x permissions. I'll see if we can import pydoc.help into __builtin__ in interactive mode. Now let's paaaartaaaay! --Guido van Rossum (home page: http://www.python.org/~guido/) From akuchlin at mems-exchange.org Tue Feb 27 16:02:28 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Tue, 27 Feb 2001 10:02:28 -0500 Subject: [Python-Dev] A few small issues In-Reply-To: ; from ping@lfw.org on Tue, Feb 27, 2001 at 03:53:08AM -0800 References: Message-ID: <20010227100228.A17362@ute.cnri.reston.va.us> On Tue, Feb 27, 2001 at 03:53:08AM -0800, Ka-Ping Yee wrote: >4. As far as i can tell, the curses.wrapper package is inaccessible. > It's obscured by a curses.wrapper() function in the curses package. The function in the packages results from 'from curses.wrapper import wrapper', so there's really no need to import curses.wrapper directly. Hmmm... but the module is documented in the library reference. I could move the definition of wrapper() into the __init__.py and change the docs, if that's desired. --amk From skip at mojam.com Tue Feb 27 16:48:14 2001 From: skip at mojam.com (Skip Montanaro) Date: Tue, 27 Feb 2001 09:48:14 -0600 (CST) Subject: [Python-Dev] pydoc for 2.1b1? In-Reply-To: <20010227141020.B9678@xs4all.nl> References: <20010227141020.B9678@xs4all.nl> Message-ID: <15003.52286.800752.317549@beluga.mojam.com> Thomas> [pydoc] has my vote ;) Mine too. S From akuchlin at mems-exchange.org Tue Feb 27 16:59:32 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Tue, 27 Feb 2001 10:59:32 -0500 Subject: [Python-Dev] pydoc for 2.1b1? In-Reply-To: <200102270208.VAA01410@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Feb 26, 2001 at 09:08:36PM -0500 References: <200102270208.VAA01410@cj20424-a.reston1.va.home.com> Message-ID: <20010227105932.C17362@ute.cnri.reston.va.us> On Mon, Feb 26, 2001 at 09:08:36PM -0500, Guido van Rossum wrote: >You can check these into the regular tree. I guess they both go into >the Lib directory, right? Make sure pydoc.py is checked in with +x >permissions. I'll see if we can import pydoc.help into __builtin__ in >interactive mode. What about installing it as a script, into /bin, so it's also available at the command line? 
The setup.py script could do it, or the Makefile could handle it. --amk From skip at mojam.com Tue Feb 27 17:00:12 2001 From: skip at mojam.com (Skip Montanaro) Date: Tue, 27 Feb 2001 10:00:12 -0600 (CST) Subject: [Python-Dev] editing FAQ? In-Reply-To: References: <15002.48386.689975.913306@beluga.mojam.com> Message-ID: <15003.53004.840361.997254@beluga.mojam.com> Tim> [Skip Montanaro] >> Seems like maybe the FAQ needs some touchup. Is it still under the >> control of the FAQ wizard (what's the password)? Tim> The password is Tim> Spam Alas, http://www.python.org/cgi-bin/faqwiz just times out. Am I barking up the wrong virtual tree? Skip From tim.one at home.com Tue Feb 27 17:23:23 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 11:23:23 -0500 Subject: [Python-Dev] editing FAQ? In-Reply-To: <15003.53004.840361.997254@beluga.mojam.com> Message-ID: [Skip Montanaro] > Alas, http://www.python.org/cgi-bin/faqwiz just times out. Am I barking up > the wrong virtual tree? Should work; agree it doesn't; have reported it to webmaster. From tim.one at home.com Tue Feb 27 17:46:21 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 11:46:21 -0500 Subject: [Python-Dev] A few small issues In-Reply-To: Message-ID: [Ka-Ping Yee] > Hi. Here are some things i noticed tonight. Ping (& everyone else), please submit bugs on SourceForge. Python-Dev is a black hole for this kind of thing: if nobody addresses your reports RIGHT NOW (unlikely in a release week), they'll be lost forever. From guido at digicool.com Tue Feb 27 06:04:28 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 00:04:28 -0500 Subject: [Python-Dev] pydoc for 2.1b1? In-Reply-To: Your message of "Tue, 27 Feb 2001 10:59:32 EST." <20010227105932.C17362@ute.cnri.reston.va.us> References: <200102270208.VAA01410@cj20424-a.reston1.va.home.com> <20010227105932.C17362@ute.cnri.reston.va.us> Message-ID: <200102270504.AAA02105@cj20424-a.reston1.va.home.com> > On Mon, Feb 26, 2001 at 09:08:36PM -0500, Guido van Rossum wrote: > >You can check these into the regular tree. I guess they both go into > >the Lib directory, right? Make sure pydoc.py is checked in with +x > >permissions. I'll see if we can import pydoc.help into __builtin__ in > >interactive mode. > > What about installing it as a script, into /bin, so it's also > available at the command line? The setup.py script could do it, or > the Makefile could handle it. Sounds like a good idea. (Maybe idle can also be installed if Tk is found.) Go for it. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Tue Feb 27 06:05:03 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 00:05:03 -0500 Subject: [Python-Dev] editing FAQ? In-Reply-To: Your message of "Tue, 27 Feb 2001 10:00:12 CST." <15003.53004.840361.997254@beluga.mojam.com> References: <15002.48386.689975.913306@beluga.mojam.com> <15003.53004.840361.997254@beluga.mojam.com> Message-ID: <200102270505.AAA02119@cj20424-a.reston1.va.home.com> > Tim> The password is > > Tim> Spam > > Alas, http://www.python.org/cgi-bin/faqwiz just times out. Am I barking up > the wrong virtual tree? Try again. I've rebooted the server. --Guido van Rossum (home page: http://www.python.org/~guido/) From skip at mojam.com Tue Feb 27 18:10:43 2001 From: skip at mojam.com (Skip Montanaro) Date: Tue, 27 Feb 2001 11:10:43 -0600 (CST) Subject: [Python-Dev] The more I think about __all__ ... Message-ID: <15003.57235.144454.826610@beluga.mojam.com> ... 
the more I think I should just yank out all those definitions. I've already been bitten by an incomplete __all__ list. I think the only people who can realistically create them are the authors of the modules. In addition, maintaining them is going to be as difficult as keeping any other piece of documentation up-to-date. Any other thoughts? BDFL - would you care to pronounce? Skip From skip at mojam.com Tue Feb 27 18:19:23 2001 From: skip at mojam.com (Skip Montanaro) Date: Tue, 27 Feb 2001 11:19:23 -0600 (CST) Subject: [Python-Dev] editing FAQ? In-Reply-To: <200102270505.AAA02119@cj20424-a.reston1.va.home.com> References: <15002.48386.689975.913306@beluga.mojam.com> <15003.53004.840361.997254@beluga.mojam.com> <200102270505.AAA02119@cj20424-a.reston1.va.home.com> Message-ID: <15003.57755.361084.441490@beluga.mojam.com> >> Alas, http://www.python.org/cgi-bin/faqwiz just times out. Am I >> barking up the wrong virtual tree? Guido> Try again. I've rebooted the server. Okay, progress has been made. The above URL yielded a 404 error. Obviously I guessed the wrong URL for the faqwiz interface. I did eventually find it at http://www.python.org/cgi-bin/faqw.py Thanks, Skip From guido at digicool.com Tue Feb 27 06:31:02 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 00:31:02 -0500 Subject: [Python-Dev] The more I think about __all__ ... In-Reply-To: Your message of "Tue, 27 Feb 2001 11:10:43 CST." <15003.57235.144454.826610@beluga.mojam.com> References: <15003.57235.144454.826610@beluga.mojam.com> Message-ID: <200102270531.AAA02301@cj20424-a.reston1.va.home.com> > ... the more I think I should just yank out all those definitions. I've > already been bitten by an incomplete __all__ list. I think the only people > who can realistically create them are the authors of the modules. In > addition, maintaining them is going to be as difficult as keeping any other > piece of documentation up-to-date. > > Any other thoughts? BDFL - would you care to pronounce? I've always been lukewarm about the desire to add __all__ to every module under the sun. But i'm also lukewarm about ripping it all out now that it's done. So, no pronouncement from me unless I get more feedback on how harmful it's been so far. Sorry... --Guido van Rossum (home page: http://www.python.org/~guido/) From jeremy at alum.mit.edu Tue Feb 27 18:26:34 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 27 Feb 2001 12:26:34 -0500 (EST) Subject: [Python-Dev] The more I think about __all__ ... In-Reply-To: <200102270531.AAA02301@cj20424-a.reston1.va.home.com> References: <15003.57235.144454.826610@beluga.mojam.com> <200102270531.AAA02301@cj20424-a.reston1.va.home.com> Message-ID: <15003.58186.586724.972984@w221.z064000254.bwi-md.dsl.cnc.net> It seems to be to be a compatibility issue. If a module has an __all__, then from module import * may behave differently in Python 2.1 than it did in Python 2.0. The only problem of this sort I have encountered is with pickle, but I seldom use import *. The problem ends up being obscure to debug because you get a NameError. Then you hunt around in the middle and see that the name is never bound. Then you see that there is an import * -- and hopefully only one! Then you think, "Didn't Python grow __all__ enforcement in 2.1?" And you start hunting around for that name in the import module and check to see if it's in __all__. 
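
(What makes this sting is that __all__ silently narrows what "from module
import *" binds, so a name left off the list only surfaces much later as a
NameError.  A small self-contained illustration -- the module here is
fabricated on the fly just for the demo; the real-world case above was
pickle:)

    import new, sys

    mod = new.module("fake_pickle")
    mod.__all__ = ["dumps"]                 # "loads" accidentally left off
    mod.dumps = lambda obj: repr(obj)
    mod.loads = lambda s: eval(s)
    sys.modules["fake_pickle"] = mod

    ns = {}
    exec "from fake_pickle import *" in ns
    print ns.has_key("dumps"), ns.has_key("loads")    # prints 1 0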
Jeremy From guido at digicool.com Tue Feb 27 06:48:05 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 00:48:05 -0500 Subject: [Python-Dev] The more I think about __all__ ... In-Reply-To: Your message of "Tue, 27 Feb 2001 12:26:34 EST." <15003.58186.586724.972984@w221.z064000254.bwi-md.dsl.cnc.net> References: <15003.57235.144454.826610@beluga.mojam.com> <200102270531.AAA02301@cj20424-a.reston1.va.home.com> <15003.58186.586724.972984@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <200102270548.AAA02442@cj20424-a.reston1.va.home.com> > It seems to be to be a compatibility issue. If a module has an > __all__, then from module import * may behave differently in Python > 2.1 than it did in Python 2.0. The only problem of this sort I have > encountered is with pickle, but I seldom use import *. This suggests a compatibility test that Skip can easily write. For each module that has an __all__ in 2.1, invoke python 2.0 to see what names are imported by import * for that module in 2.0, and see if there are differences. Then look carefully at the differences and see if they are acceptable. --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Tue Feb 27 19:56:24 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 13:56:24 -0500 Subject: [Python-Dev] The more I think about __all__ ... In-Reply-To: <200102270531.AAA02301@cj20424-a.reston1.va.home.com> Message-ID: [Guido van Rossum] > ... > So, no pronouncement from me unless I get more feedback on how harmful > it's been so far. Sorry... Doesn't matter much to me. There are still spurious regrtest.py failures due to it under Windows when using -r; this has to do with that importing modules that don't exist on Windows leave behind incomplete module objects that fool test___all__.py. E.g., "regrtest test_pty test___all__" on Windows yields a bizarre failure. Tried fixing that last night, but it somehow caused test_sre(!) to fail instead, and I gave up for the night. From tim.one at home.com Tue Feb 27 20:27:12 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 14:27:12 -0500 Subject: [Python-Dev] Case-sensitive import Message-ID: I'm still trying to sort this out. Some concerns and questions: I don't like the new MatchFilename, because it triggers on *all* platforms that #define HAVE_DIRENT_H. Anyone, doesn't that trigger on straight Linux systems too (all I know is that it's part of the Single UNIX Specification)? I don't like it because it implements a woefully inefficient algorithm: it cycles through the entire directory looking for a case-sensitive match. But there can be hundreds of .py files in a directory, and on average it will need to look at half of them, while if this triggers on straight Linux there's no need to look at *any* of them there. I also don't like it because it apparently triggers on Cygwin too but the code that calls it doesn't cater to that Cygwin possibly *should* be defining ALTSEP as well as SEP. Would rather dump MatchFilename and rewrite in terms of the old check_case (which should run much quicker, and already comes in several appropriate platform-aware versions -- and I clearly minimize the chance of breakage if I stick to that time-tested code). Steven, there is a "#ifdef macintosh" version of check_case already. Will that or won't that work correctly on your variant of Mac? If not, would you please supply a version that does (along with the #ifdef'ery needed to recognize your Mac variant)? 
Jason, I *assume* that the existing "#if defined(MS_WIN32) || defined(__CYGWIN__)" version of check_case works already for you. Scream if that's wrong. Steven and Jack, does getenv() work on both your flavors of Mac? I want to make PYTHONCASEOK work for you too. From tim.one at home.com Tue Feb 27 20:34:28 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 14:34:28 -0500 Subject: [Python-Dev] editing FAQ? In-Reply-To: Message-ID: http://www.python.org/cgi-bin/faqw.py is working again. Password is Spam. The http://www.python.org/cgi-bin/faqwiz you mentioned now yields a 404 (File Not Found). > [Skip Montanaro] >> Alas, http://www.python.org/cgi-bin/faqwiz just times out. Am I >> barking up the wrong virtual tree? > > Should work; agree it doesn't; have reported it to webmaster. > From akuchlin at mems-exchange.org Tue Feb 27 20:50:44 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Tue, 27 Feb 2001 14:50:44 -0500 Subject: [Python-Dev] pydoc for 2.1b1? In-Reply-To: <200102270504.AAA02105@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Tue, Feb 27, 2001 at 12:04:28AM -0500 References: <200102270208.VAA01410@cj20424-a.reston1.va.home.com> <20010227105932.C17362@ute.cnri.reston.va.us> <200102270504.AAA02105@cj20424-a.reston1.va.home.com> Message-ID: <20010227145044.B29979@ute.cnri.reston.va.us> On Tue, Feb 27, 2001 at 12:04:28AM -0500, Guido van Rossum wrote: >Sounds like a good idea. (Maybe idle can also be installed if Tk is >found.) Go for it. Will do. Is there anything else in Tools/ or Lib/ that could be usefully installed, such as tabnanny or whatever? I can't think of anything that would be really burningly important, so I'll just take care of pydoc. Re: IDLE: Martin already contributed a Tools/idle/setup.py, but I'm not sure how to trigger it recursively. Perhaps a configure option --install-idle, which controls an idleinstall target in the Makefile. Making it only install if Tkinter is compiled seems icky; I don't see how to do that cleanly. Martin, any suggestions? --amk From guido at digicool.com Tue Feb 27 09:08:13 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 03:08:13 -0500 Subject: [Python-Dev] pydoc for 2.1b1? In-Reply-To: Your message of "Tue, 27 Feb 2001 14:50:44 EST." <20010227145044.B29979@ute.cnri.reston.va.us> References: <200102270208.VAA01410@cj20424-a.reston1.va.home.com> <20010227105932.C17362@ute.cnri.reston.va.us> <200102270504.AAA02105@cj20424-a.reston1.va.home.com> <20010227145044.B29979@ute.cnri.reston.va.us> Message-ID: <200102270808.DAA16485@cj20424-a.reston1.va.home.com> > On Tue, Feb 27, 2001 at 12:04:28AM -0500, Guido van Rossum wrote: > >Sounds like a good idea. (Maybe idle can also be installed if Tk is > >found.) Go for it. > > Will do. Is there anything else in Tools/ or Lib/ that could be > usefully installed, such as tabnanny or whatever? I can't think of > anything that would be really burningly important, so I'll just take > care of pydoc. Offhand, not -- idle and pydoc seem to be overwhelmingly more important to me than anything else... > Re: IDLE: Martin already contributed a Tools/idle/setup.py, but I'm > not sure how to trigger it recursively. Perhaps a configure option > --install-idle, which controls an idleinstall target in the Makefile. > Making it only install if Tkinter is compiled seems icky; I don't see > how to do that cleanly. Martin, any suggestions? I have to admit that I don't know what IDLE's setup.py does... 
:-(

--Guido van Rossum (home page: http://www.python.org/~guido/)

From akuchlin at mems-exchange.org  Tue Feb 27 21:55:45 2001
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Tue, 27 Feb 2001 15:55:45 -0500
Subject: [Python-Dev] Patch uploads broken
Message-ID: 

Uploading of patches seems to be broken on SourceForge at the moment;
even if you fill in the file upload form, its contents seem to just be
ignored.  Reported to SF as support req #404688:

    http://sourceforge.net/tracker/?func=detail&aid=404688&group_id=1&atid=200001

--amk

From tim.one at home.com  Tue Feb 27 22:15:53 2001
From: tim.one at home.com (Tim Peters)
Date: Tue, 27 Feb 2001 16:15:53 -0500
Subject: [Python-Dev] New test_inspect fails under -O
Message-ID: 

I assume this is an x-platform failure.  Don't have time to look into it
myself right now.

    C:\Code\python\dist\src\PCbuild>python -O ../lib/test/test_inspect.py
    Traceback (most recent call last):
      File "../lib/test/test_inspect.py", line 172, in ?
        'trace() row 1')
      File "../lib/test/test_inspect.py", line 70, in test
        raise TestFailed, message % args
    test_support.TestFailed: trace() row 1

    C:\Code\python\dist\src\PCbuild>

From jeremy at alum.mit.edu  Tue Feb 27 22:38:27 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Tue, 27 Feb 2001 16:38:27 -0500 (EST)
Subject: [Python-Dev] one more restriction for from __future__ import ...
Message-ID: <15004.7763.994510.90567@w221.z064000254.bwi-md.dsl.cnc.net>

> In addition, all future_statements must appear near the top of the
> module.  The only lines that can appear before a future_statement are:
>
> + The module docstring (if any).
> + Comments.
> + Blank lines.
> + Other future_statements.

I would like to add another restriction:

    A future_statement must appear on a line by itself.  It is not
    legal to combine a future_statement with any other statement
    using a semicolon.

It would be a bear to implement error handling for cases like this:

    from __future__ import a; import b; from __future__ import c

Jeremy

From pedroni at inf.ethz.ch  Tue Feb 27 22:54:43 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Tue, 27 Feb 2001 22:54:43 +0100 (MET)
Subject: [Python-Dev] one more restriction for from __future__ import ...
Message-ID: <200102272154.WAA25543@core.inf.ethz.ch>

Hi.

> > In addition, all future_statements must appear near the top of the
> > module.  The only lines that can appear before a future_statement are:
> >
> > + The module docstring (if any).
> > + Comments.
> > + Blank lines.
> > + Other future_statements.
>
> I would like to add another restriction:
>
>     A future_statement must appear on a line by itself.  It is not
>     legal to combine a future_statement with any other statement
>     using a semicolon.
>
> It would be a bear to implement error handling for cases like this:
>
>     from __future__ import a; import b; from __future__ import c

Will the error be unclear for the user, or is there another problem?
In jython I get an abstract syntax tree from the parser, so it is
difficult to distinguish the ; from true newlines ;)

regards, Samuele

From guido at digicool.com  Tue Feb 27 11:06:18 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 27 Feb 2001 05:06:18 -0500
Subject: [Python-Dev] one more restriction for from __future__ import ...
In-Reply-To: Your message of "Tue, 27 Feb 2001 16:38:27 EST."
<15004.7763.994510.90567@w221.z064000254.bwi-md.dsl.cnc.net> References: <15004.7763.994510.90567@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <200102271006.FAA18760@cj20424-a.reston1.va.home.com> > I would like to add another restriction: > > A future_statement must appear on a line by itself. It is not > legal to combine a future_statement without any other statement > using a semicolon. > > It would be a bear to implement error handling for cases like this: > > from __future__ import a; import b; from __future__ import c Really?!? Why? Isn't it straightforward to check that everything you encounter in a left-to-right leaf scan of the parse tree is either a future statement or a docstring until you encounter a non-future? --Guido van Rossum (home page: http://www.python.org/~guido/) From akuchlin at mems-exchange.org Tue Feb 27 23:34:06 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Tue, 27 Feb 2001 17:34:06 -0500 Subject: [Python-Dev] Re: Patch uploads broken Message-ID: The SourceForge admins couldn't replicate the patch upload problem, and managed to attach a file to the Python bug report in question, yet when I try it, it still fails for me. So, a question for this list: has uploading patches or other files been working for you recently, particularly today? Maybe with more data, we can see a pattern (browser version, SSL/non-SSL, cluefulness of user, ...). If you want to try it, feel free to try attaching a file to bug #404680: https://sourceforge.net/tracker/index.php?func=detail&aid=404680&group_id=5470&atid=305470 ) The SF admin request for this problem is at http://sourceforge.net/tracker/?func=detail&atid=100001&aid=404688&group_id=1, but it's better if I collect the results and summarize them in a single comment. --amk From michel at digicool.com Tue Feb 27 23:58:56 2001 From: michel at digicool.com (Michel Pelletier) Date: Tue, 27 Feb 2001 14:58:56 -0800 (PST) Subject: [Python-Dev] Re: Patch uploads broken In-Reply-To: Message-ID: Andrew, FYI, we have seen the same problem on the SF zope-book patch tracker. I have a user who can reproduce it, like you. Would you like me to get you more info? -Michel On Tue, 27 Feb 2001, Andrew Kuchling wrote: > The SourceForge admins couldn't replicate the patch upload problem, > and managed to attach a file to the Python bug report in question, yet > when I try it, it still fails for me. So, a question for this list: > has uploading patches or other files been working for you recently, > particularly today? Maybe with more data, we can see a pattern > (browser version, SSL/non-SSL, cluefulness of user, ...). > > If you want to try it, feel free to try attaching a file to bug #404680: > https://sourceforge.net/tracker/index.php?func=detail&aid=404680&group_id=5470&atid=305470 > ) > > The SF admin request for this problem is at > http://sourceforge.net/tracker/?func=detail&atid=100001&aid=404688&group_id=1, > but it's better if I collect the results and summarize them in a > single comment. > > --amk > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > From tim.one at home.com Wed Feb 28 00:06:59 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 18:06:59 -0500 Subject: [Python-Dev] More std test breakage Message-ID: test_inspect.py still failing under -O; probably all platforms. 
New failure in test___all__.py, *possibly* specific to Windows, but I don't see any "termios.py" anywhere so hard to believe it could be working anywhere else either: C:\Code\python\dist\src\PCbuild>python ../lib/test/test___all__.py Traceback (most recent call last): File "../lib/test/test___all__.py", line 78, in ? check_all("getpass") File "../lib/test/test___all__.py", line 10, in check_all exec "import %s" % modname in names File " ", line 1, in ? File "c:\code\python\dist\src\lib\getpass.py", line 106, in ? import termios NameError: Case mismatch for module name termios (filename c:\code\python\dist\src\lib\TERMIOS.py) C:\Code\python\dist\src\PCbuild> From tommy at ilm.com Wed Feb 28 00:22:16 2001 From: tommy at ilm.com (Flying Cougar Burnette) Date: Tue, 27 Feb 2001 15:22:16 -0800 (PST) Subject: [Python-Dev] to whoever made the termios changes... Message-ID: <15004.13862.351574.668648@mace.lucasdigital.com> I've already deleted the check-in mail and forgot who it was! Hopefully you're listening... (Fred, maybe?) I just did a cvs update and am no getting this when compiling on irix65: cc -O -OPT:Olimit=0 -I. -I/usr/u0/tommy/pycvs/python/dist/src/./Include -IInclude/ -I/usr/local/include -c /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c -o build/temp.irix-6.5-2.1/termios.o cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 320 The identifier "B230400" is undefined. {"B230400", B230400}, ^ cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 321 The identifier "CBAUDEX" is undefined. {"CBAUDEX", CBAUDEX}, ^ cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 399 The identifier "CRTSCTS" is undefined. {"CRTSCTS", CRTSCTS}, ^ cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 432 The identifier "VSWTC" is undefined. {"VSWTC", VSWTC}, ^ 4 errors detected in the compilation of "/usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c". time for an #ifdef? From jeremy at alum.mit.edu Wed Feb 28 00:27:30 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 27 Feb 2001 18:27:30 -0500 (EST) Subject: [Python-Dev] one more restriction for from __future__ import ... In-Reply-To: <200102271006.FAA18760@cj20424-a.reston1.va.home.com> References: <15004.7763.994510.90567@w221.z064000254.bwi-md.dsl.cnc.net> <200102271006.FAA18760@cj20424-a.reston1.va.home.com> Message-ID: <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "GvR" == Guido van Rossum writes: >> I would like to add another restriction: >> >> A future_statement must appear on a line by itself. It is not >> legal to combine a future_statement without any other statement >> using a semicolon. >> >> It would be a bear to implement error handling for cases like >> this: >> >> from __future__ import a; import b; from __future__ import c GvR> Really?!? Why? Isn't it straightforward to check that GvR> everything you encounter in a left-to-right leaf scan of the GvR> parse tree is either a future statement or a docstring until GvR> you encounter a non-future? It's not hard to find legal future statements. It's hard to find illegal ones. The pass to find future statements exits as soon as it finds something that isn't a doc string or a future. The symbol table pass detects illegal future statements by comparing the current line number against the line number of the last legal futre statement. 
If a mixture of legal and illegal future statements occurs on the same line, that test fails. If I want to be more precise, I can think of a couple of ways to figure out if a particular future statement occurs after the first non-import statement. Neither is particularly pretty because the parse tree is so deep by the time you get to the import statement. One possibility is to record the index of each small_stmt that occurs as a child of a simple_stmt in the symbol table. The future statement pass can record the offset of the first non-legal small_stmt when it occurs as part of an extend simple_stmt. The symbol table would also need to record the current index of each small_stmt. To implement this, I've got to touch a lot of code. The other possibility is to record the address for the first statement following the last legal future statement. The symbol table pass could test each node it visits and set a flag when this node is visited a second time. Any future statement found when the flag is set is an error. I'm concerned that it will be difficult to guarantee that this node is always checked, because the loop that walks the tree frequently dispatches to helper functions. I think each helper function would need to test. Do you have any other ideas? I haven't though about this for more than 20 minutes and was hoping to avoid more time invested on the matter. If it's a problem for Jython, though, we'll need to figure something out. Perhaps the effect of multiple future statements on a single line could be undefined, which would allow Python to raise an error and Jython to ignore the error. Not ideal, but expedient. Jeremy From ping at lfw.org Wed Feb 28 00:34:17 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Tue, 27 Feb 2001 15:34:17 -0800 (PST) Subject: [Python-Dev] pydoc for 2.1b1? In-Reply-To: <200102270208.VAA01410@cj20424-a.reston1.va.home.com> Message-ID: On Mon, 26 Feb 2001, Guido van Rossum wrote: > > No, it's being accepted because your name is Ping. Hooray! Thank you, Guido. :) > Now let's paaaartaaaay! You said it, brother. Welcome to the Year of the Snake. -- ?!ng From skip at mojam.com Wed Feb 28 00:39:02 2001 From: skip at mojam.com (Skip Montanaro) Date: Tue, 27 Feb 2001 17:39:02 -0600 (CST) Subject: [Python-Dev] More std test breakage In-Reply-To: References: Message-ID: <15004.14998.720791.657513@beluga.mojam.com> Tim> test_inspect.py still failing under -O; probably all platforms. Tim> New failure in test___all__.py, *possibly* specific to Windows, but Tim> I don't see any "termios.py" anywhere so hard to believe it could Tim> be working anywhere else either: ... NameError: Case mismatch for module name termios (filename c:\code\python\dist\src\lib\TERMIOS.py) Try cvs update. Lib/getpass.py shouldn't be trying to import TERMIOS anymore. The case mismatch you're seeing is because it can find the now defunct TERMIOS.py module but you obviously don't have the termios module. Skip From skip at mojam.com Wed Feb 28 00:48:04 2001 From: skip at mojam.com (Skip Montanaro) Date: Tue, 27 Feb 2001 17:48:04 -0600 (CST) Subject: [Python-Dev] one more restriction for from __future__ import ... 
In-Reply-To: <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> References: <15004.7763.994510.90567@w221.z064000254.bwi-md.dsl.cnc.net> <200102271006.FAA18760@cj20424-a.reston1.va.home.com> <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <15004.15540.643665.504819@beluga.mojam.com> Jeremy> The symbol table pass detects illegal future statements by Jeremy> comparing the current line number against the line number of the Jeremy> last legal futre statement. Why not just add a flag (default false at the start of the compilation) to the compiling struct that tells you if you've seen a future-killer statement already? Then if you see a future statement the compiler can whine. Skip From skip at mojam.com Wed Feb 28 00:56:47 2001 From: skip at mojam.com (Skip Montanaro) Date: Tue, 27 Feb 2001 17:56:47 -0600 (CST) Subject: [Python-Dev] test_symtable failing on Linux Message-ID: <15004.16063.325105.836576@beluga.mojam.com> test_symtable is failing for me: % ./python ../Lib/test/test_symtable.py Traceback (most recent call last): File "../Lib/test/test_symtable.py", line 7, in ? verify(symbols[0].name == "global") TypeError: unsubscriptable object Just cvs up'd about ten minutes ago. Skip From jeremy at alum.mit.edu Wed Feb 28 00:50:30 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 27 Feb 2001 18:50:30 -0500 (EST) Subject: [Python-Dev] one more restriction for from __future__ import ... In-Reply-To: <15004.15540.643665.504819@beluga.mojam.com> References: <15004.7763.994510.90567@w221.z064000254.bwi-md.dsl.cnc.net> <200102271006.FAA18760@cj20424-a.reston1.va.home.com> <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> <15004.15540.643665.504819@beluga.mojam.com> Message-ID: <15004.15686.104843.418585@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "SM" == Skip Montanaro writes: Jeremy> The symbol table pass detects illegal future statements by Jeremy> comparing the current line number against the line number of Jeremy> the last legal futre statement. SM> Why not just add a flag (default false at the start of the SM> compilation) to the compiling struct that tells you if you've SM> seen a future-killer statement already? Then if you see a SM> future statement the compiler can whine. Almost everything is a future-killer statement, only doc strings and other future statements are allowed. I would have to add a st->st_future_killed = 1 for almost every node type. There are also a number of nodes (about ten) that can contain future statements or doc strings or future killers. As a result, I'd have to add special cases for them, too. Jeremy From jeremy at alum.mit.edu Wed Feb 28 00:51:37 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 27 Feb 2001 18:51:37 -0500 (EST) Subject: [Python-Dev] test_symtable failing on Linux In-Reply-To: <15004.16063.325105.836576@beluga.mojam.com> References: <15004.16063.325105.836576@beluga.mojam.com> Message-ID: <15004.15753.795849.695997@w221.z064000254.bwi-md.dsl.cnc.net> This is a problem I don't know how to resolve; perhaps Andrew or Neil can. _symtablemodule.so depends on symtable.h, but setup.py doesn't know that. If you rebuild the .so, it should work. third-person-to-hit-this-problem-ly y'rs, Jeremy From greg at cosc.canterbury.ac.nz Wed Feb 28 01:01:53 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Wed, 28 Feb 2001 13:01:53 +1300 (NZDT) Subject: [Python-Dev] one more restriction for from __future__ import ... 
In-Reply-To: <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <200102280001.NAA02075@s454.cosc.canterbury.ac.nz> > The pass to find future statements exits as soon as it > finds something that isn't a doc string or a future. Well, don't do that, then. Have the find_future_statements pass keep going and look for *illegal* future statements as well. Then, subsequent passes can just ignore any import that looks like a future statement, because it will already have been either processed or reported as an error. Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg at cosc.canterbury.ac.nz +--------------------------------------+ From sdm7g at virginia.edu Wed Feb 28 01:03:56 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Tue, 27 Feb 2001 19:03:56 -0500 (EST) Subject: [Python-Dev] Case-sensitive import In-Reply-To: Message-ID: On Tue, 27 Feb 2001, Tim Peters wrote: > I don't like the new MatchFilename, because it triggers on *all* platforms > that #define HAVE_DIRENT_H. I mentioned this when I originally submitted the patch. The intent was that it be *able* to compile on any unix-like platform -- in particular, I was thinking LinuxPPC was the other case I could think of where someone might want to use a HFS+ filesystem - but that Darwin/MacOSX was likely to be the only system in which that was the default. > Anyone, doesn't that trigger on straight Linux systems too (all I know is > that it's part of the Single UNIX Specification)? Yes. Barry added the _DIRENT_HAVE_D_NAMELINE to handle a difference in the linux dirent structs. ( I'm not sure if he caught my initial statement of intent either, but then the discussion veered into whether the patch should have been accepted at all, and then into the discussion of a general solution... ) I'm not happy with the ineffeciency either, but, as I noted, I didn't expect that it would be enabled by default elsewhere when I submitted it. ( And my goal for OSX was just to have a version that builds and doesn't crash much, so searching for a more effecient solution was going to be the next project. ) > Would rather dump MatchFilename and rewrite in terms of the old check_case > (which should run much quicker, and already comes in several appropriate > platform-aware versions -- and I clearly minimize the chance of breakage if I > stick to that time-tested code). The reason I started from scratch, you might recall, was that before I understood that the Windows semantics was intended to be different, I tried adding a Mac version of check_case, and it didn't do what I wanted. But that wasn't a problem with any of the existing check_case functions, but was due to how check_case was used. > Steven, there is a "#ifdef macintosh" version of check_case already. Will > that or won't that work correctly on your variant of Mac? If not, would you > please supply a version that does (along with the #ifdef'ery needed to > recognize your Mac variant)? One problem is that I'm aiming for a version that would work on both the open source Darwin distribution ( which is mach + BSD + some Apple extensions: Objective-C, CoreFoundation, Frameworks, ... but not most of the macosx Carbon and Cocoa libraries. ) and the full MacOSX. Thus the reason for a unix only implementation -- the info may be more easily available via MacOS FSSpec's but that's not available on vanilla Darwin. 
( And I can't, for the life of me, thing of an effecient unix implementation -- UNIX file system API's + HFS+ filesystem semantics may be an unfortunate mixture! ) In other words: I can rename the current version to check_case and fix the args to match. (Although, I recall that the args to check_case were rather more awkward to handle, but I'll have to look again. ) It also probably shouldn't be "#ifdef macintosh" either, but that's a thread in itself! > Steven and Jack, does getenv() work on both your flavors of Mac? I want to > make PYTHONCASEOK work for you too. getenv() works on OSX (it's the BSD unix implementation). ( I *think* that Jack has the MacPython get the variables from Pythoprefs file settings. ) -- Steve Majewski From guido at digicool.com Tue Feb 27 13:12:18 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 07:12:18 -0500 Subject: [Python-Dev] Re: Patch uploads broken In-Reply-To: Your message of "Tue, 27 Feb 2001 17:34:06 EST." References: Message-ID: <200102271212.HAA19298@cj20424-a.reston1.va.home.com> > If you want to try it, feel free to try attaching a file to bug #404680: > https://sourceforge.net/tracker/index.php?func=detail&aid=404680&group_id=5470&atid=305470 > ) > > The SF admin request for this problem is at > http://sourceforge.net/tracker/?func=detail&atid=100001&aid=404688&group_id=1, > but it's better if I collect the results and summarize them in a > single comment. My conclusion: the file upload is refused iff the comment is empty -- in other words the complaint about an empty comment is coded wrongly and should only occur when the comment is empty *and* no file is uploaded. Or maybe they want you to add a comment with your file -- that's fine too, but the error isn't very clear. http or https made no difference. I used NS 4.72 on Linux; Tim used IE and had the same results. --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Wed Feb 28 01:06:55 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 19:06:55 -0500 Subject: [Python-Dev] More std test breakage In-Reply-To: <15004.14998.720791.657513@beluga.mojam.com> Message-ID: > Try cvs update. Already had. > Lib/getpass.py shouldn't be trying to import TERMIOS anymore. It isn't. It's trying to import (lowercase) termios. > The case mismatch you're seeing is because it can find the now defunct > TERMIOS.py module but you obviously don't have the termios module. Indeed I do not. Ah. But it *used* to import (uppercase) TERMIOS. That makes this a Windows thing: when it tries to import termios, it still *finds* TERMIOS.py, and on Windows that raises a NameError (instead of the ImportError you'd hope to get, if you *had* to get any error at all out of mismatching case). So this should go away, and get replaced by an ImportError, when I check in the "case-sensitive import" patch for Windows. Thanks for the nudge! From guido at digicool.com Tue Feb 27 13:21:11 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 07:21:11 -0500 Subject: [Python-Dev] More std test breakage In-Reply-To: Your message of "Tue, 27 Feb 2001 18:06:59 EST." 
References: Message-ID: <200102271221.HAA19394@cj20424-a.reston1.va.home.com> > New failure in test___all__.py, *possibly* specific to Windows, but I don't > see any "termios.py" anywhere so hard to believe it could be working anywhere > else either: > > C:\Code\python\dist\src\PCbuild>python ../lib/test/test___all__.py > Traceback (most recent call last): > File "../lib/test/test___all__.py", line 78, in ? > check_all("getpass") > File "../lib/test/test___all__.py", line 10, in check_all > exec "import %s" % modname in names > File " ", line 1, in ? > File "c:\code\python\dist\src\lib\getpass.py", line 106, in ? > import termios > NameError: Case mismatch for module name termios > (filename c:\code\python\dist\src\lib\TERMIOS.py) > > C:\Code\python\dist\src\PCbuild> Easy. There used to be a built-in termios on Unix only, and 12 different platform-specific copies of TERMIOS.py, on Unix only. We're phasing TERMIOS.py out, mocing all the symbols into termios, and as part of that we chose to remove all the platform-dependent TERMIOS.py files with a single one, in Lib, that imports the symbols from termios, for b/w compatibility. But the code that tries to see if termios exists only catches ImportError, not NameError. You can add NameError to the except clause in getpass.py, or you can proceed with your fix to the case-sensitive imports. :-) --Guido van Rossum (home page: http://www.python.org/~guido/) From jeremy at alum.mit.edu Wed Feb 28 01:13:42 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 27 Feb 2001 19:13:42 -0500 (EST) Subject: [Python-Dev] one more restriction for from __future__ import ... In-Reply-To: <200102280001.NAA02075@s454.cosc.canterbury.ac.nz> References: <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> <200102280001.NAA02075@s454.cosc.canterbury.ac.nz> Message-ID: <15004.17078.793539.226783@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "GE" == Greg Ewing writes: >> The pass to find future statements exits as soon as it finds >> something that isn't a doc string or a future. GE> Well, don't do that, then. Have the find_future_statements pass GE> keep going and look for *illegal* future statements as well. GE> Then, subsequent passes can just ignore any import that looks GE> like a future statement, because it will already have been GE> either processed or reported as an error. I like this idea best so far. Jeremy From guido at digicool.com Wed Feb 28 01:24:47 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 19:24:47 -0500 Subject: [Python-Dev] to whoever made the termios changes... In-Reply-To: Your message of "Tue, 27 Feb 2001 15:22:16 PST." <15004.13862.351574.668648@mace.lucasdigital.com> References: <15004.13862.351574.668648@mace.lucasdigital.com> Message-ID: <200102280024.TAA19492@cj20424-a.reston1.va.home.com> > I've already deleted the check-in mail and forgot who it was! > Hopefully you're listening... (Fred, maybe?) Yes, Fred. > I just did a cvs update and am no getting this when compiling on > irix65: > > cc -O -OPT:Olimit=0 -I. -I/usr/u0/tommy/pycvs/python/dist/src/./Include -IInclude/ -I/usr/local/include -c /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c -o build/temp.irix-6.5-2.1/termios.o > cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 320 > The identifier "B230400" is undefined. > > {"B230400", B230400}, > ^ > > cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 321 > The identifier "CBAUDEX" is undefined. 
> > {"CBAUDEX", CBAUDEX}, > ^ > > cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 399 > The identifier "CRTSCTS" is undefined. > > {"CRTSCTS", CRTSCTS}, > ^ > > cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 432 > The identifier "VSWTC" is undefined. > > {"VSWTC", VSWTC}, > ^ > > 4 errors detected in the compilation of "/usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c". > > > > time for an #ifdef? Definitely. At least these 4; maybe for every stupid symbol we're adding... --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Wed Feb 28 01:29:44 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 19:29:44 -0500 Subject: [Python-Dev] one more restriction for from __future__ import ... In-Reply-To: Your message of "Tue, 27 Feb 2001 18:27:30 EST." <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> References: <15004.7763.994510.90567@w221.z064000254.bwi-md.dsl.cnc.net> <200102271006.FAA18760@cj20424-a.reston1.va.home.com> <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <200102280029.TAA19538@cj20424-a.reston1.va.home.com> > >> It would be a bear to implement error handling for cases like > >> this: > >> > >> from __future__ import a; import b; from __future__ import c > > GvR> Really?!? Why? Isn't it straightforward to check that > GvR> everything you encounter in a left-to-right leaf scan of the > GvR> parse tree is either a future statement or a docstring until > GvR> you encounter a non-future? > > It's not hard to find legal future statements. It's hard to find > illegal ones. The pass to find future statements exits as soon as it > finds something that isn't a doc string or a future. The symbol table > pass detects illegal future statements by comparing the current line > number against the line number of the last legal futre statement. Aha. That's what I missed -- comparison by line number. One thing you could do would simply be check the entire current simple_statement, which would catch the above example; the possibilities are limited at that level (no blocks can start on the same line after an import). --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Wed Feb 28 01:34:32 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 19:34:32 -0500 Subject: [Python-Dev] Case-sensitive import In-Reply-To: Message-ID: [Steven D. Majewski] > ... > The intent was that it be *able* to compile on any unix-like platform -- > in particular, I was thinking LinuxPPC was the other case I could > think of where someone might want to use a HFS+ filesystem - but > that Darwin/MacOSX was likely to be the only system in which that was > the default. I don't care about LinuxPPC right now. When someone steps up to champion that platform, they can deal with it then. What I am interested in is supporting the platforms we *do* have warm bodies looking at, and not regressing on any of them. I'm surprised nobody on Linux already screamed. >> Anyone, doesn't that trigger on straight Linux systems too (all I know is >> that it's part of the Single UNIX Specification)? > Yes. Barry added the _DIRENT_HAVE_D_NAMELINE to handle a difference in > the linux dirent structs. ( I'm not sure if he caught my initial > statement of intent either, but then the discussion veered into whether > the patch should have been accepted at all, and then into the discussion > of a general solution... 
) > > I'm not happy with the ineffeciency either, but, as I noted, I didn't > expect that it would be enabled by default elsewhere when I submitted > it. I expect it's enabled everywhere the #ifdef's in the patch enabled it . But I don't care about the past either, I want to straighten it out *now*. > ( And my goal for OSX was just to have a version that builds and > doesn't crash much, so searching for a more effecient solution was > going to be the next project. ) Then this is the right time. Play along by pretending that OSX is the special case that it is <0.9 wink>. > ... > The reason I started from scratch, you might recall, was that before I > understood that the Windows semantics was intended to be different, I > tried adding a Mac version of check_case, and it didn't do what I wanted. > But that wasn't a problem with any of the existing check_case functions, > but was due to how check_case was used. check_case will be used differently now. > ... > One problem is that I'm aiming for a version that would work on both > the open source Darwin distribution ( which is mach + BSD + some Apple > extensions: Objective-C, CoreFoundation, Frameworks, ... but not most > of the macosx Carbon and Cocoa libraries. ) and the full MacOSX. > Thus the reason for a unix only implementation -- the info may be > more easily available via MacOS FSSpec's but that's not available > on vanilla Darwin. ( And I can't, for the life of me, thing of an > effecient unix implementation -- UNIX file system API's + HFS+ filesystem > semantics may be an unfortunate mixture! ) Please just solve the problem for the platforms you're actually running on -- case-insensitive filesystems are not "Unix only" in any meaningful sense of that phrase, and each not-really-Unix platform is likely to have its own stupid gimmicks for worming around this problem anyway. For example, Cygwin defers to the Windows API. Great! That solves the problem there. Generalization is premature. > In other words: I can rename the current version to check_case and > fix the args to match. (Although, I recall that the args to check_case > were rather more awkward to handle, but I'll have to look again. ) Good! I'm not going to wait for that, though. I desperately need a nap, but when I get up I'll check in changes that should be sufficient for the Windows and Cygwin parts of this, without regressing on other platforms. We'll then have to figure out whatever #ifdef'ery is needed for your platform(s). > getenv() works on OSX (it's the BSD unix implementation). So it's *kind* of like Unix after all . > ( I *think* that Jack has the MacPython get the variables from Pythoprefs > file settings. ) Haven't heard from him, but getenv() is used freely in the Python codebase elsewhere, so I figure he's got *some* way to fake it. So I'm not worried about that anymore (until Jack screams about it). From guido at digicool.com Wed Feb 28 01:35:07 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 19:35:07 -0500 Subject: [Python-Dev] test_symtable failing on Linux In-Reply-To: Your message of "Tue, 27 Feb 2001 18:51:37 EST." <15004.15753.795849.695997@w221.z064000254.bwi-md.dsl.cnc.net> References: <15004.16063.325105.836576@beluga.mojam.com> <15004.15753.795849.695997@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <200102280035.TAA19590@cj20424-a.reston1.va.home.com> > This is a problem I don't know how to resolve; perhaps Andrew or Neil > can. _symtablemodule.so depends on symtable.h, but setup.py doesn't > know that. 
If you rebuild the .so, it should work. Maybe this module shouldn't be built by setup.py; it could be added to Modules/Setup.dist (all the mechanism there still works, it just isn't used for most modules; but some are still there: posix, _sre). Then you can add a regular dependency for it to the regular Makefile. This is a weakness in general of setup.py, but rarely causes a problem because the standard Python headers are pretty stable. --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Wed Feb 28 01:38:15 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 19:38:15 -0500 Subject: [Python-Dev] Re: Patch uploads broken In-Reply-To: <200102271212.HAA19298@cj20424-a.reston1.va.home.com> Message-ID: [Guido] > My conclusion: the file upload is refused iff the comment is empty -- > in other words the complaint about an empty comment is coded wrongly > and should only occur when the comment is empty *and* no file is > uploaded. Or maybe they want you to add a comment with your file -- > that's fine too, but the error isn't very clear. > > http or https made no difference. I used NS 4.72 on Linux; Tim used > IE and had the same results. BTW, this may be more pervasive: I recall that in the wee hours, I kept getting "ERROR: nothing changed" rejections when I was just trying to clean up old reports via doing nothing but changing the assigned-to (for example) dropdown list value. Perhaps they want a comment with every change of any kind now? From guido at digicool.com Wed Feb 28 01:46:14 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 19:46:14 -0500 Subject: [Python-Dev] Re: Patch uploads broken In-Reply-To: Your message of "Tue, 27 Feb 2001 19:38:15 EST." References: Message-ID: <200102280046.TAA19712@cj20424-a.reston1.va.home.com> > BTW, this may be more pervasive: I recall that in the wee hours, I kept > getting "ERROR: nothing changed" rejections when I was just trying to clean > up old reports via doing nothing but changing the assigned-to (for example) > dropdown list value. Perhaps they want a comment with every change of any > kind now? Which in itself is not a bad policy. But the error sucks. --Guido van Rossum (home page: http://www.python.org/~guido/) From sdm7g at virginia.edu Wed Feb 28 02:59:56 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Tue, 27 Feb 2001 20:59:56 -0500 (EST) Subject: [Python-Dev] Case-sensitive import In-Reply-To: Message-ID: On Tue, 27 Feb 2001, Tim Peters wrote: > Please just solve the problem for the platforms you're actually running on -- > case-insensitive filesystems are not "Unix only" in any meaningful sense of > that phrase, and each not-really-Unix platform is likely to have its own > stupid gimmicks for worming around this problem anyway. For example, Cygwin > defers to the Windows API. Great! That solves the problem there. > Generalization is premature. This isn't an attempt at abstract theorizing: I'm running Darwin with and without MacOSX on top, as well as MkLinux, LinuxPPC, and of course, various versions of "Classic" MacOS on various machines. I would gladly drop the others for MacOSX, but OSX won't run on all of the older machines. I'm hoping those machines will get replaced before I actually have to support all of those flavors, so I'm not trying to bend over backwards to be portable, but I'm also trying not to shoot myself in the foot by being overly un-general!
It's not, for me, being any more premature than you wondering if the VMS users will scream at the changes. ( Although, in both cases, I think it's reasonable to say: "I thought about it -- now here's what we're going to do anyway!" I suspect that folks running Darwin on Intel are using UFS and don't want the overhead either, but I'm not even trying to generalize to them yet! ) > > In other words: I can rename the current version to check_case and > > fix the args to match. (Although, I recall that the args to check_case > > were rather more awkward to handle, but I'll have to look again. ) > > Good! I'm not going to wait for that, though. I desperately need a nap, but > when I get up I'll check in changes that should be sufficient for the Windows > and Cygwin parts of this, without regressing on other platforms. We'll then > have to figure out whatever #ifdef'ery is needed for your platform(s). __MACH__ is predefined, meaning mach system calls are supported, and __APPLE__ is predefined -- I think it means it's Apple's compiler. So: #if defined(__MACH__) && defined(__APPLE__) ought to uniquely identify Darwin, at least until Apple does another OS. ( Maybe it would be cleaner to have config add -DDarwin switches -- or if you want to get general -D$MACHDEP -- except that I don't think all the values of MACHDEP will parse as symbols. ) -- Steve Majewski From sdm7g at virginia.edu Wed Feb 28 03:16:36 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Tue, 27 Feb 2001 21:16:36 -0500 (EST) Subject: [Python-Dev] Case-sensitive import In-Reply-To: Message-ID: On Tue, 27 Feb 2001, Tim Peters wrote: > > check_case will be used differently now. > If check_case will be used differently, then why not just use "#ifdef CHECK_IMPORT_CASE" as the switch? -- Steve Majewski From Jason.Tishler at dothill.com Wed Feb 28 04:32:16 2001 From: Jason.Tishler at dothill.com (Jason Tishler) Date: Tue, 27 Feb 2001 22:32:16 -0500 Subject: [Python-Dev] Re: Case-sensitive import In-Reply-To: ; from tim.one@home.com on Tue, Feb 27, 2001 at 02:27:12PM -0500 References: Message-ID: <20010227223216.C252@dothill.com> Tim, On Tue, Feb 27, 2001 at 02:27:12PM -0500, Tim Peters wrote: > Jason, I *assume* that the existing "#if defined(MS_WIN32) || > defined(__CYGWIN__)" version of check_case works already for you. Scream if > that's wrong. I guess it depends on what you mean by "works." When I submitted my patch to enable case-sensitive imports for Cygwin, I mistakenly thought that I was solving import problems such as "import TERMIOS, termios". Unfortunately, I was only enabling the (old) Win32 "Case mismatch for module name foo" code for Cygwin too. Subsequently, there have been changes to Cygwin gcc that may make it difficult (i.e., require non-standard -I options) to find Win32 header files like "windows.h". So from an ease of building point of view, it would be better to stick with POSIX calls and avoid direct Win32 ones. Unfortunately, from an efficiency point of view, it sounds like this is unavoidable. I would like to test your patch with both Cygwin gcc 2.95.2-6 (i.e., Win32 friendly) and 2.95.2-7 (i.e., Unix bigot). Please let me know when it's ready. Thanks, Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. 
Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler at dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From Jason.Tishler at dothill.com Wed Feb 28 05:01:51 2001 From: Jason.Tishler at dothill.com (Jason Tishler) Date: Tue, 27 Feb 2001 23:01:51 -0500 Subject: [Python-Dev] Re: Patch uploads broken In-Reply-To: ; from akuchlin@mems-exchange.org on Tue, Feb 27, 2001 at 05:34:06PM -0500 References: Message-ID: <20010227230151.D252@dothill.com> On Tue, Feb 27, 2001 at 05:34:06PM -0500, Andrew Kuchling wrote: > The SourceForge admins couldn't replicate the patch upload problem, > and managed to attach a file to the Python bug report in question, yet > when I try it, it still fails for me. So, a question for this list: > has uploading patches or other files been working for you recently, > particularly today? Maybe with more data, we can see a pattern > (browser version, SSL/non-SSL, cluefulness of user, ...). I still can't upload patch files (even though I always supply a comment). Specifically, I getting the following error message in red at the top of the page after pressing the "Submit Changes" button: ArtifactFile: File name, type, size, and data are RequiredSuccessfully Updated FWIW, I'm using Netscape 4.72 on Windows. Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler at dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From tim.one at home.com Wed Feb 28 05:08:05 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 23:08:05 -0500 Subject: [Python-Dev] Case-sensitive import In-Reply-To: Message-ID: >> check_case will be used differently now. [Steven] > If check_case will be used differently, then why not just use > "#ifdef CHECK_IMPORT_CASE" as the switch? Sorry, I don't understand what you have in mind. In my mind, CHECK_IMPORT_CASE goes away, since we're attempting to get the same semantics on all platforms, and a yes/no #define doesn't carry enough info to accomplish that. From tim.one at home.com Wed Feb 28 05:29:33 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 23:29:33 -0500 Subject: [Python-Dev] RE: Case-sensitive import In-Reply-To: <20010227223216.C252@dothill.com> Message-ID: [Tim] >> Jason, I *assume* that the existing "#if defined(MS_WIN32) || >> defined(__CYGWIN__)" version of check_case works already for >> you. Scream if that's wrong. [Jason] > I guess it depends on what you mean by "works." I meant that independent of errors you don't want to see, and independent of the allcaps8x3 silliness, check_case returns 1 if there's a case-sensitive match and 0 if not. > When I submitted my patch to enable case-sensitive imports for Cygwin, > I mistakenly thought that I was solving import problems such as "import > TERMIOS, termios". Unfortunately, I was only enabling the (old) Win32 > "Case mismatch for module name foo" code for Cygwin too. Then if you succeeded in enabling that, "it works" in the sense I meant. My intent is to stop the errors, take away the allcaps8x3 stuff, and change the *calling* code to just keep going when check_case returns 0. > Subsequently, there have been changes to Cygwin gcc that may make it > difficult (i.e., require non-standard -I options) to find Win32 header > files like "windows.h". So from an ease of building point of view, it > would be better to stick with POSIX calls and avoid direct Win32 ones. 
> Unfortunately, from an efficiency point of view, it sounds like this is > unavoidable. > > I would like to test your patch with both Cygwin gcc 2.95.2-6 (i.e., > Win32 friendly) and 2.95.2-7 (i.e., Unix bigot). Please let me know > when it's ready. Not terribly long after I get to stop writing email <0.9 wink>. But since the only platform I can test here is plain Windows, and Cygwin and sundry Mac variations appear to be moving targets, once it works on Windows I'm just going to check it in. You and Steven will then have to figure out what you need to do on your platforms. OK by me if you two recreate the HAVE_DIRENT_H stuff, but (a) not if Linux takes that path too; and, (b) if Cygwin ends up using that, please get rid of the Cygwin-specific tricks in the plain Windows case (this module is already one of the hardest to maintain, and having random pieces of #ifdef'ed code in it that will never be used hurts). From barry at digicool.com Wed Feb 28 06:05:30 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Wed, 28 Feb 2001 00:05:30 -0500 Subject: [Python-Dev] Case-sensitive import References: Message-ID: <15004.34586.744058.938851@anthem.wooz.org> >>>>> "SDM" == Steven D Majewski writes: SDM> Yes. Barry added the _DIRENT_HAVE_D_NAMELINE to handle a SDM> difference in the linux dirent structs. Actually, my Linux distro's dirent.h has almost the same test on _DIRENT_HAVE_D_NAMLEN (sic) -- which looking again now at import.c it's obvious I misspelled it! Tim, if you clean this code up and decide to continue to use the d_namlen slot, please fix the macro test. -Barry From akuchlin at cnri.reston.va.us Wed Feb 28 06:21:54 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Wed, 28 Feb 2001 00:21:54 -0500 Subject: [Python-Dev] Re: Patch uploads broken In-Reply-To: <20010227230151.D252@dothill.com>; from Jason.Tishler@dothill.com on Tue, Feb 27, 2001 at 11:01:51PM -0500 References: <20010227230151.D252@dothill.com> Message-ID: <20010228002154.A16737@newcnri.cnri.reston.va.us> On Tue, Feb 27, 2001 at 11:01:51PM -0500, Jason Tishler wrote: >I still can't upload patch files (even though I always supply a comment). >Specifically, I getting the following error message in red at the top >of the page after pressing the "Submit Changes" button: Same here. It's not from leaving the comment field empty (I got the error message too and figured out what it meant); instead I can fill in a comment, select a file, and upload it. The comment shows up; the attachment doesn't (using NS4.75 on Linux). Anyone got other failures to report? --amk From jeremy at alum.mit.edu Wed Feb 28 06:28:08 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 28 Feb 2001 00:28:08 -0500 (EST) Subject: [Python-Dev] puzzled about old checkin to pythonrun.c Message-ID: <15004.35944.494314.814348@w221.z064000254.bwi-md.dsl.cnc.net> Fred, You made a change to the syntax error generation code last August. I don't understand what the code is doing. It appears that the code you added is redundant, but it's hard to tell for sure because responsibility for generating well-formed SyntaxErrors is spread across several files.
The code you added in pythonrun.c, line 1084, in err_input(), starts with the test (v != NULL): w = Py_BuildValue("(sO)", msg, v); PyErr_SetObject(errtype, w); Py_XDECREF(w); if (v != NULL) { PyObject *exc, *tb; PyErr_Fetch(&errtype, &exc, &tb); PyErr_NormalizeException(&errtype, &exc, &tb); if (PyObject_SetAttrString(exc, "filename", PyTuple_GET_ITEM(v, 0))) PyErr_Clear(); if (PyObject_SetAttrString(exc, "lineno", PyTuple_GET_ITEM(v, 1))) PyErr_Clear(); if (PyObject_SetAttrString(exc, "offset", PyTuple_GET_ITEM(v, 2))) PyErr_Clear(); Py_DECREF(v); PyErr_Restore(errtype, exc, tb); } What's weird about this code is that the __init__ code for a SyntaxError (all errors will be SyntaxErrors at this point) sets filename, lineno, and offset. Each of the values is passed to the constructor as the tuple v; then the new code gets the items out of the tuple and sets them explicitly. You also made a bunch of changes to SyntaxError__str__ at the same time. I wonder if they were sufficient to fix the bug (which has tracker aid 210628 incidentally). Can you shed any light? Jeremy From tim.one at home.com Wed Feb 28 06:48:57 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 28 Feb 2001 00:48:57 -0500 Subject: [Python-Dev] Case-sensitive import In-Reply-To: Message-ID: Here's the checkin comment for rev 2.163 of import.c: """ Implement PEP 235: Import on Case-Insensitive Platforms. http://python.sourceforge.net/peps/pep-0235.html Renamed check_case to case_ok. Substantial code rearrangement to get this stuff in one place in the file. Innermost loop of find_module() now much simpler and #ifdef-free, and I want to keep it that way (it's bad enough that the innermost loop is itself still in an #ifdef!). Windows semantics tested and are fine. Jason, Cygwin *should* be fine if and only if what you did for check_case() before still "works". Jack, the semantics on your flavor of Mac have definitely changed (see the PEP), and need to be tested. The intent is that your flavor of Mac now work the same as everything else in the "lower left" box, including respecting PYTHONCASEOK. There is a non-zero chance that I already changed the "#ifdef macintosh" code correctly to achieve that. Steven, sorry, you did the most work here so far but you got screwed the worst. Happy to work with you on repairing it, but I don't understand anything about all your Mac variants and don't have time to learn before the beta. We need to add another branch (or two, three, ...?) inside case_ok for you. But we should not need to change anything else. """ Someone please check Linux etc too, although everything that doesn't match one of these #ifdef's: #if defined(MS_WIN32) || defined(__CYGWIN__) #elif defined(DJGPP) #elif defined(macintosh) *should* act as if the platform filesystem were case-sensitive (i.e., that if fopen() succeeds, the case must match already and so there's no need for any more work to check that). Jason, if Cygwin is broken, please coordinate with Steven since you two appear to have similar problems then. [Steven] > __MACH__ is predefined, meaning mach system calls are supported, and > __APPLE__ is predefined -- I think it means it's Apple's compiler. So: > > #if defined(__MACH__) && defined(__APPLE__) > > ought to uniquely identify Darwin, at least until Apple does another OS. > > ( Maybe it would be cleaner to have config add -DDarwin switches -- or > if you want to get general -D$MACHDEP -- except that I don't think all > the values of MACHDEP will parse as symbols. ) This is up to you.
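For concreteness, one rough sketch of what a Darwin branch inside case_ok() could look like follows. This is illustrative only, not the code that was checked in: it assumes the case_ok(buf, len, namelen, name) calling convention implied above (buf holds the path fopen() just succeeded on, name/namelen the requested module name), and it uses the plain POSIX opendir()/readdir() scan discussed earlier in the thread, honoring PYTHONCASEOK the same way the Windows branch does.

    /* Sketch only -- not checked-in import.c code.  A possible Darwin
       branch for case_ok(), keyed on the __MACH__ && __APPLE__ test
       quoted above.  The readdir() scan is the inefficient-but-portable
       approach: find the directory entry that fopen() matched, then
       check that its leading namelen characters have the exact case
       requested, mirroring the Windows check. */
    #if defined(__MACH__) && defined(__APPLE__)

    #include <sys/types.h>
    #include <sys/param.h>          /* MAXPATHLEN on BSD-flavored systems */
    #include <dirent.h>
    #include <string.h>
    #include <strings.h>            /* strcasecmp */
    #include <stdlib.h>

    static int
    case_ok(char *buf, int len, int namelen, char *name)
    {
        DIR *dirp;
        struct dirent *dp;
        char dirname[MAXPATHLEN + 1];
        int dirlen = len - namelen - 1;     /* drop "SEP + name" from buf */
        char *file = buf + dirlen + 1;      /* name + extension part of buf */
        int found = 0;

        if (getenv("PYTHONCASEOK") != NULL)
            return 1;                       /* user asked for lenient matching */

        if (dirlen <= 0) {                  /* module found via an empty path entry */
            dirname[0] = '.';
            dirname[1] = '\0';
        }
        else {
            memcpy(dirname, buf, dirlen);
            dirname[dirlen] = '\0';
        }

        dirp = opendir(dirname);
        if (dirp == NULL)
            return 0;
        while ((dp = readdir(dirp)) != NULL) {
            if (strcasecmp(dp->d_name, file) == 0) {
                /* this is the entry the case-insensitive filesystem matched;
                   accept it only if the module-name part has exact case */
                found = (strncmp(dp->d_name, name, namelen) == 0);
                break;
            }
        }
        closedir(dirp);
        return found;
    }

    #endif /* __MACH__ && __APPLE__ */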
I'm sorry to have broken your old code, but Barry should not have accepted it to begin with . Speaking of which, [Barry] > SDM> Yes. Barry added the _DIRENT_HAVE_D_NAMELINE to handle a > SDM> difference in the linux dirent structs. > > Actually, my Linux distro's dirent.h has almost the same test on > _DIRENT_HAVE_D_NAMLEN (sic) -- which looking again now at import.c > it's obvious I misspelled it! > > Tim, if you clean this code up and decide to continue to use the > d_namlen slot, please fix the macro test. For now, I didn't change anything in the MatchFilename function, but put the entire thing in an "#if 0" block with an "XXX" comment, to make it easy for Steven and/or Jason to get at that source if one or both decide their platforms still need something like that. If they do, I'll double-check that this #define is spelled correctly when they check in their changes; else I'll delete that block before the release. Aren't release crunches great? Afraid they're infectious <0.5 wink>. From fdrake at acm.org Wed Feb 28 07:50:28 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Wed, 28 Feb 2001 01:50:28 -0500 (EST) Subject: [Python-Dev] Re: puzzled about old checkin to pythonrun.c In-Reply-To: <15004.35944.494314.814348@w221.z064000254.bwi-md.dsl.cnc.net> References: <15004.35944.494314.814348@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <15004.40884.236605.266085@cj42289-a.reston1.va.home.com> Jeremy Hylton writes: > Can you shed any light? Not at this hour -- fading fast. I'll look at it in the morning. -Fred -- Fred L. Drake, Jr. PythonLabs at Digital Creations From moshez at zadka.site.co.il Wed Feb 28 11:43:08 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Wed, 28 Feb 2001 12:43:08 +0200 (IST) Subject: [Python-Dev] urllib2 and urllib Message-ID: <20010228104308.BAB5BAA6A@darjeeling.zadka.site.co.il> (Full disclosure: I've been payed to hack on urllib2) For a long time I've been feeling that urllib is a bit hackish, and not really suited to conveniently script web sites. The classic example is the interface to passwords, whose default behaviour is to stop and ask the user(!). Jeremy had urllib2 out for about a year and a half, and now that I've finally managed to have a look at it, I'm very impressed with the architecture, and I think it's superior to urllib. From pedroni at inf.ethz.ch Wed Feb 28 15:21:35 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Wed, 28 Feb 2001 15:21:35 +0100 (MET) Subject: [Python-Dev] pdb and nested scopes Message-ID: <200102281421.PAA17150@core.inf.ethz.ch> Hi. Sorry if everybody is already aware of this. I have checked the code for pdb in CVS , especially for the p cmd, maybe I'm wrong but given actual the implementation of things that gives no access to the value of free or cell variables. Should that be fixed? AFAIK pdb as it is works with jython too. So when fixing that, it would be nice if this would be preserved. regards, Samuele Pedroni. From jack at oratrix.nl Wed Feb 28 15:30:37 2001 From: jack at oratrix.nl (Jack Jansen) Date: Wed, 28 Feb 2001 15:30:37 +0100 Subject: [Python-Dev] Case-sensitive import In-Reply-To: Message by barry@digicool.com (Barry A. Warsaw) , Wed, 28 Feb 2001 00:05:30 -0500 , <15004.34586.744058.938851@anthem.wooz.org> Message-ID: <20010228143037.8F32D371690@snelboot.oratrix.nl> Why don't we handle this the same way as, say, PyOS_CheckStack()? I.e. if USE_CHECK_IMPORT_CASE is defined it is necessary to check the case of the imported file (i.e. 
it's not defined on vanilla unix, defined on most other platforms) and if it is defined we call PyOS_CheckCase(filename, modulename). All these routines can be in different files, for all I care, similar to the dynload_*.c files. -- Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++ Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++ www.oratrix.nl/~jack | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm From guido at digicool.com Wed Feb 28 16:34:52 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 28 Feb 2001 10:34:52 -0500 Subject: [Python-Dev] pdb and nested scopes In-Reply-To: Your message of "Wed, 28 Feb 2001 15:21:35 +0100." <200102281421.PAA17150@core.inf.ethz.ch> References: <200102281421.PAA17150@core.inf.ethz.ch> Message-ID: <200102281534.KAA28532@cj20424-a.reston1.va.home.com> > Hi. > > Sorry if everybody is already aware of this. No, it's new to me. > I have checked the code for pdb in CVS , especially for the p cmd, > maybe I'm wrong but given actual the implementation of things that > gives no access to the value of free or cell variables. Should that > be fixed? I think so. I've noted that the locals() function also doesn't see cell variables: from __future__ import nested_scopes import pdb def f(): a = 12 print locals() def g(): print a g() a = 100 g() #pdb.set_trace() f() This prints {} 12 100 When I enable the pdb.set_trace() call, indeed the variable a is not found. > AFAIK pdb as it is works with jython too. So when fixing that, it would > be nice if this would be preserved. Yes! --Guido van Rossum (home page: http://www.python.org/~guido/) From Jason.Tishler at dothill.com Wed Feb 28 18:02:29 2001 From: Jason.Tishler at dothill.com (Jason Tishler) Date: Wed, 28 Feb 2001 12:02:29 -0500 Subject: [Python-Dev] Re: Case-sensitive import In-Reply-To: ; from tim.one@home.com on Tue, Feb 27, 2001 at 11:29:33PM -0500 References: <20010227223216.C252@dothill.com> Message-ID: <20010228120229.M449@dothill.com> Tim, On Tue, Feb 27, 2001 at 11:29:33PM -0500, Tim Peters wrote: > Not terribly long after I get to stop writing email <0.9 wink>. But since > the only platform I can test here is plain Windows, and Cygwin and sundry Mac > variations appear to be moving targets, once it works on Windows I'm just > going to check it in. You and Steven will then have to figure out what you > need to do on your platforms. I tested your changes on Cygwin and they work correctly. Thanks very much. Unfortunately, my concerns about building due to your implementation using direct Win32 APIs were realized. This delayed my response. The current Python CVS stills builds OOTB (with the exception of termios) with the current Cygwin gcc (i.e., 2.95.2-6). However, using the next Cygwin gcc (i.e., 2.95.2-8 or later) will require one to configure with: CC='gcc -mwin32' configure ... and the following minor patch be accepted: http://sourceforge.net/tracker/index.php?func=detail&aid=404928&group_id=5470&atid=305470 Thanks, Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler at dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From guido at digicool.com Wed Feb 28 18:12:05 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 28 Feb 2001 12:12:05 -0500 Subject: [Python-Dev] Re: Case-sensitive import In-Reply-To: Your message of "Wed, 28 Feb 2001 12:02:29 EST." 
<20010228120229.M449@dothill.com> References: <20010227223216.C252@dothill.com> <20010228120229.M449@dothill.com> Message-ID: <200102281712.MAA29568@cj20424-a.reston1.va.home.com> > and the following minor patch be accepted: > > http://sourceforge.net/tracker/index.php?func=detail&aid=404928&group_id=5470&atid=305470 That patch seems fine -- except that I'd like /F to have a quick look since it changes _sre.c. --Guido van Rossum (home page: http://www.python.org/~guido/) From fredrik at pythonware.com Wed Feb 28 18:36:09 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Wed, 28 Feb 2001 18:36:09 +0100 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules _sre.c References: Message-ID: <048b01c0a1ac$f10cf920$e46940d5@hagrid> tim indirectly wrote: > *** _sre.c 2001/01/16 07:37:30 2.52 > --- _sre.c 2001/02/28 16:44:18 2.53 > *************** > *** 2370,2377 **** > }; > > ! void > ! #if defined(WIN32) > ! __declspec(dllexport) > ! #endif > init_sre(void) > { > --- 2370,2374 ---- > }; > > ! DL_EXPORT(void) > init_sre(void) > { after this change, the separate makefile I use to build _sre on Windows no longer works (init_sre isn't exported). I don't really understand the code in config.h, but I've tried defining USE_DL_EXPORT (gives linking problems) and USE_DL_IMPORT (macro redefinition). any ideas? Cheers /F From tim.one at home.com Wed Feb 28 18:36:45 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 28 Feb 2001 12:36:45 -0500 Subject: [Python-Dev] Re: Case-sensitive import In-Reply-To: <20010228120229.M449@dothill.com> Message-ID: [Jason] > I tested your changes on Cygwin and they work correctly. Thanks very much. Good! I guess that just leaves poor Steven hanging (although I've got ~200 emails I haven't gotten to yet, so maybe he's already pulled himself up). > Unfortunately, my concerns about building due to your implementation using > direct Win32 APIs were realized. This delayed my response. > > The current Python CVS stills builds OOTB (with the exception of termios) > with the current Cygwin gcc (i.e., 2.95.2-6). However, using the next > Cygwin gcc (i.e., 2.95.2-8 or later) will require one to configure with: > > CC='gcc -mwin32' configure ... > > and the following minor patch be accepted: > > http://sourceforge.net/tracker/index.php?func=detail&aid=404928&gro > up_id=5470&atid=305470 I checked that patch in already, about 15 minutes after you uploaded it. Is this service, or what?! [Guido] > That patch seems fine -- except that I'd like /F to have a quick look > since it changes _sre.c. Too late and no need. What Jason did to _sre.c is *undo* some Cygwin special-casing; /F will like that. It's trivial anyway. Jason, about this: However, using the next Cygwin gcc (i.e., 2.95.2-8 or later) will require one to configure with: CC='gcc -mwin32' configure ... How can we make that info *useful* to people? The target audience for the Cygwin port probably doesn't search Python-Dev or the Python patches database. So it would be good if you thought about uploading an informational patch to README and Misc/NEWS briefly telling Cygwin folks what they need to know. If you do, I'll look for it and check it in. 
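A note on the config.h machinery Fredrik mentions above: the behavior turns on two build-time switches. The following is a simplified sketch of the Windows scheme as described in this thread, assumed from context rather than copied verbatim from PC/config.h (the exact header text may differ slightly):

    /* Simplified sketch of the DL_IMPORT/DL_EXPORT scheme under discussion. */

    #ifdef USE_DL_IMPORT            /* set when building an extension module */
    #define DL_IMPORT(RTYPE) __declspec(dllimport) RTYPE
    #endif

    #ifdef USE_DL_EXPORT            /* set only when building the core DLL */
    #define DL_IMPORT(RTYPE) __declspec(dllexport) RTYPE
    #define DL_EXPORT(RTYPE) __declspec(dllexport) RTYPE
    #endif

    #ifndef DL_EXPORT               /* fallback used everywhere else */
    #define DL_EXPORT(RTYPE) RTYPE
    #endif

In other words, outside the core build DL_EXPORT(void) collapses to a plain void declaration with no dllexport attribute, so a standalone makefile then has to export the module's init function some other way, e.g. through a linker option.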
From tim.one at home.com Wed Feb 28 18:42:12 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 28 Feb 2001 12:42:12 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules _sre.c In-Reply-To: <048b01c0a1ac$f10cf920$e46940d5@hagrid> Message-ID: >> *** _sre.c 2001/01/16 07:37:30 2.52 >> --- _sre.c 2001/02/28 16:44:18 2.53 >> *************** >> *** 2370,2377 **** >> }; >> >> ! void >> ! #if defined(WIN32) >> ! __declspec(dllexport) >> ! #endif >> init_sre(void) >> { >> --- 2370,2374 ---- >> }; >> >> ! DL_EXPORT(void) >> init_sre(void) >> { [/F] > after this change, the separate makefile I use to build _sre > on Windows no longer works (init_sre isn't exported). Oops! I tested it on Windows, so it works OK with the std build. > I don't really understand the code in config.h, Nobody does, alas. Mark Hammond and I have a delayed date to rework that. > but I've tried defining USE_DL_EXPORT (gives linking problems) and > USE_DL_IMPORT (macro redefinition). Sounds par for the course. > any ideas? Ya: the great thing about all these macros is that they're usually worse than useless (you try them, they break something). The _sre project has /export:init_sre buried in its link options. DL_EXPORT(void) expands to void. Not pretty, but it's the way everything else (outside the pythoncore project) works too. From jeremy at alum.mit.edu Wed Feb 28 18:58:58 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 28 Feb 2001 12:58:58 -0500 (EST) Subject: [Python-Dev] PEP 227 (was Re: Nested scopes resolution -- you can breathe again!) In-Reply-To: <200102230259.VAA19238@cj20424-a.reston1.va.home.com> References: <200102230259.VAA19238@cj20424-a.reston1.va.home.com> Message-ID: <15005.15458.703037.373890@w221.z064000254.bwi-md.dsl.cnc.net> Last week Guido sent a message about our decisions to make nested scopes an optional feature for 2.1 in advance of their mandatory introduction in Python 2.2. I've included an ndiff of the PEP for reference. The beta release on Friday will contain the features as described in the PEP. Jeremy -: old-pep-0227.txt +: pep-0227.txt PEP: 227 Title: Statically Nested Scopes - Version: $Revision: 1.6 $ ? ^ + Version: $Revision: 1.7 $ ? ^ Author: jeremy at digicool.com (Jeremy Hylton) Status: Draft Type: Standards Track Python-Version: 2.1 Created: 01-Nov-2000 Post-History: Abstract This PEP proposes the addition of statically nested scoping (lexical scoping) for Python 2.1. The current language definition defines exactly three namespaces that are used to resolve names -- the local, global, and built-in namespaces. The addition of nested scopes would allow resolution of unbound local names in enclosing functions' namespaces. One consequence of this change that will be most visible to Python programs is that lambda statements could reference variables in the namespaces where the lambda is defined. Currently, a lambda statement uses default arguments to explicitly creating bindings in the lambda's namespace. Introduction This proposal changes the rules for resolving free variables in - Python functions. The Python 2.0 definition specifies exactly - three namespaces to check for each name -- the local namespace, - the global namespace, and the builtin namespace. According to - this defintion, if a function A is defined within a function B, - the names bound in B are not visible in A. The proposal changes - the rules so that names bound in B are visible in A (unless A + Python functions. 
The new name resolution semantics will take + effect with Python 2.2. These semantics will also be available in + Python 2.1 by adding "from __future__ import nested_scopes" to the + top of a module. + + The Python 2.0 definition specifies exactly three namespaces to + check for each name -- the local namespace, the global namespace, + and the builtin namespace. According to this definition, if a + function A is defined within a function B, the names bound in B + are not visible in A. The proposal changes the rules so that + names bound in B are visible in A (unless A contains a name - contains a name binding that hides the binding in B). ? ---------------- + binding that hides the binding in B). The specification introduces rules for lexical scoping that are common in Algol-like languages. The combination of lexical scoping and existing support for first-class functions is reminiscent of Scheme. The changed scoping rules address two problems -- the limited - utility of lambda statements and the frequent confusion of new + utility of lagmbda statements and the frequent confusion of new ? + users familiar with other languages that support lexical scoping, e.g. the inability to define recursive functions except at the module level. + + XXX Konrad Hinsen suggests that this section be expanded The lambda statement introduces an unnamed function that contains a single statement. It is often used for callback functions. In the example below (written using the Python 2.0 rules), any name used in the body of the lambda must be explicitly passed as a default argument to the lambda. from Tkinter import * root = Tk() Button(root, text="Click here", command=lambda root=root: root.test.configure(text="...")) This approach is cumbersome, particularly when there are several names used in the body of the lambda. The long list of default arguments obscure the purpose of the code. The proposed solution, in crude terms, implements the default argument approach automatically. The "root=root" argument can be omitted. + The new name resolution semantics will cause some programs to + behave differently than they did under Python 2.0. In some cases, + programs will fail to compile. In other cases, names that were + previously resolved using the global namespace will be resolved + using the local namespace of an enclosing function. In Python + 2.1, warnings will be issued for all program statement that will + behave differently. + Specification Python is a statically scoped language with block structure, in the traditional of Algol. A code block or region, such as a - module, class defintion, or function body, is the basic unit of a + module, class definition, or function body, is the basic unit of a ? + program. Names refer to objects. Names are introduced by name binding operations. Each occurrence of a name in the program text refers to the binding of that name established in the innermost function block containing the use. The name binding operations are assignment, class and function definition, and import statements. Each assignment or import statement occurs within a block defined by a class or function definition or at the module level (the top-level code block). If a name binding operation occurs anywhere within a code block, all uses of the name within the block are treated as references to the current block. (Note: This can lead to errors when a name is used within a block before it is bound.) 
If the global statement occurs within a block, all uses of the name specified in the statement refer to the binding of that name in the top-level namespace. Names are resolved in the top-level namespace by searching the global namespace, the namespace of the module containing the code block, and the builtin namespace, the namespace of the module __builtin__. The global namespace is searched first. If the name is not found there, the builtin - namespace is searched. + namespace is searched. The global statement must precede all uses + of the name. If a name is used within a code block, but it is not bound there and is not declared global, the use is treated as a reference to the nearest enclosing function region. (Note: If a region is contained within a class definition, the name bindings that occur in the class block are not visible to enclosed functions.) A class definition is an executable statement that may uses and definitions of names. These references follow the normal rules for name resolution. The namespace of the class definition becomes the attribute dictionary of the class. The following operations are name binding operations. If they occur within a block, they introduce new local names in the current block unless there is also a global declaration. - Function defintion: def name ... + Function definition: def name ... ? + Class definition: class name ... Assignment statement: name = ... Import statement: import name, import module as name, from module import name Implicit assignment: names are bound by for statements and except clauses The arguments of a function are also local. There are several cases where Python statements are illegal when used in conjunction with nested scopes that contain free variables. If a variable is referenced in an enclosing scope, it is an error to delete the name. The compiler will raise a SyntaxError for 'del name'. - If the wildcard form of import (import *) is used in a function + If the wild card form of import (import *) is used in a function ? + and the function contains a nested block with free variables, the compiler will raise a SyntaxError. If exec is used in a function and the function contains a nested block with free variables, the compiler will raise a SyntaxError - unless the exec explicit specifies the local namespace for the + unless the exec explicitly specifies the local namespace for the ? ++ exec. (In other words, "exec obj" would be illegal, but "exec obj in ns" would be legal.) + If a name bound in a function scope is also the name of a module + global name or a standard builtin name and the function contains a + nested function scope that references the name, the compiler will + issue a warning. The name resolution rules will result in + different bindings under Python 2.0 than under Python 2.2. The + warning indicates that the program may not run correctly with all + versions of Python. + Discussion The specified rules allow names defined in a function to be referenced in any nested function defined with that function. The name resolution rules are typical for statically scoped languages, with three primary exceptions: - Names in class scope are not accessible. - The global statement short-circuits the normal rules. - Variables are not declared. Names in class scope are not accessible. Names are resolved in - the innermost enclosing function scope. If a class defintion + the innermost enclosing function scope. If a class definition ? + occurs in a chain of nested scopes, the resolution process skips class definitions. 
This rule prevents odd interactions between class attributes and local variable access. If a name binding - operation occurs in a class defintion, it creates an attribute on + operation occurs in a class definition, it creates an attribute on ? + the resulting class object. To access this variable in a method, or in a function nested within a method, an attribute reference must be used, either via self or via the class name. An alternative would have been to allow name binding in class scope to behave exactly like name binding in function scope. This rule would allow class attributes to be referenced either via attribute reference or simple name. This option was ruled out because it would have been inconsistent with all other forms of class and instance attribute access, which always use attribute references. Code that used simple names would have been obscure. The global statement short-circuits the normal rules. Under the proposal, the global statement has exactly the same effect that it - does for Python 2.0. It's behavior is preserved for backwards ? - + does for Python 2.0. Its behavior is preserved for backwards compatibility. It is also noteworthy because it allows name binding operations performed in one block to change bindings in another block (the module). Variables are not declared. If a name binding operation occurs anywhere in a function, then that name is treated as local to the function and all references refer to the local binding. If a reference occurs before the name is bound, a NameError is raised. The only kind of declaration is the global statement, which allows programs to be written using mutable global variables. As a consequence, it is not possible to rebind a name defined in an enclosing scope. An assignment operation can only bind a name in the current scope or in the global scope. The lack of declarations and the inability to rebind names in enclosing scopes are unusual for lexically scoped languages; there is typically a mechanism to create name bindings (e.g. lambda and let in Scheme) and a mechanism to change the bindings (set! in Scheme). XXX Alex Martelli suggests comparison with Java, which does not allow name bindings to hide earlier bindings. Examples A few examples are included to illustrate the way the rules work. XXX Explain the examples >>> def make_adder(base): ... def adder(x): ... return base + x ... return adder >>> add5 = make_adder(5) >>> add5(6) 11 >>> def make_fact(): ... def fact(n): ... if n == 1: ... return 1L ... else: ... return n * fact(n - 1) ... return fact >>> fact = make_fact() >>> fact(7) 5040L >>> def make_wrapper(obj): ... class Wrapper: ... def __getattr__(self, attr): ... if attr[0] != '_': ... return getattr(obj, attr) ... else: ... raise AttributeError, attr ... return Wrapper() >>> class Test: ... public = 2 ... _private = 3 >>> w = make_wrapper(Test()) >>> w.public 2 >>> w._private Traceback (most recent call last): File " ", line 1, in ? AttributeError: _private - An example from Tim Peters of the potential pitfalls of nested scopes ? ^ -------------- + An example from Tim Peters demonstrates the potential pitfalls of ? +++ ^^^^^^^^ - in the absence of declarations: + nested scopes in the absence of declarations: ? ++++++++++++++ i = 6 def f(x): def g(): print i # ... # skip to the next page # ... for i in x: # ah, i *is* local to f, so this is what g sees pass g() The call to g() will refer to the variable i bound in f() by the for loop. If g() is called before the loop is executed, a NameError will be raised. 
XXX need some counterexamples Backwards compatibility There are two kinds of compatibility problems caused by nested scopes. In one case, code that behaved one way in earlier - versions, behaves differently because of nested scopes. In the ? - + versions behaves differently because of nested scopes. In the other cases, certain constructs interact badly with nested scopes and will trigger SyntaxErrors at compile time. The following example from Skip Montanaro illustrates the first kind of problem: x = 1 def f1(): x = 2 def inner(): print x inner() Under the Python 2.0 rules, the print statement inside inner() refers to the global variable x and will print 1 if f1() is called. Under the new rules, it refers to the f1()'s namespace, the nearest enclosing scope with a binding. The problem occurs only when a global variable and a local variable share the same name and a nested function uses that name to refer to the global variable. This is poor programming practice, because readers will easily confuse the two different variables. One example of this problem was found in the Python standard library during the implementation of nested scopes. To address this problem, which is unlikely to occur often, a static analysis tool that detects affected code will be written. - The detection problem is straightfoward. + The detection problem is straightforward. ? + - The other compatibility problem is casued by the use of 'import *' ? - + The other compatibility problem is caused by the use of 'import *' ? + and 'exec' in a function body, when that function contains a nested scope and the contained scope has free variables. For example: y = 1 def f(): exec "y = 'gotcha'" # or from module import * def g(): return y ... At compile-time, the compiler cannot tell whether an exec that - operators on the local namespace or an import * will introduce ? ^^ + operates on the local namespace or an import * will introduce ? ^ name bindings that shadow the global y. Thus, it is not possible to tell whether the reference to y in g() should refer to the global or to a local name in f(). In discussion of the python-list, people argued for both possible interpretations. On the one hand, some thought that the reference in g() should be bound to a local y if one exists. One problem with this interpretation is that it is impossible for a human reader of the code to determine the binding of y by local inspection. It seems likely to introduce subtle bugs. The other interpretation is to treat exec and import * as dynamic features that do not effect static scoping. Under this interpretation, the exec and import * would introduce local names, but those names would never be visible to nested scopes. In the specific example above, the code would behave exactly as it did in earlier versions of Python. - Since each interpretation is problemtatic and the exact meaning ? - + Since each interpretation is problematic and the exact meaning ambiguous, the compiler raises an exception. A brief review of three Python projects (the standard library, Zope, and a beta version of PyXPCOM) found four backwards compatibility issues in approximately 200,000 lines of code. There was one example of case #1 (subtle behavior change) and two examples of import * problems in the standard library. (The interpretation of the import * and exec restriction that was implemented in Python 2.1a2 was much more restrictive, based on language that in the reference manual that had never been enforced. These restrictions were relaxed following the release.) 
+ Compatibility of C API + + The implementation causes several Python C API functions to + change, including PyCode_New(). As a result, C extensions may + need to be updated to work correctly with Python 2.1. + locals() / vars() These functions return a dictionary containing the current scope's local variables. Modifications to the dictionary do not affect the values of variables. Under the current rules, the use of locals() and globals() allows the program to gain access to all the namespaces in which names are resolved. An analogous function will not be provided for nested scopes. Under this proposal, it will not be possible to gain dictionary-style access to all visible scopes. + Warnings and Errors + + The compiler will issue warnings in Python 2.1 to help identify + programs that may not compile or run correctly under future + versions of Python. Under Python 2.2 or Python 2.1 if the + nested_scopes future statement is used, which are collectively + referred to as "future semantics" in this section, the compiler + will issue SyntaxErrors in some cases. + + The warnings typically apply when a function that contains a + nested function that has free variables. For example, if function + F contains a function G and G uses the builtin len(), then F is a + function that contains a nested function (G) with a free variable + (len). The label "free-in-nested" will be used to describe these + functions. + + import * used in function scope + + The language reference specifies that import * may only occur + in a module scope. (Sec. 6.11) The implementation of C + Python has supported import * at the function scope. + + If import * is used in the body of a free-in-nested function, + the compiler will issue a warning. Under future semantics, + the compiler will raise a SyntaxError. + + bare exec in function scope + + The exec statement allows two optional expressions following + the keyword "in" that specify the namespaces used for locals + and globals. An exec statement that omits both of these + namespaces is a bare exec. + + If a bare exec is used in the body of a free-in-nested + function, the compiler will issue a warning. Under future + semantics, the compiler will raise a SyntaxError. + + local shadows global + + If a free-in-nested function has a binding for a local + variable that (1) is used in a nested function and (2) is the + same as a global variable, the compiler will issue a warning. + Rebinding names in enclosing scopes There are technical issues that make it difficult to support rebinding of names in enclosing scopes, but the primary reason that it is not allowed in the current proposal is that Guido is opposed to it. It is difficult to support, because it would require a new mechanism that would allow the programmer to specify that an assignment in a block is supposed to rebind the name in an enclosing block; presumably a keyword or special syntax (x := 3) would make this possible. The proposed rules allow programmers to achieve the effect of rebinding, albeit awkwardly. The name that will be effectively rebound by enclosed functions is bound to a container object. In place of assignment, the program uses modification of the container to achieve the desired effect: def bank_account(initial_balance): balance = [initial_balance] def deposit(amount): balance[0] = balance[0] + amount return balance def withdraw(amount): balance[0] = balance[0] - amount return balance return deposit, withdraw Support for rebinding in nested scopes would make this code clearer. 
A class that defines deposit() and withdraw() methods and the balance as an instance variable would be clearer still. Since classes seem to achieve the same effect in a more straightforward manner, they are preferred. Implementation The implementation for C Python uses flat closures [1]. Each def or lambda statement that is executed will create a closure if the body of the function or any contained function has free variables. Using flat closures, the creation of closures is somewhat expensive but lookup is cheap. The implementation adds several new opcodes and two new kinds of names in code objects. A variable can be either a cell variable or a free variable for a particular code object. A cell variable is referenced by containing scopes; as a result, the function where it is defined must allocate separate storage for it on each - invocation. A free variable is reference via a function's closure. ? --------- + invocation. A free variable is referenced via a function's ? + + closure. + + The choice of free closures was made based on three factors. + First, nested functions are presumed to be used infrequently, + deeply nested (several levels of nesting) still less frequently. + Second, lookup of names in a nested scope should be fast. + Third, the use of nested scopes, particularly where a function + that access an enclosing scope is returned, should not prevent + unreferenced objects from being reclaimed by the garbage + collector. XXX Much more to say here References [1] Luca Cardelli. Compiling a functional language. In Proc. of the 1984 ACM Conference on Lisp and Functional Programming, pp. 208-217, Aug. 1984 http://citeseer.nj.nec.com/cardelli84compiling.html From tim.one at home.com Wed Feb 28 19:48:39 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 28 Feb 2001 13:48:39 -0500 Subject: [Python-Dev] Case-sensitive import In-Reply-To: <20010228143037.8F32D371690@snelboot.oratrix.nl> Message-ID: [Jack Jansen] > Why don't we handle this the same way as, say, PyOS_CheckStack()? > > I.e. if USE_CHECK_IMPORT_CASE is defined it is necessary to check > the case of the imported file (i.e. it's not defined on vanilla > unix, defined on most other platforms) and if it is defined we call > PyOS_CheckCase(filename, modulename). > All these routines can be in different files, for all I care, > similar to the dynload_*.c files. A. I want the code in the CVS tree. That some of your Mac code is not in the CVS tree creates problems for everyone (we can never guess whether we're breaking your code because we have no idea what your code is). B. PyOS_CheckCase() is not of general use. It's only of interest inside import.c, so it's better to live there as a static function. C. I very much enjoyed getting rid of the obfuscating #ifdef CHECK_IMPORT_CASE blocks in import.c! This code is hard enough to follow without distributing preprocessor tricks all over the place. Now they live only inside the body of case_ok(), where they're truly needed. That is, case_ok() is a perfectly sensible cross-platfrom abstraction, and *calling* code doesn't need to be bothered with how it's implemented-- or even whether it's needed --on various platfroms. On Linux, case_ok() reduces to the one-liner "return 1;", and I don't mind paying a function call in return for the increase in clarity inside find_module(). D. The schedule says we release the beta tomorrow <0.6 wink>. 
From Jason.Tishler at dothill.com Wed Feb 28 20:41:37 2001 From: Jason.Tishler at dothill.com (Jason Tishler) Date: Wed, 28 Feb 2001 14:41:37 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules _sre.c In-Reply-To: <048b01c0a1ac$f10cf920$e46940d5@hagrid>; from fredrik@pythonware.com on Wed, Feb 28, 2001 at 06:36:09PM +0100 References: <048b01c0a1ac$f10cf920$e46940d5@hagrid> Message-ID: <20010228144137.P449@dothill.com> Fredrik, On Wed, Feb 28, 2001 at 06:36:09PM +0100, Fredrik Lundh wrote: > tim indirectly wrote: > > > *** _sre.c 2001/01/16 07:37:30 2.52 > > --- _sre.c 2001/02/28 16:44:18 2.53 > [snip] > > after this change, the separate makefile I use to build _sre > on Windows no longer works (init_sre isn't exported). > > I don't really understand the code in config.h, but I've tried > defining USE_DL_EXPORT (gives linking problems) and > USE_DL_IMPORT (macro redefinition). USE_DL_EXPORT is to be defined only when building the Win32 (and Cygwin) DLL core not when building extensions. When building Win32 Python, USE_DL_IMPORT is implicitly defined in PC/config.h when USE_DL_EXPORT is not. Explicitly defining USE_DL_IMPORT will cause the macro redefinition warning indicated above -- but no other ill or good effect. Another way to solve your problem without using the "/export:init_sre" link option is by patching PC/config.h with the attached. When I was converting Cygwin Python to use a DLL core instead of a static library one, I wondered why the USE_DL_IMPORT case was missing the following: #define DL_EXPORT(RTYPE) __declspec(dllexport) RTYPE Anyway, sorry that I caused you some heartache. Jason P.S. If this patch is to be seriously considered, then the analogous change should be done for the other Win32 compilers (e.g. Borland). -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler at dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com -------------- next part -------------- Index: config.h =================================================================== RCS file: /cvsroot/python/python/dist/src/PC/config.h,v retrieving revision 1.49 diff -u -r1.49 config.h --- config.h 2001/02/28 08:15:16 1.49 +++ config.h 2001/02/28 19:16:52 @@ -118,6 +118,7 @@ #endif #ifdef USE_DL_IMPORT #define DL_IMPORT(RTYPE) __declspec(dllimport) RTYPE +#define DL_EXPORT(RTYPE) __declspec(dllexport) RTYPE #endif #ifdef USE_DL_EXPORT #define DL_IMPORT(RTYPE) __declspec(dllexport) RTYPE From Jason.Tishler at dothill.com Wed Feb 28 21:17:28 2001 From: Jason.Tishler at dothill.com (Jason Tishler) Date: Wed, 28 Feb 2001 15:17:28 -0500 Subject: [Python-Dev] Re: Case-sensitive import In-Reply-To: ; from tim.one@home.com on Wed, Feb 28, 2001 at 12:36:45PM -0500 References: <20010228120229.M449@dothill.com> Message-ID: <20010228151728.Q449@dothill.com> Tim, On Wed, Feb 28, 2001 at 12:36:45PM -0500, Tim Peters wrote: > I checked that patch in already, about 15 minutes after you uploaded it. Is > this service, or what?! Yes! Thanks again. > [Guido] > > That patch seems fine -- except that I'd like /F to have a quick look > > since it changes _sre.c. > > Too late and no need. What Jason did to _sre.c is *undo* some Cygwin > special-casing; Not really -- I was trying to get rid of WIN32 #ifdefs. My solution was to attempt to reuse the DL_EXPORT macro. 
Now I realize that I should have done the following instead: #if defined(WIN32) || defined(__CYGWIN__) __declspec(dllexport) #endif > /F will like that. Apparently not. > It's trivial anyway. I thought so too. > Jason, about this: > > However, using the next Cygwin gcc (i.e., 2.95.2-8 or later) will > require one to configure with: > > CC='gcc -mwin32' configure ... > > How can we make that info *useful* to people? I have posted to the Cygwin mailing list and C.L.P regarding my original 2.0 patches. I have also continue to post to Cygwin regarding 2.1a1 and 2.1a2. I intended to do likewise for 2.1b1, etc. > The target audience for the > Cygwin port probably doesn't search Python-Dev or the Python patches > database. Agreed -- the above was only offered to the curious Python-Dev person and not for archival purposes. > So it would be good if you thought about uploading an > informational patch to README and Misc/NEWS briefly telling Cygwin folks what > they need to know. If you do, I'll look for it and check it in. I will submit a patch to README to add a Cygwin section to "Platform specific notes". Unfortunately, I don't think that I can squeeze it in by 2.1b1. If not, then I will submit it for the next release (2.1b2 or 2.1 final). I also don't mind waiting for the Cygwin gcc stuff to settle down too. I know...excuses, excuses... Thanks, Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler at dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From tim.one at home.com Wed Feb 28 23:12:47 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 28 Feb 2001 17:12:47 -0500 Subject: [Python-Dev] test_inspect.py still fails under -O In-Reply-To: Message-ID: > python -O ../lib/test/test_inspect.py Traceback (most recent call last): File "../lib/test/test_inspect.py", line 172, in ? 'trace() row 1') File "../lib/test/test_inspect.py", line 70, in test raise TestFailed, message % args test_support.TestFailed: trace() row 1 > git.tr[0][1:] is ('@test', 8, 'spam', ['def spam(a, b, c, d=3, (e, (f,))=(4, (5,)), *g, **h):\n'], 0) at this point. The test expects it to be ('@test', 9, 'spam', [' eggs(b + d, c + f)\n'], 0) Test passes without -O. This was on Windows. Other platforms? From tim.one at home.com Wed Feb 28 23:21:02 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 28 Feb 2001 17:21:02 -0500 Subject: [Python-Dev] Re: Case-sensitive import In-Reply-To: <20010228151728.Q449@dothill.com> Message-ID: [Jason Tishler] > ... > Not really -- I was trying to get rid of WIN32 #ifdefs. My solution was > to attempt to reuse the DL_EXPORT macro. Now I realize that I should > have done the following instead: > > #if defined(WIN32) || defined(__CYGWIN__) > __declspec(dllexport) > #endif Na, you did good! If /F wants to bark at someone, he should bark at me, because I reviewed the patch carefully before checking it in and it's the same thing I would have done. MarkH and I have long-delayed plans to change these macro schemes to make some sense, and the existing DL_EXPORT uses-- no matter how useless now --will be handy to look for when we change the appropriate ones to, e.g., DL_MODULE_ENTRY_POINT macros (that always expand to the correct platform-specific decl gimmicks). _sre.c was the oddball here. > ... > I will submit a patch to README to add a Cygwin section to "Platform > specific notes". Unfortunately, I don't think that I can squeeze it in > by 2.1b1. 
If not, then I will submit it for the next release (2.1b2 or 2.1 > final). I also don't mind waiting for the Cygwin gcc stuff to settle > down too. I know...excuses, excuses... That's fine. You know the Cygwin audience better than I do -- as I've proved beyond reasonable doubt several times . And thank you for your Cygwin work -- someday I hope to use Cygwin for more than just running "patch" on this box ... From martin at loewis.home.cs.tu-berlin.de Wed Feb 28 23:19:13 2001 From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis) Date: Wed, 28 Feb 2001 23:19:13 +0100 Subject: [Python-Dev] PEP 236: an alternative to the __future__ syntax Message-ID: <200102282219.f1SMJDg04557@mira.informatik.hu-berlin.de> PEP 236 states that the intention of the proposed feature is to allow modules "to request that the code in module M use the new syntax or semantics in the current release C". It achieves this by introducing a new statement, the future_statement. This looks like an import statement, but isn't. The PEP author admits that 'overloading "import" does suck'. I agree (not surprisingly, since Tim added this QA item after we discussed it in email). It also says "But if we introduce a new keyword, that in itself would break old code". Here I disagree, and I propose patch 404997 as an alternative (https://sourceforge.net/tracker/index.php?func=detail&aid=404997&group_id=5470&atid=305470) In essence, with that patch, you would write directive nested_scopes instead of from __future__ import nested_scopes This looks like as it would add a new keyword directive, and thus break code that uses "directive" as an identifier, but it doesn't. In this release, "directive" is only a keyword if it is the first keyword in a file (i.e. potentially after a doc string, but not after any other keyword). So class directive: def __init__(self, directive): self.directive = directive continues to work as it did in previous releases (it does not even produce a warning, but could if desired). Only when you do directive nested_scopes directive braces class directive: def __init__(self, directive): self.directive = directive you get a syntax error, since "directive" is then a keyword in that module. The directive statement has a similar syntax to the C #pragma "statement", in that each directive has a name and an optional argument. The choice of the keyword "directive" is somewhat arbitrary; it was deliberately not "pragma", since that implies an implementation-defined semantics (which directive does not have). In terms of backwards compatibility, it behaves similar to "from __future__ import ...": older releases will give a SyntaxError for the directive syntax (instead of an ImportError, which a __future__ import will give). "Unknown" directives will also give a SyntaxError, similar to the ImportError from the __future__ import. Please let me know what you think. If you think this should be written down in a PEP, I'd request that the specification above is added into PEP 236. Regards, Martin From fredrik at effbot.org Wed Feb 28 23:42:56 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Wed, 28 Feb 2001 23:42:56 +0100 Subject: [Python-Dev] test_inspect.py still fails under -O References: Message-ID: <06c501c0a1d7$cdd346f0$e46940d5@hagrid> tim wrote: > git.tr[0][1:] is > > ('@test', 8, 'spam', > ['def spam(a, b, c, d=3, (e, (f,))=(4, (5,)), *g, **h):\n'], > 0) > > at this point. The test expects it to be > > ('@test', 9, 'spam', > [' eggs(b + d, c + f)\n'], > 0) > > Test passes without -O. 
the code doesn't take LINENO optimization into account. tentative patch follows: Index: Lib/inspect.py =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/inspect.py,v retrieving revision 1.2 diff -u -r1.2 inspect.py --- Lib/inspect.py 2001/02/28 08:26:44 1.2 +++ Lib/inspect.py 2001/02/28 22:35:49 @@ -561,19 +561,19 @@ filename = getsourcefile(frame) if context > 0: - start = frame.f_lineno - 1 - context/2 + start = _lineno(frame) - 1 - context/2 try: lines, lnum = findsource(frame) start = max(start, 1) start = min(start, len(lines) - context) lines = lines[start:start+context] - index = frame.f_lineno - 1 - start + index = _lineno(frame) - 1 - start except: lines = index = None else: lines = index = None - return (filename, frame.f_lineno, frame.f_code.co_name, lines, index) + return (filename, _lineno(frame), frame.f_code.co_name, lines, index) def getouterframes(frame, context=1): """Get a list of records for a frame and all higher (calling) frames. @@ -614,3 +614,26 @@ def trace(context=1): """Return a list of records for the stack below the current exception.""" return getinnerframes(sys.exc_traceback, context) + +def _lineno(frame): + # Coded by Marc-Andre Lemburg from the example of PyCode_Addr2Line() + # in compile.c. + # Revised version by Jim Hugunin to work with JPython too. + # Adapted for inspect.py by Fredrik Lundh + + lineno = frame.f_lineno + + c = frame.f_code + if not hasattr(c, 'co_lnotab'): + return tb.tb_lineno + + tab = c.co_lnotab + line = c.co_firstlineno + stopat = frame.f_lasti + addr = 0 + for i in range(0, len(tab), 2): + addr = addr + ord(tab[i]) + if addr > stopat: + break + line = line + ord(tab[i+1]) + return line Cheers /F From tim.one at home.com Wed Feb 28 23:42:16 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 28 Feb 2001 17:42:16 -0500 Subject: [Python-Dev] PEP 236: an alternative to the __future__ syntax In-Reply-To: <200102282219.f1SMJDg04557@mira.informatik.hu-berlin.de> Message-ID: [Martin v. Loewis] > ... > If you think this should be written down in a PEP, Yes. > I'd request that the specification above is added into PEP 236. No -- PEP 236 is not a general directive PEP, no matter how much that what you *want* is a general directive PEP. I'll add a Q/A pair to 236 about why it's not a general directive PEP, but that's it. PEP 236 stands on its own for what it's designed for; your PEP should stand on its own for what *it's* designed for (which isn't nested_scopes et alia, it's character encodings). (BTW, there is no patch attached to patch 404997 -- see other recent msgs about people having problems uploading files to SF; maybe you could just put a patch URL in a comment now?] From fredrik at effbot.org Wed Feb 28 23:49:57 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Wed, 28 Feb 2001 23:49:57 +0100 Subject: [Python-Dev] test_inspect.py still fails under -O References: <06c501c0a1d7$cdd346f0$e46940d5@hagrid> Message-ID: <071401c0a1d8$c830e7b0$e46940d5@hagrid> I wrote: > + lineno = frame.f_lineno > + > + c = frame.f_code > + if not hasattr(c, 'co_lnotab'): > + return tb.tb_lineno that "return" statement should be: return lineno Cheers /F From guido at digicool.com Wed Feb 28 23:48:51 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 28 Feb 2001 17:48:51 -0500 Subject: [Python-Dev] PEP 236: an alternative to the __future__ syntax In-Reply-To: Your message of "Wed, 28 Feb 2001 23:19:13 +0100." 
<200102282219.f1SMJDg04557@mira.informatik.hu-berlin.de> References: <200102282219.f1SMJDg04557@mira.informatik.hu-berlin.de> Message-ID: <200102282248.RAA31007@cj20424-a.reston1.va.home.com> Martin, this looks nice, but where's the patch? (Not in the patch mgr.) We're planning the b1 release for Friday -- in two days. We need some time for our code base to stabilize. There's one downside to the "directive" syntax: other tools that parse Python will have to be adapted. The __future__ hack doesn't need that. --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Wed Feb 28 23:52:33 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 28 Feb 2001 17:52:33 -0500 Subject: [Python-Dev] Very recent test_global failure Message-ID: Windows. > python ../lib/test/regrtest.py test_global test_global :2: SyntaxWarning: name 'a' is assigned to before global declaration :2: SyntaxWarning: name 'b' is assigned to before global declaration The actual stdout doesn't match the expected stdout. This much did match (between asterisk lines): ********************************************************************** test_global ********************************************************************** Then ... We expected (repr): 'got SyntaxWarning as e' But instead we got: 'expected SyntaxWarning' test test_global failed -- Writing: 'expected SyntaxWarning', expected: 'got SyntaxWarning as e' 1 test failed: test_global > From jeremy at alum.mit.edu Wed Feb 28 23:40:05 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 28 Feb 2001 17:40:05 -0500 (EST) Subject: [Python-Dev] Very recent test_global failure In-Reply-To: References: Message-ID: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net> Just fixed. Guido's new, handy-dandy warning helper for the compiler checks for a warning that has been turned into an error. If the warning becomes an error, the SyntaxWarning is replaced with a SyntaxError. The change broke this test, but was otherwise a good thing. It allows reasonable tracebacks to be produced. Jeremy From jeremy at alum.mit.edu Wed Feb 28 23:48:15 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 28 Feb 2001 17:48:15 -0500 (EST) Subject: [Python-Dev] Very recent test_global failure In-Reply-To: References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net> Oops. Missed a checkin to symtable.h. unix-users-prepare-to-recompile-everything-ly y'rs, Jeremy From fred at digicool.com Wed Feb 28 23:35:46 2001 From: fred at digicool.com (Fred L. Drake, Jr.) Date: Wed, 28 Feb 2001 17:35:46 -0500 (EST) Subject: [Python-Dev] Re: puzzled about old checkin to pythonrun.c In-Reply-To: <15004.35944.494314.814348@w221.z064000254.bwi-md.dsl.cnc.net> References: <15004.35944.494314.814348@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <15005.32066.814181.946890@localhost.localdomain> Jeremy Hylton writes: > You made a change to the syntax error generation code last August. > I don't understand what the code is doing. It appears that the code > you added is redundant, but it's hard to tell for sure because > responsbility for generating well-formed SyntaxErrors is spread > across several files. This is probably the biggest reason it's taken so long to get things into the ballpark! > The code you added in pythonrun.c, line 1084, in err_input(), starts > with the test (v != NULL): I've ripped all that out. > Can you shed any light? Was this all the light you needed? 
Or was there something deeper that I'm missing? -Fred -- Fred L. Drake, Jr. PythonLabs at Digital Creations From moshez at zadka.site.co.il Thu Feb 1 14:17:53 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Thu, 1 Feb 2001 15:17:53 +0200 (IST) Subject: [Python-Dev] Re: Sets: elt in dict, lst.include - really begs for a PEP In-Reply-To: References: Message-ID: <20010201131753.C8CB1A840@darjeeling.zadka.site.co.il> On Thu, 1 Feb 2001 03:31:33 -0800 (PST), Ka-Ping Yee wrote: [about for (k, v) in dict.iteritems(): ] > I remember considering this solution when i was writing the PEP. > The problem with it is that it isn't backward-compatible. It won't > work on existing dictionary-like objects -- it just introduces > another method that we then have to go back and implement on everything, > which kind of defeats the point of the whole proposal. Well, in that case we have differing views on the point of the whole proposal. I won't argue -- I think all the opinions have been aired, and it should be pronounced upon. > The other problem with this is that it isn't feasible in practice > unless 'for' can magically detect when the thing is a sequence and > when it's an iterator. I don't see any obvious solution to this dict.iteritems() could return not an iterator, but a magical object whose iterator is the requested iterator. Ditto itervalues(), iterkeys() -- Moshe Zadka This is a signature anti-virus. Please stop the spread of signature viruses! Fingerprint: 4BD1 7705 EEC0 260A 7F21 4817 C7FC A636 46D0 1BD6 From jeremy at alum.mit.edu Thu Feb 1 17:21:30 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Thu, 1 Feb 2001 11:21:30 -0500 (EST) Subject: [Python-Dev] any opinion on 'make quicktest'? Message-ID: <14969.36106.386207.593290@w221.z064000254.bwi-md.dsl.cnc.net> I run the regression test a lot. I have found that it is often useful to exclude some of the slowest tests for most of the test runs and then do a full test run before I commit changes. Would anyone be opposed to a quicktest target in the Makefile that supports this practice? There are a small number of tests that each take at least 10 seconds to complete. Jeremy Index: Makefile.pre.in =================================================================== RCS file: /cvsroot/python/python/dist/src/Makefile.pre.in,v retrieving revision 1.8 diff -c -r1.8 Makefile.pre.in *** Makefile.pre.in 2001/01/29 20:18:59 1.8 --- Makefile.pre.in 2001/02/01 16:19:37 *************** *** 472,477 **** --- 472,484 ---- -PYTHONPATH= $(TESTPYTHON) $(TESTPROG) $(TESTOPTS) PYTHONPATH= $(TESTPYTHON) $(TESTPROG) $(TESTOPTS) + QUICKTESTOPTS= $(TESTOPTS) -x test_thread test_signal test_strftime \ + test_unicodedata test_re test_sre test_select test_poll + quicktest: all platform + -rm -f $(srcdir)/Lib/test/*.py[co] + -PYTHONPATH= $(TESTPYTHON) $(TESTPROG) $(QUICKTESTOPTS) + PYTHONPATH= $(TESTPYTHON) $(TESTPROG) $(QUICKTESTOPTS) + # Install everything install: altinstall bininstall maninstall From greg at cosc.canterbury.ac.nz Thu Feb 1 00:21:04 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Thu, 01 Feb 2001 12:21:04 +1300 (NZDT) Subject: [Python-Dev] Re: Sets: elt in dict, lst.include - really begs for a PEP In-Reply-To: <14968.16962.830739.920771@anthem.wooz.org> Message-ID: <200101312321.MAA03263@s454.cosc.canterbury.ac.nz> barry at digicool.com (Barry A. Warsaw): > for key in dict.iterator(KEYS) > for value in dict.iterator(VALUES) > for key, value in dict.iterator(ITEMS) Yuck. 
I don't like any of this "for x in y.iterator_something()" stuff. The things you're after aren't "in" the iterator, they're "in" the dict. I don't want to know that there are iterators involved. We seem to be coming up with more and more convoluted ways to say things that should be very straightforward. Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg at cosc.canterbury.ac.nz +--------------------------------------+ From tim.one at home.com Thu Feb 1 00:25:54 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 31 Jan 2001 18:25:54 -0500 Subject: [Python-Dev] Making mutable objects readonly In-Reply-To: <200101301500.KAA25733@cj20424-a.reston1.va.home.com> Message-ID: [Ping] > Is a frozen list hashable? [Guido] > Yes -- that's what started this thread (using dicts as dict keys, > actually). Except this doesn't actually work unless list.freeze() recursively ensures that all elements in the list are frozen too: >>> hash((1, 2)) 219750523 >>> hash((1, [2])) Traceback (most recent call last): File " ", line 1, in ? TypeError: unhashable type >>> That bothered me in Eric's original suggestion: unless x.freeze() does a traversal of all objects reachable from x, it doesn't actually make x safe against modification (except at the very topmost level). But doing such a traversal isn't what *everyone* would want either (as with "const" in C, I expect the primary benefit would be the chance to spend countless hours worming around it in both directions ). [Skip] > If you want immutable dicts or lists in order to use them as > dictionary keys, just serialize them first: > > survey_says = {"spam": 14, "eggs": 42} > sl = marshal.dumps(survey_says) > dict[sl] = "spam" marshal.dumps(dict) isn't canonical, though. That is, it may well be that d1 == d2 but dumps(d1) != dumps(d2). Even materializing dict.values(), then sorting it, then marshaling *that* isn't enough; e.g., consider {1: 1} and {1: 1L}. The latter example applies to marshaling lists too. From greg at cosc.canterbury.ac.nz Thu Feb 1 00:34:50 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Thu, 01 Feb 2001 12:34:50 +1300 (NZDT) Subject: [Python-Dev] Making mutable objects readonly In-Reply-To: <14968.14631.419491.440774@beluga.mojam.com> Message-ID: <200101312334.MAA03267@s454.cosc.canterbury.ac.nz> Skip Montanaro : > Can someone give me an example where this is actually useful and > can't be handled through some existing mechanism? I can envisage cases where you want to build a data structure incrementally, and then treat it as immutable so you can use it as a dict key, etc. There's currently no way to do that to a list without copying it. So, it could be handy to have a way of turning a list into a tuple in-place. It would have to be a one-way transformation, otherwise you could start using it as a dict key, make it mutable again, and cause havoc. Suggested implementation: When you allocate the space for the values of a list, leave enough room for the PyObject_HEAD of a tuple at the beginning. Then you can turn that memory block into a real tuple later, and flag the original list object as immutable so you can't change it later via that route. Hmmm, would waste a bit of space for each list object. Maybe this should be a special list-about-to-become-tuple type. (Tist? Luple?) 
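For comparison, a small sketch of how that effect has to be obtained
today, by copying the list into a tuple rather than converting it in
place (the names below are invented for illustration):

    key = []
    key.append(1)
    key.append(2)
    frozen = tuple(key)     # O(n) copy -- the step an in-place
                            # list-to-tuple conversion would avoid
    cache = {}
    cache[frozen] = 'spam'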
Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg at cosc.canterbury.ac.nz +--------------------------------------+ From tim.one at home.com Thu Feb 1 00:36:48 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 31 Jan 2001 18:36:48 -0500 Subject: [Python-Dev] RE: [Patch #103203] PEP 205: weak references implementation In-Reply-To: Message-ID: > Patch #103203 has been updated. > > Project: python > Category: core (C code) > Status: Open > Submitted by: fdrake > Assigned to : tim_one > Summary: PEP 205: weak references implementation Fred, just noticed the new "assigned to". If you don't think it's a disaster(*), check it in! That will force more eyeballs on it quickly, and the quicker the better. I'm simply not going to do a decent review quickly on something this large starting cold. More urgently, I've been working long hours every day for several weeks, and need a break so I don't screw up last-second crises tomorrow. has-12-hours-of-taped-professional-wrestling-to-catch-up-on-ly y'rs - tim (*) otoh, if you do think it's a disaster, withdraw it for 2.1. From greg at cosc.canterbury.ac.nz Thu Feb 1 00:54:45 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Thu, 01 Feb 2001 12:54:45 +1300 (NZDT) Subject: [Python-Dev] Generator protocol? (Re: Sets: elt in dict, lst.include) In-Reply-To: <20010131063007.536ACA83E@darjeeling.zadka.site.co.il> Message-ID: <200101312354.MAA03272@s454.cosc.canterbury.ac.nz> Moshe Zadka : > Tim's "try to use that to write something that > will return the nodes of a binary tree" still haunts me. Instead of an iterator protocol, how about a generator protocol? Now that we're getting nested scopes, it should be possible to arrange it so that for x in thing: ...stuff... gets compiled as something like def _body(x): ...stuff... thing.__generate__(_body) (Actually it would be more complicated than that - for backward compatibility you'd want a new bytecode that would look for a __generator__ attribute and emulate the old iteration protocol otherwise.) Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg at cosc.canterbury.ac.nz +--------------------------------------+ From greg at cosc.canterbury.ac.nz Thu Feb 1 00:57:39 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Thu, 01 Feb 2001 12:57:39 +1300 (NZDT) Subject: [Python-Dev] codecity.com In-Reply-To: <200101310521.AAA31653@cj20424-a.reston1.va.home.com> Message-ID: <200101312357.MAA03275@s454.cosc.canterbury.ac.nz> > Should I spread this word, or is this a joke? I'm not sure what answering trivia questions has to do with the stated intention of "teaching jr. programmers how to write code". Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. 
| greg at cosc.canterbury.ac.nz +--------------------------------------+ From greg at cosc.canterbury.ac.nz Thu Feb 1 00:59:33 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Thu, 01 Feb 2001 12:59:33 +1300 (NZDT) Subject: [Python-Dev] Re: Sets: elt in dict, lst.include In-Reply-To: <200101310049.TAA30197@cj20424-a.reston1.va.home.com> Message-ID: <200101312359.MAA03278@s454.cosc.canterbury.ac.nz> Guido van Rossum : > But it *is* true that coroutines are a very attractice piece of land > "just nextdoor". Unfortunately there's a big high fence in between topped with barbed wire and patrolled by vicious guard dogs. :-( Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg at cosc.canterbury.ac.nz +--------------------------------------+ From jeremy at alum.mit.edu Thu Feb 1 01:36:11 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 31 Jan 2001 19:36:11 -0500 (EST) Subject: [Python-Dev] rethinking import-related syntax errors In-Reply-To: <200101302042.PAA29301@cj20424-a.reston1.va.home.com> References: <20010130075515.X962@xs4all.nl> <200101301506.KAA25763@cj20424-a.reston1.va.home.com> <20010130165204.I962@xs4all.nl> <200101302042.PAA29301@cj20424-a.reston1.va.home.com> Message-ID: <14968.44923.774323.757343@w221.z064000254.bwi-md.dsl.cnc.net> I'd like to summarize the thread prompted by the compiler changes that implemented long-stated restrictions in the ref manual and ask a related question about backwards compatibility. The two changes were: 1. If a name is declared global in a function scope, it is an error to import with that name as a target. Example: def foo(): global string import string # error 2. It is illegal to use 'from ... import *' in a function. Example: def foo(): from string import * I believe Guido's recommendation about these two rules are: 1. Allow it, even though it dodgy style. A two-stager would be clearer: def foo(): global string import string as string_mod string = string_mod 2. Keep the restriction, because it's really bad style. It can also cause subtle problems with nested scopes. Example: def f(): from string import * def g(): return strip .... It might be reasonable to expect that strip would refer to the binding introduced by "from string import *" but there is no reasonable way to support this. The other issue raised was the two extra arguments to new.code(). I'll move those to the end and make them optional. The related question is whether I should worry about backwards compatibility at the C level. PyFrame_New(), PyFunction_New(), and PyCode_New() all have different signatures. Should I do anything about this? Jeremy From pedroni at inf.ethz.ch Thu Feb 1 02:42:08 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Thu, 1 Feb 2001 02:42:08 +0100 Subject: [Python-Dev] weak refs and jython Message-ID: <004101c08bf0$3158f7e0$de5821c0@newmexico> [Maybe this a 2nd copy of the message, sorry] Hi. [Fred L. Drake, Jr.] > > Java weak refs cannot be resurrected. > > This is certainly annoying. > > How about this: the callback receives the weak reference object or > > proxy which it was registered on as a parameter. Since the reference > > has already been cleared, there's no way to get the object back, so we > > don't need to get it from Java either. > > Would that be workable? (I'm adjusting my patch now.) 
Yes, it is workable: clearly we can implement weak refs only under java2 but this is not (really) an issue. We can register the refs in a java reference queue, and poll it lazily or trough a low-priority thread in order to invoke the callbacks. -- Some remarks I have used java weak/soft refs to implement some of the internal tables of jython in order to avoid memory leaks, at least under java2. I imagine that the idea behind callbacks plus resurrection was to enable the construction of sofisticated caches. My intuition is that these features are not present under java because they will interfere too much with gc and have a performance penalty. On the other hand java offers reference queues and soft references, the latter cover the common case of caches that should be cleared when there is few memory left. (Never tried them seriously, so I don't know if the actual impl is fair, or will just wait too much starting to discard things => behavior like primitives gc). The main difference I see between callbacks and queues approach is that with queues is this left to the user when to do the actual cleanup of his tables/caches, and handling queues internally has a "low" overhead. With callbacks what happens depends really on the collection times/patterns and the overhead is related to call overhead and how much is non trivial, what the user put in the callbacks. Clearly general performance will not be easily predictable. (From a theoretical viewpoint one can simulate more or less queues with callbacks and the other way around). Resurrection makes few sense with queues, but I can easely see that lacking of both resurrection and soft refs limits what can be done with weak-like refs. Last thing: one of the things that is really missing in java refs features is that one cannot put conditions of the form as long A is not collected B should not be collected either. Clearly I'm referring to situation when one cannot modify the class of A in order to add a field, which is quite typical in java. This should not be a problem with python and its open/dynamic way-of-life. regards, Samuele Pedroni. > From ping at lfw.org Thu Feb 1 12:31:33 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Thu, 1 Feb 2001 03:31:33 -0800 (PST) Subject: [Python-Dev] Re: Sets: elt in dict, lst.include - really begs for a PEP In-Reply-To: <14968.16962.830739.920771@anthem.wooz.org> Message-ID: Moshe Zadka wrote: > Basic response: I *love* the iter(), sq_iter and __iter__ > parts. I tremble at seeing the rest. Why not add a method to > dictionaries .iteritems() and do > > for (k, v) in dict.iteritems(): > pass > > (dict.iteritems() would return an an iterator to the items) Barry Warsaw wrote: > Moshe, I had exactly the same reaction and exactly the same idea. I'm > a strong -1 on introducing new syntax for this when new methods can > handle it in a much more readable way (IMO). I remember considering this solution when i was writing the PEP. The problem with it is that it isn't backward-compatible. It won't work on existing dictionary-like objects -- it just introduces another method that we then have to go back and implement on everything, which kind of defeats the point of the whole proposal. (One of the Big Ideas is to let the 'for' syntax mean "just do whatever you have to do to iterate" and we let it worry about the details.) The other problem with this is that it isn't feasible in practice unless 'for' can magically detect when the thing is a sequence and when it's an iterator. 
I don't see any obvious solution to this (aside from "instead of an iterator, implement a whole sequence-like object using the __getitem__ protocol" -- and then we'd be back to square one). I personally find this: for key:value in dict: much clearer than either of these: for (k, v) in dict.iteritems(): for key, value in dict.iterator(ITEMS): There's less to read and less punctuation in the first, and there's a natural parallel: seq = [1, 4, 7] for item in seq: ... dict = {2:3, 4:5} for key:value in dict: ... -- ?!ng Two links diverged in a Web, and i -- i took the one less travelled by. -- with apologies to Robert Frost From thomas at xs4all.net Thu Feb 1 08:55:01 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Thu, 1 Feb 2001 08:55:01 +0100 Subject: [Python-Dev] Re: rethinking import-related syntax errors In-Reply-To: <14968.44923.774323.757343@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Wed, Jan 31, 2001 at 07:36:11PM -0500 References: <20010130075515.X962@xs4all.nl> <200101301506.KAA25763@cj20424-a.reston1.va.home.com> <20010130165204.I962@xs4all.nl> <200101302042.PAA29301@cj20424-a.reston1.va.home.com> <14968.44923.774323.757343@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010201085501.K922@xs4all.nl> On Wed, Jan 31, 2001 at 07:36:11PM -0500, Jeremy Hylton wrote: > I believe Guido's recommendation about these two rules are: > 1. Allow it, even though it dodgy style. A two-stager would be > clearer: > def foo(): > global string > import string as string_mod > string = string_mod I don't think it's dodgy style, and I don't think a two-stager would be clearer, since the docs always claim 'importing is just another assignment statement'. The whole 'import-as' was added to *avoid* these two-stagers! Furthermore, since 'global string;import string' worked correctly at least since Python 1.5 and probably much longer, I suspect it'll break some code and confuse some more programmers out there. To handle this 'portably' (between Python versions, because lets be honest: Python 2.0 is far from common right now, and I can't blame people for not upgrading with the licence issues and all), the programmer would have to do def assign_global_string(name): global string string = name def foo(): import string assign_global_string(name) or even def foo(): def assign_global_string(name): global string string = name import string assign_global_string(name) (Keeping in mind nested scopes, what would *you* expect the last one to do ?) I honestly think def foo(): global string import string is infinitely clearer. > 2. Keep the restriction, because it's really bad style. It can > also cause subtle problems with nested scopes. Example: > def f(): > from string import * > def g(): > return strip > .... > It might be reasonable to expect that strip would refer to the > binding introduced by "from string import *" but there is no > reasonable way to support this. I'm still not entirely comfortable with disallowing this (rewriting code that uses it would be a pain, especially large functions) but I have good hopes that this won't be necessary because nothing large uses this :) Still, it would be nice if the compiler would only barf if someone uses 'from ... import *' in a local scope *and* references unbound names in a nested scope. I can see how that would be a lot of trouble for a little bit of gain, though. > The related question is whether I should worry about backwards > compatibility at the C level. 
PyFrame_New(), PyFunction_New(), and > PyCode_New() all have different signatures. Should I do anything > about this? Well, it could be done, maybe renaming the functions and doing something like #ifdef OLD_CODE_CREATION #define PyFrame_New PyFrame_OldNew ... etc, to allow quick porting to Python 2.1. I have never seen C code create code/function/frame objects by itself, though, so I'm not sure if it's worth it. The Python bit is, since it's a lot less trouble to fix it and a lot more common to use the 'new' object. -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From fdrake at acm.org Thu Feb 1 18:08:49 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Thu, 1 Feb 2001 12:08:49 -0500 (EST) Subject: [Python-Dev] Re: 2.1a2 release issues; mail.python.org still down In-Reply-To: <3A798F14.D389A4A9@lemburg.com> References: <14969.31398.706875.540775@w221.z064000254.bwi-md.dsl.cnc.net> <3A798F14.D389A4A9@lemburg.com> Message-ID: <14969.38945.771075.55993@cj42289-a.reston1.va.home.com> [Pushing this to python-dev w/out M-A's permission, now that mail is starting to flow again.] M.-A. Lemburg writes: > Another issue: importing old extensions now causes a core dump > due to the new slots for weak refs beind written to. I think(!) this should only affect really modules from 1.5.? and earlier; type objects compiled after tp_xxx7/tp_xxx8 were added *should not* have a problem with this. You don't give enough information for me to be sure. Please let me know more if I'm wrong (possible!). The only way I can see that there would be a problem like this is if the type object contains a positive value for the tp_weaklistoffset field (formerly tp_xxx8). > Solution: in addition to printing a warning, the _PyModule_Init() > APIs should ignore all modules having an API level < 1010. For the specific problem you mention, we could add a type flag (Py_TPFLAGS_HAVE_WEAKREFS) that could be tested; it would be set in Py_TPFLAGS_DEFAULT. On the other hand, I'd be perfectly happy to "ignore" modules with the older C API version (especially if "ignore" lets me call Py_FatalError()!). The API version changed because of the changes to the function signatures of PyCode_New() and PyFrame_New(); these both require additional parameters in API version 1010. -Fred -- Fred L. Drake, Jr. PythonLabs at Digital Creations From skip at mojam.com Thu Feb 1 18:33:32 2001 From: skip at mojam.com (Skip Montanaro) Date: Thu, 1 Feb 2001 11:33:32 -0600 (CST) Subject: [Python-Dev] Re: Sets: elt in dict, lst.include In-Reply-To: References: <14968.37210.886842.820413@beluga.mojam.com> Message-ID: <14969.40428.977831.274322@beluga.mojam.com> >> What would break if we decided to simply add __getitem__ (and other >> sequence methods) to list object's method table? Ping> That would work for lists, but not for any extension types that Ping> use the sq_* protocol to behave like sequences. Could extension writers add those methods to their modules? I know I'm really getting off-topic here, but the whole visible interface idea crops up from time-to-time. I guess I'm just nibbling around the edges a bit to try and understand the problem better. Skip From jeremy at alum.mit.edu Thu Feb 1 20:04:10 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Thu, 1 Feb 2001 14:04:10 -0500 (EST) Subject: [Python-Dev] insertdict slower? 
Message-ID: <14969.45866.143924.870843@w221.z064000254.bwi-md.dsl.cnc.net> I was curious about what the DictCreation microbenchmark in pybench was slower (about 15%) with 2.1 than with 2.0. I ran both with profiling enabled (-pg, no -O) and see that insertdict is a fair bit slower in 2.1. Anyone with dict implementation expertise want to hazard a guess about this? The profiler indicates the insertdict() is about 30% slower in 2.1, when the keys are all ints. int_hash() isn't any slower, but dict_ass_sub() is about 50% slower. Of course, this is a microbenchmark that focuses on one tiny corner of dictionary usage: creating dictionaries with integer keys. This may not be a very useful measure of dictionary performance. Jeremy Results for Python 2.0 Flat profile: Each sample counts as 0.01 seconds. % cumulative self self total time seconds seconds calls ms/call ms/call name 54.55 3.90 3.90 285 13.68 19.25 eval_code2 6.71 4.38 0.48 4500875 0.00 0.00 lookdict 5.17 4.75 0.37 3000299 0.00 0.00 dict_dealloc 5.03 5.11 0.36 4506429 0.00 0.00 PyDict_SetItem 3.78 5.38 0.27 4500170 0.00 0.00 PyObject_SetItem 2.94 5.59 0.21 1500670 0.00 0.00 dictresize 2.80 5.79 0.20 4513037 0.00 0.00 insertdict 2.52 5.97 0.18 3000333 0.00 0.00 PyDict_New 2.38 6.14 0.17 4510126 0.00 0.00 PyObject_Hash 2.38 6.31 0.17 4500459 0.00 0.00 int_hash 2.24 6.47 0.16 3006844 0.00 0.00 gc_list_append 2.10 6.62 0.15 4500115 0.00 0.00 dict_ass_sub 1.68 6.74 0.12 3006759 0.00 0.00 gc_list_remove 1.68 6.86 0.12 3001745 0.00 0.00 PyObject_Init 1.26 6.95 0.09 3005413 0.00 0.00 _PyGC_Insert Results for Python 2.1 Flat profile: Each sample counts as 0.01 seconds. % cumulative self self total time seconds seconds calls ms/call ms/call name 50.00 3.83 3.83 998 3.84 3.84 eval_code2 6.40 4.32 0.49 4520965 0.00 0.00 lookdict 4.70 4.68 0.36 4519083 0.00 0.00 PyDict_SetItem 4.70 5.04 0.36 3001756 0.00 0.00 dict_dealloc 4.18 5.36 0.32 4500441 0.00 0.00 PyObject_SetItem 3.39 5.62 0.26 4531084 0.00 0.00 insertdict 3.00 5.85 0.23 4500354 0.00 0.00 dict_ass_sub 2.48 6.04 0.19 4507608 0.00 0.00 int_hash 2.35 6.22 0.18 4576793 0.00 0.00 PyObject_Hash 2.22 6.39 0.17 3003590 0.00 0.00 PyObject_Init 2.22 6.56 0.17 3002045 0.00 0.00 PyDict_New 2.22 6.73 0.17 1502861 0.00 0.00 dictresize 1.96 6.88 0.15 3023157 0.00 0.00 gc_list_remove 1.70 7.01 0.13 3020996 0.00 0.00 _PyGC_Remove 1.57 7.13 0.12 3023452 0.00 0.00 gc_list_append From mal at lemburg.com Thu Feb 1 18:43:52 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Thu, 01 Feb 2001 18:43:52 +0100 Subject: [Python-Dev] Re: 2.1a2 release issues; mail.python.org still down References: <14969.31398.706875.540775@w221.z064000254.bwi-md.dsl.cnc.net> <3A798F14.D389A4A9@lemburg.com> <14969.38945.771075.55993@cj42289-a.reston1.va.home.com> Message-ID: <3A79A058.772239C2@lemburg.com> "Fred L. Drake, Jr." wrote: > > M.-A. Lemburg writes: > > Another issue: importing old extensions now causes a core dump > > due to the new slots for weak refs beind written to. > > I think(!) this should only affect really modules from 1.5.? and > earlier; type objects compiled after tp_xxx7/tp_xxx8 were added > *should not* have a problem with this. You don't give enough > information for me to be sure. Please let me know more if I'm wrong > (possible!). I've only tested these using my mx tools compiled against 1.5 -- really old, I know, but I still actively use that version. tp_xxx7/8 were added in Python 1.5.2, I think, so writing to them causes the core dump. 
> The only way I can see that there would be a problem like this is if > the type object contains a positive value for the tp_weaklistoffset > field (formerly tp_xxx8). > > > Solution: in addition to printing a warning, the _PyModule_Init() > > APIs should ignore all modules having an API level < 1010. > > For the specific problem you mention, we could add a type flag > (Py_TPFLAGS_HAVE_WEAKREFS) that could be tested; it would be set in > Py_TPFLAGS_DEFAULT. That would work, but is it really worth it ? The APIs have changed considerably, so the fact that I got away with a warning in Python2.0 doesn't really mean anything -- I do have a problem now, though, since maintaining versions for 1.5, 1.5.2, 2.0 and 2.1 will be a pain :-/ > On the other hand, I'd be perfectly happy to "ignore" modules with > the older C API version (especially if "ignore" lets me call > Py_FatalError()!). The API version changed because of the changes to > the function signatures of PyCode_New() and PyFrame_New(); these both > require additional parameters in API version 1010. Py_FatalError() is a bit too harsh, I guess. Wouldn't it suffice to raise an ImportError exception and have Py_InitModule() return NULL in case a module with an incompatible API version is encountered ? BTW, what happened to the same problem on Windows ? Do users still get a seg fault ? -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From fdrake at acm.org Thu Feb 1 18:48:48 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Thu, 1 Feb 2001 12:48:48 -0500 (EST) Subject: [Python-Dev] Re: 2.1a2 release issues; mail.python.org still down In-Reply-To: <3A79A058.772239C2@lemburg.com> References: <14969.31398.706875.540775@w221.z064000254.bwi-md.dsl.cnc.net> <3A798F14.D389A4A9@lemburg.com> <14969.38945.771075.55993@cj42289-a.reston1.va.home.com> <3A79A058.772239C2@lemburg.com> Message-ID: <14969.41344.176815.821673@cj42289-a.reston1.va.home.com> M.-A. Lemburg writes: > I've only tested these using my mx tools compiled against 1.5 -- > really old, I know, but I still actively use that version. tp_xxx7/8 > were added in Python 1.5.2, I think, so writing to them causes > the core dump. Yep. I said: > For the specific problem you mention, we could add a type flag > (Py_TPFLAGS_HAVE_WEAKREFS) that could be tested; it would be set in > Py_TPFLAGS_DEFAULT. M-A replied: > That would work, but is it really worth it ? The APIs have changed > considerably, so the fact that I got away with a warning in Python2.0 No, which is why I'm happy to tell you to recomple your extensions. > doesn't really mean anything -- I do have a problem now, though, > since maintaining versions for 1.5, 1.5.2, 2.0 and 2.1 will > be a pain :-/ Unless you're using PyCode_New() or PyFrame_New(), recompiling the extension should be all you'll need -- unless you're pulling stunts like ExtensionClass does (defining a type-like object using an old definition of PyTypeObject). If any of the functions you're calling have changed signatures, you'll need to update them anyway. The weakref support will not cause you to change your code unless you want to be able to refer to your extension types via weak refs. > Py_FatalError() is a bit too harsh, I guess. Wouldn't it > suffice to raise an ImportError exception and have Py_InitModule() > return NULL in case a module with an incompatible API version is > encountered ? 
I suppose we could do that, but it'll take more than my agreement to make that happen. Guido seemed to think that few modules will be calling PyCode_New() and PyFrame_New() directly (pyexpat being the exception). -Fred -- Fred L. Drake, Jr. PythonLabs at Digital Creations From esr at thyrsus.com Thu Feb 1 19:00:57 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Thu, 1 Feb 2001 13:00:57 -0500 Subject: [Python-Dev] Re: Sets: elt in dict, lst.include - really begs for a PEP In-Reply-To: <200101312321.MAA03263@s454.cosc.canterbury.ac.nz>; from greg@cosc.canterbury.ac.nz on Thu, Feb 01, 2001 at 12:21:04PM +1300 References: <14968.16962.830739.920771@anthem.wooz.org> <200101312321.MAA03263@s454.cosc.canterbury.ac.nz> Message-ID: <20010201130057.A12500@thyrsus.com> Greg Ewing : > Yuck. I don't like any of this "for x in y.iterator_something()" > stuff. The things you're after aren't "in" the iterator, they're > "in" the dict. I don't want to know that there are iterators > involved. I must say I agree. Having explicit iterators obfuscates what is going on, rather than clarifying it -- the details of how we get the next item should be hidden as far below the surface of the code as possible, so programmers don't have to think about them. The only cases I know of where an explicit iterator object is even semi-justified are those where there is substantial control state to be kept around between iterations and that state has to be visible to the application code (not the case with dictionaries or any other built-in type). In the cases where that *is* true (interruptible tree traversal being the paradigm example), we would be better served with Icon-style generators or a continuations facility a la Stackless Python. I'm a hard -1 on explicit iterator objects for built-in types. Let's keep it simple, guys. -- Eric S. Raymond The Constitution is not neutral. It was designed to take the government off the backs of the people. -- Justice William O. Douglas From mal at lemburg.com Thu Feb 1 19:05:22 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Thu, 01 Feb 2001 19:05:22 +0100 Subject: [Python-Dev] Benchmarking "fun" (was Re: Python 2.1 slower than 2.0) References: <3A78226B.2E177EFE@lemburg.com> <20010131220033.O962@xs4all.nl> Message-ID: <3A79A562.54682A39@lemburg.com> Thomas Wouters wrote: > > On Wed, Jan 31, 2001 at 03:34:19PM +0100, M.-A. Lemburg wrote: > > > I have made similar experience with -On with n>3 compared to -O2 > > using pgcc (gcc optimized for PC processors). BTW, the Linux > > kernel uses "-Wall -Wstrict-prototypes -O3 -fomit-frame-pointer" > > as CFLAGS -- perhaps Python should too on Linux ?! > > [...lots of useful tips about gcc compiler options...] Thanks for the useful details, Thomas. I guess on PC machines, -fomit-frame-pointer does have some use due to the restricted number of available registers. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mal at lemburg.com Thu Feb 1 19:15:24 2001 From: mal at lemburg.com (M.-A. 
Lemburg) Date: Thu, 01 Feb 2001 19:15:24 +0100 Subject: [Python-Dev] Re: 2.1a2 release issues; mail.python.org still down References: <14969.31398.706875.540775@w221.z064000254.bwi-md.dsl.cnc.net> <3A798F14.D389A4A9@lemburg.com> <14969.38945.771075.55993@cj42289-a.reston1.va.home.com> <3A79A058.772239C2@lemburg.com> <14969.41344.176815.821673@cj42289-a.reston1.va.home.com> Message-ID: <3A79A7BC.58997544@lemburg.com> "Fred L. Drake, Jr." wrote: > > M.-A. Lemburg writes: > > I've only tested these using my mx tools compiled against 1.5 -- > > really old, I know, but I still actively use that version. tp_xxx7/8 > > were added in Python 1.5.2, I think, so writing to them causes > > the core dump. > > Yep. > > I said: > > For the specific problem you mention, we could add a type flag > > (Py_TPFLAGS_HAVE_WEAKREFS) that could be tested; it would be set in > > Py_TPFLAGS_DEFAULT. > > M-A replied: > > That would work, but is it really worth it ? The APIs have changed > > considerably, so the fact that I got away with a warning in Python2.0 > > No, which is why I'm happy to tell you to recomple your extensions. > > > doesn't really mean anything -- I do have a problem now, though, > > since maintaining versions for 1.5, 1.5.2, 2.0 and 2.1 will > > be a pain :-/ > > Unless you're using PyCode_New() or PyFrame_New(), recompiling the > extension should be all you'll need -- unless you're pulling stunts > like ExtensionClass does (defining a type-like object using an old > definition of PyTypeObject). If any of the functions you're calling > have changed signatures, you'll need to update them anyway. The > weakref support will not cause you to change your code unless you want > to be able to refer to your extension types via weak refs. The problem is not recompiling the extensions, it's that I will have to keep compiled versions around for all versions I have installed on my machine. > > Py_FatalError() is a bit too harsh, I guess. Wouldn't it > > suffice to raise an ImportError exception and have Py_InitModule() > > return NULL in case a module with an incompatible API version is > > encountered ? > > I suppose we could do that, but it'll take more than my agreement to > make that happen. Guido seemed to think that few modules will be > calling PyCode_New() and PyFrame_New() directly (pyexpat being the > exception). The warnings are at least as annoying as recompiling the extensions, even more since each and every imported extension will moan about the version difference ;-) -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From ping at lfw.org Thu Feb 1 19:21:12 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Thu, 1 Feb 2001 10:21:12 -0800 (PST) Subject: [Python-Dev] Re: Sets: elt in dict, lst.include In-Reply-To: <200101312359.MAA03278@s454.cosc.canterbury.ac.nz> Message-ID: On Thu, 1 Feb 2001, Greg Ewing wrote: > Guido van Rossum : > > > But it *is* true that coroutines are a very attractice piece of land > > "just nextdoor". > > Unfortunately there's a big high fence in between topped with > barbed wire and patrolled by vicious guard dogs. :-( Perhaps you meant, lightly killed and topped with quintuple-smooth, treble milk chocolate? :) -- ?!ng "PS: tongue is firmly in cheek PPS: regrettably, that's my tongue in my cheek" -- M. H. From sdm7g at virginia.edu Thu Feb 1 20:22:35 2001 From: sdm7g at virginia.edu (Steven D. 
Majewski) Date: Thu, 1 Feb 2001 14:22:35 -0500 (EST) Subject: [Python-Dev] Case sensitive import. Message-ID: I see from one of the comments on my patch #103459 that there is a history to this issue (patch #103154) I had assumed that renaming modules and possibly breaking existing code was not an option, but this seems to have been considered in the discussion on that earlier patch. Is there any consensus on how to deal with this ? I would *really* like to get SOME fix -- either my patch, or a renaming of FCNTL, TERMIOS, SOCKET, into the next release. It's not clear to me whether the issues on other systems are the same. On mac-osx, the OS is BSD unix based and when using a unix file system, it's case sensitive. But the standard filesystem is Apple's HFS+, which is case preserving but case insensitive. ( That means that opening "abc" will succeed if there is a file named "abc", "ABC", "Abc" , "aBc" ... , but a directory listing will show "abc" ) I had guessed that the CHECK_IMPORT_CASE ifdefs and the corresponding configure switch were there for this sort of problem, and all I had to do was add a macosx implementation of check_case(), but returning false from check_case causes the search to fail -- it does not continue until it find a matching module. So it appears that I don't understand the issues on other platforms and what CHECK_IMPORT_CASE intends to fix. -- Steve Majewski From jeremy at alum.mit.edu Thu Feb 1 20:27:45 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Thu, 1 Feb 2001 14:27:45 -0500 (EST) Subject: [Python-Dev] python setup.py fails with illegal import (+ fix) In-Reply-To: <20010131200507.A106931E1AD@bireme.oratrix.nl> References: <20010131200507.A106931E1AD@bireme.oratrix.nl> Message-ID: <14969.47281.950974.882075@w221.z064000254.bwi-md.dsl.cnc.net> I checked in a different fix last night, which you have probably discovered now that python-dev is sending mail again. Jeremy From fdrake at acm.org Thu Feb 1 20:51:33 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Thu, 1 Feb 2001 14:51:33 -0500 (EST) Subject: [Python-Dev] any opinion on 'make quicktest'? In-Reply-To: <14969.36106.386207.593290@w221.z064000254.bwi-md.dsl.cnc.net> References: <14969.36106.386207.593290@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <14969.48709.111307.650978@cj42289-a.reston1.va.home.com> Jeremy Hylton writes: > I run the regression test a lot. I have found that it is often useful > to exclude some of the slowest tests for most of the test runs and I think this would be nice. > + QUICKTESTOPTS= $(TESTOPTS) -x test_thread test_signal test_strftime \ > + test_unicodedata test_re test_sre test_select test_poll > + quicktest: all platform > + -rm -f $(srcdir)/Lib/test/*.py[co] > + -PYTHONPATH= $(TESTPYTHON) $(TESTPROG) $(QUICKTESTOPTS) > + PYTHONPATH= $(TESTPYTHON) $(TESTPROG) $(QUICKTESTOPTS) In fact, for this, I'd only run the test once and would skip the "rm" command as well. I usually just run the regression test once (but with all modules, to avoid the extra typing). -Fred -- Fred L. Drake, Jr. PythonLabs at Digital Creations From jeremy at alum.mit.edu Thu Feb 1 20:58:29 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Thu, 1 Feb 2001 14:58:29 -0500 (EST) Subject: [Python-Dev] any opinion on 'make quicktest'? 
In-Reply-To: <14969.48709.111307.650978@cj42289-a.reston1.va.home.com> References: <14969.36106.386207.593290@w221.z064000254.bwi-md.dsl.cnc.net> <14969.48709.111307.650978@cj42289-a.reston1.va.home.com> Message-ID: <14969.49125.52032.638762@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "FLD" == Fred L Drake, writes: >> + QUICKTESTOPTS= $(TESTOPTS) -x test_thread test_signal >> test_strftime \ >> + test_unicodedata test_re test_sre test_select test_poll >> + quicktest: all platform >> + -rm -f $(srcdir)/Lib/test/*.py[co] >> + -PYTHONPATH= $(TESTPYTHON) $(TESTPROG) $(QUICKTESTOPTS) >> + PYTHONPATH= $(TESTPYTHON) $(TESTPROG) $(QUICKTESTOPTS) FLD> In fact, for this, I'd only run the test once and would skip the FLD> "rm" command as well. I usually just run the regression test FLD> once (but with all modules, to avoid the extra typing). Actually, I think the rm is important. I've spent most of the last month running make test to check the compiler. Jeremy From fdrake at acm.org Thu Feb 1 20:56:47 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Thu, 1 Feb 2001 14:56:47 -0500 (EST) Subject: [Python-Dev] any opinion on 'make quicktest'? In-Reply-To: <14969.49125.52032.638762@w221.z064000254.bwi-md.dsl.cnc.net> References: <14969.36106.386207.593290@w221.z064000254.bwi-md.dsl.cnc.net> <14969.48709.111307.650978@cj42289-a.reston1.va.home.com> <14969.49125.52032.638762@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <14969.49023.323038.923328@cj42289-a.reston1.va.home.com> Jeremy Hylton writes: > Actually, I think the rm is important. I've spent most of the last > month running make test to check the compiler. Yeah, but you're a special case. ;-) That's fine -- it's still much better than running the long version every time. -Fred -- Fred L. Drake, Jr. PythonLabs at Digital Creations From barry at digicool.com Thu Feb 1 21:22:38 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Thu, 1 Feb 2001 15:22:38 -0500 Subject: [Python-Dev] any opinion on 'make quicktest'? References: <14969.36106.386207.593290@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <14969.50574.964108.822920@anthem.wooz.org> >>>>> "JH" == Jeremy Hylton writes: JH> I run the regression test a lot. I have found that it is JH> often useful to exclude some of the slowest tests for most of JH> the test runs and then do a full test run before I commit JH> changes. Would anyone be opposed to a quicktest target in the JH> Makefile that supports this practice? There are a small JH> number of tests that each take at least 10 seconds to JH> complete. I'm strongly +1 on this, because I often run the test suite on an Insure'd executable. It takes a looonngg time for even the quick tests. -Barry From ping at lfw.org Thu Feb 1 17:58:43 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Thu, 1 Feb 2001 08:58:43 -0800 (PST) Subject: [Python-Dev] Re: Sets: elt in dict, lst.include In-Reply-To: <14968.37210.886842.820413@beluga.mojam.com> Message-ID: On Wed, 31 Jan 2001, Skip Montanaro wrote: > What would break if we decided to simply add __getitem__ (and other sequence > methods) to list object's method table? Would they foul something up or > would simply sit around quietly waiting for hasattr to notice them? That would work for lists, but not for any extension types that use the sq_* protocol to behave like sequences. For now, anyway, we're stuck with the two separate protocols whether we like it or not. -- ?!ng Two links diverged in a Web, and i -- i took the one less travelled by. 
-- with apologies to Robert Frost From thomas at xs4all.net Thu Feb 1 23:30:48 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Thu, 1 Feb 2001 23:30:48 +0100 Subject: [Python-Dev] any opinion on 'make quicktest'? In-Reply-To: <14969.36106.386207.593290@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Thu, Feb 01, 2001 at 11:21:30AM -0500 References: <14969.36106.386207.593290@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010201233048.R962@xs4all.nl> On Thu, Feb 01, 2001 at 11:21:30AM -0500, Jeremy Hylton wrote: > I run the regression test a lot. I have found that it is often useful > to exclude some of the slowest tests for most of the test runs and > then do a full test run before I commit changes. Would anyone be > opposed to a quicktest target in the Makefile that supports this > practice? There are a small number of tests that each take at least > 10 seconds to complete. Definately +1 here. On BSDI 4.0, which I try to test regularly, test_signal hangs (because of threading bugs in BSDI, nothing Python can solve) and test_select/test_poll either crash right away, or hang as well (same as with test_signal, but could be specific to the box I'm running it on.) So I've been forced to do it by hand. I'm not sure why I didn't automate it yet, but make quicktest would be very welcome :) > + QUICKTESTOPTS= $(TESTOPTS) -x test_thread test_signal test_strftime \ > + test_unicodedata test_re test_sre test_select test_poll -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From barry at digicool.com Thu Feb 1 23:35:25 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Thu, 1 Feb 2001 17:35:25 -0500 Subject: [Python-Dev] Benchmarking "fun" (was Re: Python 2.1 slower than 2.0) References: <3A7890AB.69B893F9@lemburg.com> Message-ID: <14969.58541.406274.212776@anthem.wooz.org> >>>>> "M" == M writes: M> Or do we have a 2.1 feature freeze already ? Strictly speaking, there is no feature freeze until the first beta is released. -Barry From jeremy at alum.mit.edu Thu Feb 1 23:39:25 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Thu, 1 Feb 2001 17:39:25 -0500 (EST) Subject: [Python-Dev] Benchmarking "fun" (was Re: Python 2.1 slower than 2.0) In-Reply-To: <3A7890AB.69B893F9@lemburg.com> References: <3A7890AB.69B893F9@lemburg.com> Message-ID: <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "MAL" == M -A Lemburg writes: MAL> Tim Peters wrote: >> >> [Michael Hudson] >> > ... Can anyone try this on Windows? Seeing as windows malloc >> > reputedly sucks, maybe the differences would be bigger. >> >> No time now (pymalloc is a non-starter for 2.1). Was tried in >> the past on Windows. Helped significantly. Unclear how much was >> simply due to exploiting the global interpreter lock, though. >> "Windows" is also a multiheaded beast (e.g., NT has very >> different memory performance characteristics than 95). MAL> We're still in alpha, no ? The last planned alpha is going to be released tonight or early tomorrow. I'm reluctant to add a large patch that I'm unfamiliar with in the last 24 hours before the release. MAL> Or do we have a 2.1 feature freeze already ? We aren't adding any major new features that haven't been PEPed. I'd like to see a PEP on this subject. 
Jeremy

From greg at cosc.canterbury.ac.nz Thu Feb 1 23:45:02 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 02 Feb 2001 11:45:02 +1300 (NZDT)
Subject: [Python-Dev] Type/class differences (Re: Sets: elt in dict, lst.include)
In-Reply-To:
Message-ID: <200102012245.LAA03402@s454.cosc.canterbury.ac.nz>

Tim Peters :

> The old type/class split: list is a type, and types spell their "method
> tables" in ways that have little in common with how classes do it.

Maybe as a first step towards type/class unification one day, we
could add __xxx__ attributes to all the builtin types, and start to
think of the method table as the definitive source of all methods,
with the tp_xxx slots being a sort of cache for the most commonly
used ones.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,          | A citizen of NewZealandCorp, a       |
Christchurch, New Zealand          | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz       +--------------------------------------+

From tim.one at home.com Fri Feb 2 07:44:58 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 2 Feb 2001 01:44:58 -0500
Subject: [Python-Dev] Showstopper in import?
Message-ID:

Turns out IDLE no longer runs.  Starting at line 88 of
Tools/idle/EditorWindow.py we have this class defn:

class EditorWindow:

    from Percolator import Percolator
    from ColorDelegator import ColorDelegator
    from UndoDelegator import UndoDelegator
    from IOBinding import IOBinding
    import Bindings
    from Tkinter import Toplevel
    from MultiStatusBar import MultiStatusBar

    about_title = about_title
    ...

This leads to what looks like a bug (if we're to believe the error msg,
which doesn't mean what it says):

C:\Pyk>python tools/idle/idle.pyw
Traceback (most recent call last):
  File "tools/idle/idle.pyw", line 2, in ?
    import idle
  File "C:\PYK\Tools\idle\idle.py", line 11, in ?
    import PyShell
  File "C:\PYK\Tools\idle\PyShell.py", line 15, in ?
    from EditorWindow import EditorWindow, fixwordbreaks
  File "C:\PYK\Tools\idle\EditorWindow.py", line 88, in ?
    class EditorWindow:
  File "C:\PYK\Tools\idle\EditorWindow.py", line 90, in EditorWindow
    from Percolator import Percolator
SyntaxError: 'from ... import *' may only occur in a module scope
Hit return to exit...

C:\Pyk>

Sorry for the delay in reporting this!  I've had other problems with the
Windows installer (all fixed now), and IDLE *normally* executes pythonw.exe
on Windows, which tosses error msgs into a bit bucket.  So all I knew was
that IDLE "didn't come up", and took the high-probability guess that it was
due to some other problem I was already tracking down.  Lost that bet.

From tim.one at home.com Fri Feb 2 07:47:59 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 2 Feb 2001 01:47:59 -0500
Subject: [Python-Dev] Quick Unix work needed
Message-ID:

Trent Mick's C API testing framework has been checked in, along with
everything needed to get it working on Windows:

    http://sourceforge.net/patch/?func=detailpatch&patch_id=101162&
    group_id=5470

It still needs someone to add it to the Unixish builds.  You'll know that
it worked if the new std test test_capi.py succeeds.

From RoD at qnet20.com Thu Feb 1 23:23:59 2001
From: RoD at qnet20.com (Rod)
Date: Thu, 1 Feb 2001 23:23:59
Subject: [Python-Dev] Diamond x Jungle Carpet Python
Message-ID: <20010202072422.6B673F4DD@mail.python.org>

I have several Diamond x Jungle Capret Pythons for SALE.
Make me an offer....
Go to: www.qnet20.com

From tim.one at home.com Fri Feb 2 08:34:07 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 2 Feb 2001 02:34:07 -0500
Subject: [Python-Dev] insertdict slower?
Message-ID:

[Jeremy]
> I was curious about what the DictCreation microbenchmark in
> pybench was slower (about 15%) with 2.1 than with 2.0.  I ran
> both with profiling enabled (-pg, no -O) and see that insertdict
> is a fair bit slower in 2.1.  Anyone with dict implementation
> expertise want to hazard a guess about this?

You don't need to be an expert for this one: just look at the code!
There's nothing to it, and not even a comment has changed in insertdict
since 2.0.  I don't believe the profile.  There are plenty of other things
to be suspicious about too (e.g., it showed 285 calls to eval_code2 in
2.0, but 998 in 2.1).

So you're looking at a buggy profiler, a buggy profiling procedure, or a
Cache Mystery (the catch-all excuse for anything that's incomprehensible
without HW-level monitoring tools).

WRT the latter, try inserting a renamed copy of insertdict before and
after the existing one, and make them extern to discourage the
compiler+linker from throwing them away.  If the slowdown goes away,
you're probably looking at an i-cache conflict accident.

From tim.one at home.com Fri Feb 2 09:39:40 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 2 Feb 2001 03:39:40 -0500
Subject: [Python-Dev] Case sensitive import
Message-ID:

[Steven D. Majewski]
> ...
> Is there any consensus on how to deal with this ?

No, else it would have been done already.

> ...
> So it appears that I don't understand the issues on other
> platforms and what CHECK_IMPORT_CASE intends to fix.

It started on Windows.  The belief was that people (not developers -- your
personal testimony doesn't count, and neither does mine <0.3 wink>) on
case-insensitive file systems don't pay much attention to the case of
names they type.  So the belief was (perhaps it even happened -- I wasn't
paying attention at the time, since I was a Unix Dweeb then) people would
carelessly write, e.g.,

    import String

and then pick up some accidental String.py module instead of the builtin
"string" they intended.  So Python started checking for case-match on
Windows, and griping if the *first* module name Windows returns didn't
match case exactly.

OK, it's actually more complicated than that, because some network
filesystems used on Windows actually changed all filenames to uppercase.
So there's an exception made for that wart too.

Anyway, looks like a blind guess to me whether this actually does anyone
any good.  For efficiency, it *does* stop at the first, so if the user
typed

    import string

*intending* to import String.py, they'd never hear about their mistake.
So it doesn't really address the whole (putative) problem regardless.  It
only gripes if the first case-insensitive match on the path doesn't match
exactly.

However, *if* it makes sense on Windows, then it makes exactly as much
sense on "the standard filesystem ... Apple's HFS+, which is case
preserving but case insensitive" -- same deal as Windows.  I see no reason
to believe that non-developer users on Macs are going to be more
case-savvy than on Windows (or is there a reason to believe that?).

Another wart is that it's easy to create Python modules that import fine
on Unix, but blow up if you try to run them on Windows (or HFS+).  That
sucks too, and isn't just theoretical (although in practice it's a lot
less common than tracking down binary files opened in text mode!).
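
A rough sketch of the check being argued about -- not the actual
check_case() code in Python/import.c, just the idea expressed in Python
with an invented helper name:

    import os

    def case_ok(directory, filename):
        # os.listdir() reports the spelling the filesystem actually
        # stores, even on a case-preserving but case-insensitive
        # filesystem where open() would accept any casing; an exact
        # string compare therefore catches "String.py" when the file
        # on disk is spelled "string.py".
        return filename in os.listdir(directory or os.curdir)

Whether a failed check should abort the import (as it does today) or let
the search continue along sys.path is exactly the policy question above.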
The Cygwin people have a related problem: they *are* trying to emulate Unix, but doing so on a Windows box, so, umm, enjoy the best of all worlds. I'd rather see the same rule used everywhere (keep going until finding an exact match), and tough beans to the person who writes import String on Windows (or Mac) intending "string". Windows probably still needs a unique wart to deal with case-destroying network filesystems, though. It's still terrible style to *rely* on case-sensitivity in file names, and all such crap should be purged from the Python distribution regardless. guido-will-agree-with-exactly-one-of-these-claims -ly y'rs - tim From mal at lemburg.com Fri Feb 2 10:01:34 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 02 Feb 2001 10:01:34 +0100 Subject: [Python-Dev] Showstopper in import? References: Message-ID: <3A7A776E.6ECC626E@lemburg.com> Tim Peters wrote: > > Turns out IDLE no longer runs. Starting at line 88 of > Tools/idle/EditorWindow.py we have this class defn: > > class EditorWindow: > > from Percolator import Percolator > from ColorDelegator import ColorDelegator > from UndoDelegator import UndoDelegator > from IOBinding import IOBinding > import Bindings > from Tkinter import Toplevel > from MultiStatusBar import MultiStatusBar > > about_title = about_title > ... > > This leads to what looks like a bug (if we're to believe the error msg, > which doesn't mean what it says): > > C:\Pyk>python tools/idle/idle.pyw > Traceback (most recent call last): > File "tools/idle/idle.pyw", line 2, in ? > import idle > File "C:\PYK\Tools\idle\idle.py", line 11, in ? > import PyShell > File "C:\PYK\Tools\idle\PyShell.py", line 15, in ? > from EditorWindow import EditorWindow, fixwordbreaks > File "C:\PYK\Tools\idle\EditorWindow.py", line 88, in ? > class EditorWindow: > File "C:\PYK\Tools\idle\EditorWindow.py", line 90, in EditorWindow > from Percolator import Percolator > SyntaxError: 'from ... import *' may only occur in a module scope > Hit return to exit... I have already reported this to Jeremy. There are other instances of 'from x import *' in function and class scope too, e.g. some test() functions in the standard dist do this. I am repeating myself here, but I think that this single change will cause so many people to find their scripts are failing that it is really not worth it. Better issue a warning than raise an exception here ! -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From jack at oratrix.nl Fri Feb 2 10:45:34 2001 From: jack at oratrix.nl (Jack Jansen) Date: Fri, 02 Feb 2001 10:45:34 +0100 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules _testmodule.c,NONE,1.1 In-Reply-To: Message by Tim Peters , Thu, 01 Feb 2001 21:57:17 -0800 , Message-ID: <20010202094535.7582E373C95@snelboot.oratrix.nl> Is "_test" a good choice of name for this module? It feels a bit too generic, isn't something like _test_api (or _test_python_c_api) better? -- Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++ Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++ www.oratrix.nl/~jack | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm From tim.one at home.com Fri Feb 2 10:50:36 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 2 Feb 2001 04:50:36 -0500 Subject: [Python-Dev] Showstopper in import? In-Reply-To: <3A7A776E.6ECC626E@lemburg.com> Message-ID: [M.-A. 
Lemburg] > I have already reported this to Jeremy. There are other instances > of 'from x import *' in function and class scope too, e.g. > some test() functions in the standard dist do this. But there are no instances of "from x import *" in the case I reported, despite that the error msg (erroneously!) claimed there was. It's complaining about from Percolator import Percolator in a class definition. That smells like a bug, not a debatable design choice. > I am repeating myself here, but I think that this single change > will cause so many people to find their scripts are failing > that it is really not worth it. Provided the case above is fixed, IDLE will indeed fail to compile anyway, because Guido does from Tkinter import * inside several functions. But that's a different problem. > Better issue a warning than raise an exception here ! If Jeremy can't generate correct code, a warning is too weak. From mal at lemburg.com Fri Feb 2 11:00:28 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 02 Feb 2001 11:00:28 +0100 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) References: <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <3A7A853C.B38C1DF5@lemburg.com> Jeremy Hylton wrote: > > >>>>> "MAL" == M -A Lemburg writes: > > MAL> Tim Peters wrote: > >> > >> [Michael Hudson] > >> > ... Can anyone try this on Windows? Seeing as windows malloc > >> > reputedly sucks, maybe the differences would be bigger. > >> > >> No time now (pymalloc is a non-starter for 2.1). Was tried in > >> the past on Windows. Helped significantly. Unclear how much was > >> simply due to exploiting the global interpreter lock, though. > >> "Windows" is also a multiheaded beast (e.g., NT has very > >> different memory performance characteristics than 95). > > MAL> We're still in alpha, no ? > > The last planned alpha is going to be released tonight or early > tomorrow. I'm reluctant to add a large patch that I'm unfamiliar with > in the last 24 hours before the release. > > MAL> Or do we have a 2.1 feature freeze already ? > > We aren't adding any major new features that haven't been PEPed. I'd > like to see a PEP on this subject. I don't see a PEP for your import patch either ;-) Seriously, I am viewing the addition of pymalloc during the alpha phase or even the betas as test for the usability of such an approach. If it fails, fine, then we take it out again. If nobody notices, great, then leave it in. There would be a need for a PEP if we need to discuss APIs, interfaces, etc. but all this has already been done by Valdimir a long time ago. He put much effort into getting the Python malloc macros to work in the intended way so that pymalloc only has exchange these macro definitions. I don't understand why we cannot take the risk of trying this out in an alpha version. Besides, Vladimir's malloc patch is opt-in: you have to compile Python using --with-pymalloc to enable it, so it doesn't really harm anyone not knowing what he/she is doing. 
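
One cheap way to sanity-check the patch during the alpha, short of a full
pybench run, is to time an allocation-heavy loop under a stock build and
under a --with-pymalloc build and compare the two numbers.  The snippet
below is only an illustration in that spirit -- it is not part of pybench,
and the function name is made up:

    import time

    def dict_churn(n=100000):
        # Allocate and drop many small dicts so the object allocator,
        # not the bytecode loop, dominates the timing.
        start = time.clock()
        for i in range(n):
            d = {}
            d[i] = i
        return time.clock() - start

    if __name__ == '__main__':
        # Run once under each build and compare the results.
        print dict_churn()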
-- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From tim.one at home.com Fri Feb 2 11:05:41 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 2 Feb 2001 05:05:41 -0500 Subject: [Python-Dev] RE: [Python-checkins] CVS: python/dist/src/Modules _testmodule.c,NONE,1.1 In-Reply-To: <20010202094535.7582E373C95@snelboot.oratrix.nl> Message-ID: [Jack Jansen] > Is "_test" a good choice of name for this module? It feels a bit > too generic, isn't something like _test_api (or _test_python_c_api) > better? If someone feels strongly about that (I don't), feel free to change the name for 2.1b1. From mal at lemburg.com Fri Feb 2 11:08:16 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 02 Feb 2001 11:08:16 +0100 Subject: [Python-Dev] Showstopper in import? References: Message-ID: <3A7A8710.D8A51718@lemburg.com> Tim Peters wrote: > > [M.-A. Lemburg] > > I have already reported this to Jeremy. There are other instances > > of 'from x import *' in function and class scope too, e.g. > > some test() functions in the standard dist do this. > > But there are no instances of "from x import *" in the case I reported, > despite that the error msg (erroneously!) claimed there was. It's > complaining about > > from Percolator import Percolator > > in a class definition. That smells like a bug, not a debatable design > choice. Percolator has "from x import *" code. This is what is causing the exception. I think it has already been fixed in CVS though, so should work again. > > I am repeating myself here, but I think that this single change > > will cause so many people to find their scripts are failing > > that it is really not worth it. > > Provided the case above is fixed, IDLE will indeed fail to compile anyway, > because Guido does > > from Tkinter import * > > inside several functions. But that's a different problem. How is it different ? Even though I agree that "from x import *" is bad style, it is quite common in testing code or code which imports a set of symbols from generated modules or modules containing only constants e.g. for protocols, error codes, etc. > > Better issue a warning than raise an exception here ! > > If Jeremy can't generate correct code, a warning is too weak. So this is the price we pay for having nested scopes... :-( -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From tim.one at home.com Fri Feb 2 11:35:16 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 2 Feb 2001 05:35:16 -0500 Subject: [Python-Dev] Showstopper in import? In-Reply-To: <3A7A8710.D8A51718@lemburg.com> Message-ID: > Percolator has "from x import *" code. This is what is causing the > exception. Woo hoo! The traceback bamboozled me: it doesn't show any code from Percolator.py, just the import in EditorWindow.py. So I'll call *that* the bug <0.7 wink>. > I think it has already been fixed in CVS though, so should > work again. Doesn't work for me. If someone does patch Percolator.py, though, it will just blow up again at from IOBinding import IOBinding . Guido was apparently fond of this trick. > Even though I agree that "from x import *" > is bad style, it is quite common in testing code or code > which imports a set of symbols from generated modules or > modules containing only constants e.g. 
for protocols, error > codes, etc. I know I'm being brief, but please don't take that as disagreement. It's heading on 6 in the morning here and I've been plugging away at the release for a loooong time. I'm not in favor of banning "from x import *" if there's an alternative. But I don't grok the implementation issues in this area well enough right now to address it; I'm also hoping that Jeremy can, and much more quickly. >>> Better issue a warning than raise an exception here ! >> If Jeremy can't generate correct code, a warning is too weak. > So this is the price we pay for having nested scopes... :-( I don't know. It apparently is the state of the code at this instant. sleeping-on-it<0.1-wink>-ly y'rs - tim From mal at lemburg.com Fri Feb 2 12:38:07 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 02 Feb 2001 12:38:07 +0100 Subject: [Python-Dev] Showstopper in import? References: Message-ID: <3A7A9C1F.7A8619AE@lemburg.com> Tim Peters wrote: > > > Percolator has "from x import *" code. This is what is causing the > > exception. > > Woo hoo! The traceback bamboozled me: it doesn't show any code from > Percolator.py, just the import in EditorWindow.py. So I'll call *that* the > bug <0.7 wink>. > > > I think it has already been fixed in CVS though, so should > > work again. > > Doesn't work for me. If someone does patch Percolator.py, though, it will > just blow up again at > > from IOBinding import IOBinding > > . Guido was apparently fond of this trick. For completeness, here are all instance I've found in the standard dist: ./Tools/pynche/pyColorChooser.py: -- from Tkinter import * ./Tools/idle/IOBinding.py: -- from Tkinter import * ./Tools/idle/Percolator.py: -- from Tkinter import * > > Even though I agree that "from x import *" > > is bad style, it is quite common in testing code or code > > which imports a set of symbols from generated modules or > > modules containing only constants e.g. for protocols, error > > codes, etc. > > I know I'm being brief, but please don't take that as disagreement. It's > heading on 6 in the morning here and I've been plugging away at the release > for a loooong time. I'm not in favor of banning "from x import *" if > there's an alternative. But I don't grok the implementation issues in this > area well enough right now to address it; I'm also hoping that Jeremy can, > and much more quickly. > > >>> Better issue a warning than raise an exception here ! > > >> If Jeremy can't generate correct code, a warning is too weak. > > > So this is the price we pay for having nested scopes... :-( > > I don't know. It apparently is the state of the code at this instant. Ok, Good Night then :-) -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From thomas at xs4all.net Fri Feb 2 13:06:54 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Fri, 2 Feb 2001 13:06:54 +0100 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) In-Reply-To: <3A7A853C.B38C1DF5@lemburg.com>; from mal@lemburg.com on Fri, Feb 02, 2001 at 11:00:28AM +0100 References: <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> Message-ID: <20010202130654.T962@xs4all.nl> On Fri, Feb 02, 2001 at 11:00:28AM +0100, M.-A. Lemburg wrote: > There would be a need for a PEP if we need to discuss APIs, > interfaces, etc. 
but all this has already been done by Valdimir > a long time ago. He put much effort into getting the Python > malloc macros to work in the intended way so that pymalloc only > has exchange these macro definitions. > I don't understand why we cannot take the risk of trying this > out in an alpha version. Besides, Vladimir's malloc patch > is opt-in: you have to compile Python using --with-pymalloc > to enable it, so it doesn't really harm anyone not knowing what > he/she is doing. +1 on putting it in, in alpha2 or beta1, on an opt-in basis. +0 on putting it in *now* (alpha2, not beta1) and on by default. -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From mal at lemburg.com Fri Feb 2 13:08:32 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 02 Feb 2001 13:08:32 +0100 Subject: [Python-Dev] Quick Unix work needed References: Message-ID: <3A7AA340.B3AFF106@lemburg.com> Tim Peters wrote: > > Trent Mick's C API testing framework has been checked in, along with > everything needed to get it working on Windows: > > http://sourceforge.net/patch/?func=detailpatch&patch_id=101162& > group_id=5470 > > It still needs someone to add it to the Unixish builds. Done. > You'll know that it worked if the new std test test_capi.py succeeds. The test passes just fine... nothing much there which could fail ;-) -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mal at lemburg.com Fri Feb 2 13:14:33 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 02 Feb 2001 13:14:33 +0100 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) References: <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> <20010202130654.T962@xs4all.nl> Message-ID: <3A7AA4A9.56F54EFF@lemburg.com> Thomas Wouters wrote: > > On Fri, Feb 02, 2001 at 11:00:28AM +0100, M.-A. Lemburg wrote: > > > There would be a need for a PEP if we need to discuss APIs, > > interfaces, etc. but all this has already been done by Valdimir > > a long time ago. He put much effort into getting the Python > > malloc macros to work in the intended way so that pymalloc only > > has exchange these macro definitions. > > > I don't understand why we cannot take the risk of trying this > > out in an alpha version. Besides, Vladimir's malloc patch > > is opt-in: you have to compile Python using --with-pymalloc > > to enable it, so it doesn't really harm anyone not knowing what > > he/she is doing. > > +1 on putting it in, in alpha2 or beta1, on an opt-in basis. +0 on putting > it in *now* (alpha2, not beta1) and on by default. Anyone else for adding it now on an opt-in basis ? BTW, here is the URL to the pymalloc page: http://starship.python.net/~vlad/pymalloc/ -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mwh21 at cam.ac.uk Fri Feb 2 13:24:32 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 02 Feb 2001 12:24:32 +0000 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) In-Reply-To: "M.-A. 
Lemburg"'s message of "Fri, 02 Feb 2001 13:14:33 +0100" References: <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> <20010202130654.T962@xs4all.nl> <3A7AA4A9.56F54EFF@lemburg.com> Message-ID: "M.-A. Lemburg" writes: > Thomas Wouters wrote: > > > > On Fri, Feb 02, 2001 at 11:00:28AM +0100, M.-A. Lemburg wrote: > > > > > There would be a need for a PEP if we need to discuss APIs, > > > interfaces, etc. but all this has already been done by Valdimir > > > a long time ago. He put much effort into getting the Python > > > malloc macros to work in the intended way so that pymalloc only > > > has exchange these macro definitions. > > > > > I don't understand why we cannot take the risk of trying this > > > out in an alpha version. Besides, Vladimir's malloc patch > > > is opt-in: you have to compile Python using --with-pymalloc > > > to enable it, so it doesn't really harm anyone not knowing what > > > he/she is doing. > > > > +1 on putting it in, in alpha2 or beta1, on an opt-in basis. +0 on putting > > it in *now* (alpha2, not beta1) and on by default. > > Anyone else for adding it now on an opt-in basis ? Yes. I also want to try adding it in and then scrapping the free list management done by ints, frames, etc. and seeing if it this results in any significant slowdown. Don't have time for another mega-benchmark just now though. Cheers, M. -- 3. Syntactic sugar causes cancer of the semicolon. -- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html From fredrik at pythonware.com Fri Feb 2 13:22:13 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Fri, 2 Feb 2001 13:22:13 +0100 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) References: <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> <20010202130654.T962@xs4all.nl> <3A7AA4A9.56F54EFF@lemburg.com> Message-ID: <020501c08d12$c63c6b30$0900a8c0@SPIFF> mal wrote: > Anyone else for adding it now on an opt-in basis ? +1 from here. Cheers /F From thomas at xs4all.net Fri Feb 2 13:29:53 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Fri, 2 Feb 2001 13:29:53 +0100 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) In-Reply-To: ; from mwh21@cam.ac.uk on Fri, Feb 02, 2001 at 12:24:32PM +0000 References: <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> <20010202130654.T962@xs4all.nl> <3A7AA4A9.56F54EFF@lemburg.com> Message-ID: <20010202132953.I922@xs4all.nl> On Fri, Feb 02, 2001 at 12:24:32PM +0000, Michael Hudson wrote: > > Anyone else for adding [pyobjmalloc] now on an opt-in basis ? > Yes. I also want to try adding it in and then scrapping the free list > management done by ints, frames, etc. and seeing if it this results in > any significant slowdown. Don't have time for another mega-benchmark > just now though. We could (and probably should) delay that for 2.2 anyway. Make pymalloc default on, and do some standardized benchmarking on a number of different platforms, with and without the typespecific freelists. -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! 
From mwh21 at cam.ac.uk Fri Feb 2 13:39:08 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 02 Feb 2001 12:39:08 +0000 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) In-Reply-To: Thomas Wouters's message of "Fri, 2 Feb 2001 13:29:53 +0100" References: <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> <20010202130654.T962@xs4all.nl> <3A7AA4A9.56F54EFF@lemburg.com> <20010202132953.I922@xs4all.nl> Message-ID: Thomas Wouters writes: > On Fri, Feb 02, 2001 at 12:24:32PM +0000, Michael Hudson wrote: > > > > Anyone else for adding [pyobjmalloc] now on an opt-in basis ? > > > Yes. I also want to try adding it in and then scrapping the free list > > management done by ints, frames, etc. and seeing if it this results in > > any significant slowdown. Don't have time for another mega-benchmark > > just now though. > > We could (and probably should) delay that for 2.2 anyway. Uhh, yes. I meant to say that too. Must remember to finish my posts... > Make pymalloc default on, and do some standardized benchmarking on a > number of different platforms, with and without the typespecific > freelists. Yes. This will take time, but is worthwhile, IMHO. Cheers, M. -- C is not clean -- the language has _many_ gotchas and traps, and although its semantics are _simple_ in some sense, it is not any cleaner than the assembly-language design it is based on. -- Erik Naggum, comp.lang.lisp From moshez at zadka.site.co.il Fri Feb 2 13:55:55 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Fri, 2 Feb 2001 14:55:55 +0200 (IST) Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) In-Reply-To: <3A7AA4A9.56F54EFF@lemburg.com> References: <3A7AA4A9.56F54EFF@lemburg.com>, <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> <20010202130654.T962@xs4all.nl> Message-ID: <20010202125555.C81EDA840@darjeeling.zadka.site.co.il> On Fri, 02 Feb 2001 13:14:33 +0100, "M.-A. Lemburg" wrote: > Anyone else for adding it now on an opt-in basis ? Add it on opt-out basis, and if it causes trouble, revert to opt-in in beta/final. Alphas are supposed to be buggy <0.7 wink> -- Moshe Zadka This is a signature anti-virus. Please stop the spread of signature viruses! Fingerprint: 4BD1 7705 EEC0 260A 7F21 4817 C7FC A636 46D0 1BD6 From mwh21 at cam.ac.uk Fri Feb 2 14:15:14 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 02 Feb 2001 13:15:14 +0000 Subject: [Python-Dev] Showstopper in import? In-Reply-To: "Tim Peters"'s message of "Fri, 2 Feb 2001 05:35:16 -0500" References: Message-ID: "Tim Peters" writes: > > Percolator has "from x import *" code. This is what is causing the > > exception. > > Woo hoo! The traceback bamboozled me: it doesn't show any code from > Percolator.py, just the import in EditorWindow.py. So I'll call *that* the > bug <0.7 wink>. > > > I think it has already been fixed in CVS though, so should > > work again. > > Doesn't work for me. If someone does patch Percolator.py, though, it will > just blow up again at > > from IOBinding import IOBinding > > . Guido was apparently fond of this trick. I apologise if I'm explaining things people already know here, but I can explain the wierdo tracebacks. Try this: >>> def f(): ... from string import * ... pass ... SyntaxError: 'from ... import *' may only occur in a module scope >>> you see? 
No traceback at all! This is a general feature of exceptions raised by the compiler (as opposed to the parser). >>> 21323124912094230491 OverflowError: integer literal too large >>> (also using some name other than "as" in an "import as" statement, invalid unicode \N{names}, various arglist nogos (eg. "def f(a=1,e):"), assigning to an expression, ... the list goes on & is getting longer). So what's happening is module A imports module B, which fails to copmile due to a non-module level "import *", but doesn't set up a traceback, so the traceback points at the import statement in module A. And as the exception message mentions import statements, everyone gets confused. The fix? Presumably rigging compile.c:com_error to set up tracebacks properly? It looks like it *tries* to, but I don't know this area of the code well enough to understand why it doesn't work. Anyone? Cheers, M. -- same software, different verbosity settings (this one goes to eleven) -- the effbot on the martellibot From thomas at xs4all.net Fri Feb 2 14:31:44 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Fri, 2 Feb 2001 14:31:44 +0100 Subject: [Python-Dev] Showstopper in import? In-Reply-To: ; from mwh21@cam.ac.uk on Fri, Feb 02, 2001 at 01:15:14PM +0000 References: Message-ID: <20010202143144.U962@xs4all.nl> On Fri, Feb 02, 2001 at 01:15:14PM +0000, Michael Hudson wrote: [ Compiler exceptions (as opposed to runtime exceptions) sucks ] > The fix? Presumably rigging compile.c:com_error to set up tracebacks > properly? It looks like it *tries* to, but I don't know this area of > the code well enough to understand why it doesn't work. Anyone? Have you seen this ? http://sourceforge.net/patch/?func=detailpatch&patch_id=101782&group_id=5470 -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From mwh21 at cam.ac.uk Fri Feb 2 14:37:39 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 02 Feb 2001 13:37:39 +0000 Subject: [Python-Dev] Showstopper in import? In-Reply-To: Thomas Wouters's message of "Fri, 2 Feb 2001 14:31:44 +0100" References: <20010202143144.U962@xs4all.nl> Message-ID: Thomas Wouters writes: > On Fri, Feb 02, 2001 at 01:15:14PM +0000, Michael Hudson wrote: > > [ Compiler exceptions (as opposed to runtime exceptions) sucks ] > > > The fix? Presumably rigging compile.c:com_error to set up tracebacks > > properly? It looks like it *tries* to, but I don't know this area of > > the code well enough to understand why it doesn't work. Anyone? > > Have you seen this ? > > http://sourceforge.net/patch/?func=detailpatch&patch_id=101782&group_id=5470 No, and it doesn't patch cleanly right now and I haven't got the time to sort that out just yet, but if it works, it should go in! Cheers, M. -- To summarise the summary of the summary:- people are a problem. -- The Hitch-Hikers Guide to the Galaxy, Episode 12 From mal at lemburg.com Fri Feb 2 14:58:05 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 02 Feb 2001 14:58:05 +0100 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) References: <3A7AA4A9.56F54EFF@lemburg.com>, <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> <20010202130654.T962@xs4all.nl> <20010202125555.C81EDA840@darjeeling.zadka.site.co.il> Message-ID: <3A7ABCED.8435D5B7@lemburg.com> Moshe Zadka wrote: > > On Fri, 02 Feb 2001 13:14:33 +0100, "M.-A. Lemburg" wrote: > > > Anyone else for adding it now on an opt-in basis ? 
> > Add it on opt-out basis, and if it causes trouble, revert to opt-in > in beta/final. Alphas are supposed to be buggy <0.7 wink> Ok, that makes +5 on including it, no negative response so far. We'll only have to sort out whether to make it opt-in (the current state of the patch) or opt-out. The latter would certainly enable better testing of the code, but I understand that Jeremy doesn't want to destabilize the release just now. Perhaps we'll need a third alpha release ?! (the weak reference implementation and the other goodies need much more testing IMHO than just one alpha cycle) -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From barry at digicool.com Fri Feb 2 15:13:22 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 2 Feb 2001 09:13:22 -0500 Subject: [Python-Dev] Showstopper in import? References: <3A7A776E.6ECC626E@lemburg.com> Message-ID: <14970.49282.501200.102133@anthem.wooz.org> >>>>> "TP" == Tim Peters writes: TP> Provided the case above is fixed, IDLE will indeed fail to TP> compile anyway, because Guido does TP> from Tkinter import * TP> inside several functions. But that's a different problem. That will probably be the most common breakage in existing code. I've already `fixed' the one such occurance in Tools/pynche. gotta-love-alphas-ly y'rs, -Barry From fredrik at pythonware.com Fri Feb 2 15:14:30 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Fri, 2 Feb 2001 15:14:30 +0100 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) References: <3A7AA4A9.56F54EFF@lemburg.com>, <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> <20010202130654.T962@xs4all.nl> <20010202125555.C81EDA840@darjeeling.zadka.site.co.il> <3A7ABCED.8435D5B7@lemburg.com> Message-ID: <000701c08d22$763911f0$0900a8c0@SPIFF> mal wrote: > We'll only have to sort out whether to make it opt-in (the > current state of the patch) or opt-out. The latter would > certainly enable better testing of the code, but I understand > that Jeremy doesn't want to destabilize the release just now. > > Perhaps we'll need a third alpha release ?! (the weak reference > implementation and the other goodies need much more testing > IMHO than just one alpha cycle) +1 on opt-out and an extra alpha to hammer on weak refs, nested namespaces, and pymalloc. +0 on pymalloc opt-in and no third alpha -1 on function attri, oops, to late. Cheers /F From barry at digicool.com Fri Feb 2 15:19:36 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 2 Feb 2001 09:19:36 -0500 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) References: <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> Message-ID: <14970.49656.634425.131854@anthem.wooz.org> >>>>> "M" == M writes: M> I don't understand why we cannot take the risk of trying this M> out in an alpha version. Logistically, we probably need BDFL pronouncement on this and if we're to get alpha2 out today, that won't happen in time. If we don't get the alpha out today, we probably will not get the first beta out by IPC9, and I think that's important too. 
So I'd be +1 on adding it opt-in for beta1, which would make the code available to all, and allow us the full beta cycle and 2.2 development cycle to do the micro benchmarks and evaluation for opt-out (or simply always on) in 2.2. -Barry From mal at lemburg.com Fri Feb 2 15:57:18 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 02 Feb 2001 15:57:18 +0100 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) References: <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> <14970.49656.634425.131854@anthem.wooz.org> Message-ID: <3A7ACACE.679D372@lemburg.com> "Barry A. Warsaw" wrote: > > >>>>> "M" == M writes: > > M> I don't understand why we cannot take the risk of trying this > M> out in an alpha version. > > Logistically, we probably need BDFL pronouncement on this and if we're > to get alpha2 out today, that won't happen in time. If we don't get > the alpha out today, we probably will not get the first beta out by > IPC9, and I think that's important too. With the recent additions of rather important changes I see the need for a third alpha, so getting the beta out for IPC9 will probably not work anyway. Let's get the alpha 2 out today and then add pymalloc to alpha 3. > So I'd be +1 on adding it opt-in for beta1, which would make the code > available to all, and allow us the full beta cycle and 2.2 development > cycle to do the micro benchmarks and evaluation for opt-out (or simply > always on) in 2.2. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From vladimir.marangozov at optimay.com Fri Feb 2 16:10:05 2001 From: vladimir.marangozov at optimay.com (Vladimir Marangozov) Date: Fri, 2 Feb 2001 16:10:05 +0100 Subject: [Python-Dev] A word from the author (was "pymalloc", was "fun", was "2.1 slowe r than 2.0") Message-ID: <4C99842BC5F6D411A6A000805FBBB199051F5B@ge0057exch01.micro.lucent.com> Hi all, [MAL] > >>>>> "M" == M writes: > > M> I don't understand why we cannot take the risk of trying this > M> out in an alpha version. Because the risk (long-term) is kind of unknown. obmalloc works fine, and it speeds things up, yes, in most setups or circumstances. It gains that speed from the Python core "memory pattern", which is by far the dominant, no matter what the app is. Tim's statement about my profiling is kind of a guess (Hi Tim!) [Barry] > > Logistically, we probably need BDFL pronouncement on this and if we're > to get alpha2 out today, that won't happen in time. If we don't get > the alpha out today, we probably will not get the first beta out by > IPC9, and I think that's important too. > > So I'd be +1 on adding it opt-in for beta1, which would make the code > available to all, and allow us the full beta cycle and 2.2 development > cycle to do the micro benchmarks and evaluation for opt-out (or simply > always on) in 2.2. I'd say, opt-in for 2.1. No risk, enables profiling. My main reservation is about thread safety from extensions (but this could be dealt with at a later stage) + a couple of other minor things I have no time to explain right now. But by that time (2.2), I do plan to show up on a more regular basis. Phew! You guys have done a lot for 3 months. I'll need another three to catch up . 
Cheers, Vladimir From skip at mojam.com Fri Feb 2 16:34:04 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 2 Feb 2001 09:34:04 -0600 (CST) Subject: [Python-Dev] creating __all__ in extension modules Message-ID: <14970.54124.352613.111534@beluga.mojam.com> I'm diving into adding __all__ lists to extension modules. My assumption is that since it is a much more deliberate decision to add a symbol to an extension module's module dict, that any key in the module's dict that doesn't begin with an underscore is to be exported. (This in contrast to Python modules where lots of cruft creeps in.) If you think this assumption is incorrect and some other approach is needed, speak now. Thanks, Skip From fredrik at effbot.org Fri Feb 2 16:54:13 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Fri, 2 Feb 2001 16:54:13 +0100 Subject: [Python-Dev] creating __all__ in extension modules References: <14970.54124.352613.111534@beluga.mojam.com> Message-ID: <034f01c08d30$65e5cec0$e46940d5@hagrid> Skip Montanaro wrote: > I'm diving into adding __all__ lists to extension modules. My assumption is > that since it is a much more deliberate decision to add a symbol to an > extension module's module dict, that any key in the module's dict that > doesn't begin with an underscore is to be exported. what's the point? doesn't from-import already do exactly that on C extensions? From jeremy at alum.mit.edu Fri Feb 2 16:51:26 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 2 Feb 2001 10:51:26 -0500 (EST) Subject: [Python-Dev] Showstopper in import? In-Reply-To: References: <3A7A8710.D8A51718@lemburg.com> Message-ID: <14970.55166.436171.625668@w221.z064000254.bwi-md.dsl.cnc.net> MAL> Better issue a warning than raise an exception here ! TP> If Jeremy can't generate correct code, a warning is too weak. MAL> So this is the price we pay for having nested scopes... :-( TP> I don't know. It apparently is the state of the code at this TP> instant. The code is complaining about 'from ... import *' with nested scopes, because of a potential ambiguity: def f(): from string import * def g(s): return strip(s) It is unclear whether this code intends to use a global named strip or to the name strip defined in f() by 'from string import *'. It is possible, I'm sure, to complain about only those cases where free variables exist in a nested scope and 'from ... import *' is used. I don't know if I will be able to modify the compiler so it complains about *only* these cases in time for 2.1a2. Jeremy From fdrake at acm.org Fri Feb 2 16:48:27 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Fri, 2 Feb 2001 10:48:27 -0500 (EST) Subject: [Python-Dev] Doc tree frozen for 2.1a2 Message-ID: <14970.54987.29292.178440@cj42289-a.reston1.va.home.com> The Doc/ tree in the Python CVS is frozen until Python 2.1a2 has been released. No further changes should be made in that part of the tree. -Fred -- Fred L. Drake, Jr. PythonLabs at Digital Creations From jeremy at alum.mit.edu Fri Feb 2 16:54:42 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 2 Feb 2001 10:54:42 -0500 (EST) Subject: [Python-Dev] insertdict slower? In-Reply-To: References: Message-ID: <14970.55362.332519.654243@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "TP" == Tim Peters writes: TP> [Jeremy] >> I was curious about what the DictCreation microbenchmark in >> pybench was slower (about 15%) with 2.1 than with 2.0. I ran >> both with profiling enabled (-pg, no -O) and see that insertdict >> is a fair bit slower in 2.1. 
Anyone with dict implementation >> expertise want to hazard a guess about this? TP> You don't need to be an expert for this one: just look at the TP> code! There's nothing to it, and not even a comment has changed TP> in insertdict since 2.0. I don't believe the profile. [...] TP> So you're looking at a buggy profiler, a buggy profiling TP> procedure, or a Cache Mystery (the catch-all excuse for anything TP> that's incomprehensible without HW-level monitoring tools). TP> [...] I wanted to be sure that some other change to the dictionary code didn't have the unintended consequence of slowing down insertdict. There is a real and measurable slowdown in MAL's DictCreation microbenchmark, which needs to be explained somehow. insertdict sounds more plausible than many other explanations. But I don't have any more time to think about this before the release. Jeremy From mal at lemburg.com Fri Feb 2 17:40:00 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 02 Feb 2001 17:40:00 +0100 Subject: [Python-Dev] Showstopper in import? References: <3A7A8710.D8A51718@lemburg.com> <14970.55166.436171.625668@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <3A7AE2DF.A2D17129@lemburg.com> Jeremy Hylton wrote: > > MAL> Better issue a warning than raise an exception here ! > > TP> If Jeremy can't generate correct code, a warning is too weak. > > MAL> So this is the price we pay for having nested scopes... :-( > > TP> I don't know. It apparently is the state of the code at this > TP> instant. > > The code is complaining about 'from ... import *' with nested scopes, > because of a potential ambiguity: > > def f(): > from string import * > def g(s): > return strip(s) > > It is unclear whether this code intends to use a global named strip or > to the name strip defined in f() by 'from string import *'. The right thing to do in this situation is for Python to walk up the nested scopes and look for the "strip" symbol. > It is possible, I'm sure, to complain about only those cases where > free variables exist in a nested scope and 'from ... import *' is > used. I don't know if I will be able to modify the compiler so it > complains about *only* these cases in time for 2.1a2. Since this is backward compatible, wouldn't it suffice to simply use LOAD_GLOBAL for all nested scopes below the first scope which uses from ... import * ? -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From jeremy at alum.mit.edu Fri Feb 2 18:07:55 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 2 Feb 2001 12:07:55 -0500 (EST) Subject: [Python-Dev] Case sensitive import. In-Reply-To: References: Message-ID: <14970.59755.154176.579551@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "SDM" == Steven D Majewski writes: SDM> I see from one of the comments on my patch #103459 that there SDM> is a history to this issue (patch #103154) SDM> I had assumed that renaming modules and possibly breaking SDM> existing code was not an option, but this seems to have been SDM> considered in the discussion on that earlier patch. SDM> Is there any consensus on how to deal with this ? SDM> I would *really* like to get SOME fix -- either my patch, or a SDM> renaming of FCNTL, TERMIOS, SOCKET, into the next release. Our plan is to remove all of these modules and move the constants they define into the modules that provide the interface. 
Fred has already removed SOCKET, since all the constants are defined in socket. I don't think we'll get to the others in time for 2.1a2. SDM> It's not clear to me whether the issues on other systems are SDM> the same. On mac-osx, the OS is BSD unix based and when using SDM> a unix file system, it's case sensitive. But the standard SDM> filesystem is Apple's HFS+, which is case preserving but case SDM> insensitive. ( That means that opening "abc" will succeed if SDM> there is a file named "abc", "ABC", "Abc" , "aBc" ... , but a SDM> directory listing will show "abc" ) SDM> I had guessed that the CHECK_IMPORT_CASE ifdefs and the SDM> corresponding configure switch were there for this sort of SDM> problem, and all I had to do was add a macosx implementation of SDM> check_case(), but returning false from check_case causes the SDM> search to fail -- it does not continue until it find a matching SDM> module. Guido is strongly opposed to continuing after check_case returns false. His explanation is that imports ought to work whether all the there are multiple directories on sys.path or all the files are copied into a single directory. Obviously on file systems like HFS+, it would be impossible to have FCNTL.py and fcntl.py be in the same directory. Jeremy From skip at mojam.com Fri Feb 2 18:14:51 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 2 Feb 2001 11:14:51 -0600 (CST) Subject: [Python-Dev] Showstopper in import? In-Reply-To: <3A7A8710.D8A51718@lemburg.com> References: <3A7A8710.D8A51718@lemburg.com> Message-ID: <14970.60171.311859.92551@beluga.mojam.com> MAL> Even though I agree that "from x import *" is bad style, it is MAL> quite common in testing code or code which imports a set of symbols MAL> from generated modules or modules containing only constants MAL> e.g. for protocols, error codes, etc. In fact, the entire exercise of making "from x import *" obey __all__ when it's present is to at least reduce the "badness" of this bad style. Skip From skip at mojam.com Fri Feb 2 18:16:40 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 2 Feb 2001 11:16:40 -0600 (CST) Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) In-Reply-To: <3A7AA4A9.56F54EFF@lemburg.com> References: <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> <20010202130654.T962@xs4all.nl> <3A7AA4A9.56F54EFF@lemburg.com> Message-ID: <14970.60280.654349.189487@beluga.mojam.com> MAL> Anyone else for adding it now on an opt-in basis ? +1 from me. Skip From sdm7g at virginia.edu Fri Feb 2 18:18:40 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Fri, 2 Feb 2001 12:18:40 -0500 (EST) Subject: [Python-Dev] Case sensitive import In-Reply-To: Message-ID: On Fri, 2 Feb 2001, Tim Peters wrote: > I'd rather see the same rule used everywhere (keep going until finding an > exact match), and tough beans to the person who writes > > import String > > on Windows (or Mac) intending "string". Windows probably still needs a > unique wart to deal with case-destroying network filesystems, though. I agree, and that's what my patch does for macosx.darwin (or any unixy system that happens to have a filesystem with similar semantics -- if there is any such beast.) 
If the issues for windows are different (and it sounds like they are) then I wanted to make sure (collectively) you were aware that this patch could be addressed independently, rather than waiting on a resolution of those other problems. > It's still terrible style to *rely* on case-sensitivity in file names, and > all such crap should be purged from the Python distribution regardless. I agree. However, even if we purged all only-case-differing file names, without a patch on macosx, you still can crash python with a miscase typo, as it'll try to import the same module twice under a different name: >>> import cStringIO >>> import cstringio dyld: python2.0 multiple definitions of symbol _initcStringIO /usr/local/lib/python2.0/lib-dynload/cStringIO.so definition of _initcStringIO /usr/local/lib/python2.0/lib-dynload/cstringio.so definition of _initcStringIO while with the patch, I get: ImportError: No module named cstringio ---| Steven D. Majewski (804-982-0831) |--- ---| Department of Molecular Physiology and Biological Physics |--- ---| University of Virginia Health Sciences Center |--- ---| P.O. Box 10011 Charlottesville, VA 22906-0011 |--- "All operating systems want to be unix, All programming languages want to be lisp." From mal at lemburg.com Fri Feb 2 18:19:20 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 02 Feb 2001 18:19:20 +0100 Subject: [Python-Dev] insertdict slower? References: <14970.55362.332519.654243@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <3A7AEC18.BEA891B@lemburg.com> Jeremy Hylton wrote: > > >>>>> "TP" == Tim Peters writes: > > TP> [Jeremy] > >> I was curious about what the DictCreation microbenchmark in > >> pybench was slower (about 15%) with 2.1 than with 2.0. I ran > >> both with profiling enabled (-pg, no -O) and see that insertdict > >> is a fair bit slower in 2.1. Anyone with dict implementation > >> expertise want to hazard a guess about this? > > TP> You don't need to be an expert for this one: just look at the > TP> code! There's nothing to it, and not even a comment has changed > TP> in insertdict since 2.0. I don't believe the profile. > > [...] > > TP> So you're looking at a buggy profiler, a buggy profiling > TP> procedure, or a Cache Mystery (the catch-all excuse for anything > TP> that's incomprehensible without HW-level monitoring tools). > TP> [...] > > I wanted to be sure that some other change to the dictionary code > didn't have the unintended consequence of slowing down insertdict. > There is a real and measurable slowdown in MAL's DictCreation > microbenchmark, which needs to be explained somehow. insertdict > sounds more plausible than many other explanations. But I don't have > any more time to think about this before the release. The benchmark uses integers as keys, so Fred's string optimization isn't used. Instead, PyObject_RichCompareBool() gets triggered and this probably causes the slowdown. You should notice a similar slowdown for all non-string keys. Since dictionaries only check for equality, perhaps we should tweak the rich compare implementation to provide a highly optimized implementation for this single case ?! -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From barry at digicool.com Fri Feb 2 18:23:55 2001 From: barry at digicool.com (Barry A. 
Warsaw) Date: Fri, 2 Feb 2001 12:23:55 -0500 Subject: [Python-Dev] Adding pymalloc to the core (Benchmarking "fun" (was Re: Python 2.1 slower than 2.0)) References: <3A7890AB.69B893F9@lemburg.com> <14969.58781.410229.433814@w221.z064000254.bwi-md.dsl.cnc.net> <3A7A853C.B38C1DF5@lemburg.com> <14970.49656.634425.131854@anthem.wooz.org> <3A7ACACE.679D372@lemburg.com> Message-ID: <14970.60715.484580.346561@anthem.wooz.org> >>>>> "M" == M writes: M> With the recent additions of rather important changes I see the M> need for a third alpha, so getting the beta out for IPC9 will M> probably not work anyway. M> Let's get the alpha 2 out today and then add pymalloc to alpha M> 3. It might be fun , then to have a bof or devday discussion about some of the new features. bringing-my-asbestos-longjohns-ly y'rs, -Barry From skip at mojam.com Fri Feb 2 18:24:30 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 2 Feb 2001 11:24:30 -0600 (CST) Subject: [Python-Dev] creating __all__ in extension modules In-Reply-To: <034f01c08d30$65e5cec0$e46940d5@hagrid> References: <14970.54124.352613.111534@beluga.mojam.com> <034f01c08d30$65e5cec0$e46940d5@hagrid> Message-ID: <14970.60750.570192.452062@beluga.mojam.com> Fredrik> what's the point? doesn't from-import already do exactly that Fredrik> on C extensions? Consider os. At one point it does "from posix import *". Okay, which symbols now in its local namespace came from posix and which from its own devices? It's a lot easier to do from posix import __all__ as _all __all__.extend(_all) del _all than to muck about importing posix, looping over its dict, then incorporating what it finds. It also makes things a bit more consistent for introspective tools. Skip From sdm7g at virginia.edu Fri Feb 2 18:46:23 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Fri, 2 Feb 2001 12:46:23 -0500 (EST) Subject: [Python-Dev] Case sensitive import. In-Reply-To: <14970.59755.154176.579551@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: On Fri, 2 Feb 2001, Jeremy Hylton wrote: > > Our plan is to remove all of these modules and move the constants they > define into the modules that provide the interface. Fred has already > removed SOCKET, since all the constants are defined in socket. I > don't think we'll get to the others in time for 2.1a2. > > Guido is strongly opposed to continuing after check_case returns > false. His explanation is that imports ought to work whether all the > there are multiple directories on sys.path or all the files are copied > into a single directory. Obviously on file systems like HFS+, it > would be impossible to have FCNTL.py and fcntl.py be in the same > directory. This is in my previous message to the list, but since there seems to be (from my end, anyway) a long delay in list propagation, I'll repeat to you, Jeremy: The other problem is that without a patch, you can crash python with a mis-cased typo, as it tries to import the same module under two names: >>> import cStringIO >>> import cstringio dyld: python2.0 multiple definitions of symbol _initcStringIO /usr/local/lib/python2.0/lib-dynload/cStringIO.so definition of _initcStringIO /usr/local/lib/python2.0/lib-dynload/cstringio.so definition of _initcStringIO [ crash and burn back to shell prompt... ] instead of (with patch): >>> import cstringio Traceback (most recent call last): File " ", line 1, in ? 
ImportError: No module named cstringio >>> A .py module doesn't crash like a .so module, but it still yields two (or more) different modules for each case spelling, which could be the source of some pretty hard to find bugs when MyModule.val != mymodule.val. ( Which is a more innocent mistake than the person who actually writes two different files for MyModule.py and mymodule.py ! ) ---| Steven D. Majewski (804-982-0831) |--- ---| Department of Molecular Physiology and Biological Physics |--- ---| University of Virginia Health Sciences Center |--- ---| P.O. Box 10011 Charlottesville, VA 22906-0011 |--- "All operating systems want to be unix, All programming languages want to be lisp." From skip at mojam.com Fri Feb 2 18:54:24 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 2 Feb 2001 11:54:24 -0600 (CST) Subject: [Python-Dev] Diamond x Jungle Carpet Python In-Reply-To: <20010202072422.6B673F4DD@mail.python.org> References: <20010202072422.6B673F4DD@mail.python.org> Message-ID: <14970.62544.580964.817325@beluga.mojam.com> Rod> I have several Diamond x Jungle Capret Pythons for SALE. Rod> Make me an offer.... I don't know Rod. Are they case-sensitive? What's their performance on regular expressions? Do they pass the 2.1a1 regression test suite? Have you been able to train them to understand function attributes? (Though the picture does show a lovely snake, I do believe you hit the wrong mailing list with your offer. The only python's we deal with here are the electronic programming language variety...) :-) -- Skip Montanaro (skip at mojam.com) Support Mojam & Musi-Cal: http://www.musi-cal.com/sponsor.shtml (847)971-7098 From skip at mojam.com Fri Feb 2 18:50:33 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 2 Feb 2001 11:50:33 -0600 (CST) Subject: [Python-Dev] Case sensitive import In-Reply-To: References: Message-ID: <14970.62313.653086.107554@beluga.mojam.com> Tim> It's still terrible style to *rely* on case-sensitivity in file Tim> names, and all such crap should be purged from the Python Tim> distribution regardless. Then the Python directory or the python executable should be renamed. I sense some deja vu here... anyone-for-a.out?-ly y'rs, Skip From fdrake at acm.org Fri Feb 2 18:56:27 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Fri, 2 Feb 2001 12:56:27 -0500 (EST) Subject: [Python-Dev] Python 2.1 alpha 2 docs released Message-ID: <14970.62667.518807.370544@cj42289-a.reston1.va.home.com> The documentation for the Python 2.1 alpha 2 release is now available. View it online at: http://python.sourceforge.net/devel-docs/ (This version will be updated as the documentation evolves, so will be updated beyond what's in the downloadable packages.) Downloadable packages in many formats are also available at: ftp://ftp.python.org/pub/python/doc/2.1a2/ Please avoid printing this documentation -- it's for the alpha, and could waste entire forests! Thanks to everyone who has helped improve the documentation! As always, suggestions and bug reports are welcome. For more instructions on how to file bug reports and where to send suggestions for improvement, see: http://python.sourceforge.net/devel-docs/about.html -Fred -- Fred L. Drake, Jr. PythonLabs at Digital Creations From barry at digicool.com Fri Feb 2 19:34:59 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 2 Feb 2001 13:34:59 -0500 Subject: [Python-Dev] Case sensitive import. 
References: <14970.59755.154176.579551@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <14970.64979.584372.4671@anthem.wooz.org> Steve, I'm tasked with look at your patch for 2.1a2, and I have some questions and issues (since I'm just spinning up on this). First, what is the relationship of patch #103495 with the Cygwin patch #103154? They look like they address similar issues. Would you say that yours subsumes 103154, or at least will solve some of the problems jlt63 talks about in his patch? The other problem is that I do not have a Cygwin system to test on, and my wife isn't (yet :) psyched for me to do much debugging on her Mac (which doesn't have MacOSX on it yet). The best I can do is make sure your patch applies cleanly and doesn't break the Linux build. Would that work for you for 2.1a2? -Barry From sdm7g at virginia.edu Fri Feb 2 19:46:32 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Fri, 2 Feb 2001 13:46:32 -0500 (EST) Subject: [Python-Dev] Case sensitive import In-Reply-To: <14970.62313.653086.107554@beluga.mojam.com> Message-ID: On Fri, 2 Feb 2001, Skip Montanaro wrote: > Tim> It's still terrible style to *rely* on case-sensitivity in file > Tim> names, and all such crap should be purged from the Python > Tim> distribution regardless. > > Then the Python directory or the python executable should be renamed. I > sense some deja vu here... > > anyone-for-a.out?-ly y'rs, I was going to suggest renaming the Python/ directory to Core/, but what happens when it tries to dump core ? -- Steve From barry at digicool.com Fri Feb 2 19:50:45 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 2 Feb 2001 13:50:45 -0500 Subject: [Python-Dev] Case sensitive import References: <14970.62313.653086.107554@beluga.mojam.com> Message-ID: <14971.389.284504.519600@anthem.wooz.org> >>>>> "SDM" == Steven D Majewski writes: SDM> I was going to suggest renaming the Python/ directory to SDM> Core/, but what happens when it tries to dump core ? Interpreter/ ?? 8-dot-3-ly y'rs, -Barry From barry at digicool.com Fri Feb 2 19:53:48 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 2 Feb 2001 13:53:48 -0500 Subject: [Python-Dev] Case sensitive import. References: <14970.59755.154176.579551@w221.z064000254.bwi-md.dsl.cnc.net> <14970.64979.584372.4671@anthem.wooz.org> Message-ID: <14971.572.369273.721571@anthem.wooz.org> >>>>> "BAW" == Barry A Warsaw writes: BAW> I'm tasked with look at your patch for 2.1a2, and I have some BAW> questions and issues (since I'm just spinning up on this). Steve, your patch is slightly broken for Linux (RH 6.1), which doesn't have a d_namelen slot in the struct dirent. I wormed around that by testing on #ifdef _DIRENT_HAVE_D_NAMLEN which appears to be the Linuxy way of determining the existance of this slot. If it's missing, I just strlen(dp->d_name). I'm doing a "make test" now and will test import of getpass to make sure it doesn't break on Linux. If it looks good, I'll upload a new version of the patch (which also contains consistent C style fixes) to SF and commit the patch for 2.1a2. -Barry From barry at digicool.com Fri Feb 2 20:05:40 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 2 Feb 2001 14:05:40 -0500 Subject: [Python-Dev] Case sensitive import. 
References: <14970.59755.154176.579551@w221.z064000254.bwi-md.dsl.cnc.net> <14970.64979.584372.4671@anthem.wooz.org> <14971.572.369273.721571@anthem.wooz.org> Message-ID: <14971.1284.474393.800832@anthem.wooz.org> Patch passes regr test and import getpass on Linux, so I'm prepared to commit it for 2.1a2. Y'all are going to have to stress test it on other platforms. -Barry From sdm7g at virginia.edu Fri Feb 2 21:23:29 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Fri, 2 Feb 2001 15:23:29 -0500 (EST) Subject: [Python-Dev] Case sensitive import. In-Reply-To: <14971.1284.474393.800832@anthem.wooz.org> Message-ID: On Fri, 2 Feb 2001, Barry A. Warsaw wrote: > Patch passes regr test and import getpass on Linux, so I'm prepared to > commit it for 2.1a2. Y'all are going to have to stress test it on > other platforms. Revised patch builds on macosx. 'make test' finds the same 4 unrelated errors it always gets on macosx, so it's not any worse than before. It passes my own test cases for this problem. ---| Steven D. Majewski (804-982-0831) |--- ---| Department of Molecular Physiology and Biological Physics |--- ---| University of Virginia Health Sciences Center |--- ---| P.O. Box 10011 Charlottesville, VA 22906-0011 |--- "All operating systems want to be unix, All programming languages want to be lisp." From barry at digicool.com Fri Feb 2 21:23:58 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 2 Feb 2001 15:23:58 -0500 Subject: [Python-Dev] Case sensitive import. References: <14971.1284.474393.800832@anthem.wooz.org> Message-ID: <14971.5982.164358.917299@anthem.wooz.org> Great, thanks Steve. Jeremy, go for it. -Barry From nas at arctrix.com Fri Feb 2 22:37:06 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Fri, 2 Feb 2001 13:37:06 -0800 Subject: [Python-Dev] Case sensitive import In-Reply-To: <14971.389.284504.519600@anthem.wooz.org>; from barry@digicool.com on Fri, Feb 02, 2001 at 01:50:45PM -0500 References: <14970.62313.653086.107554@beluga.mojam.com> <14971.389.284504.519600@anthem.wooz.org> Message-ID: <20010202133706.A29820@glacier.fnational.com> On Fri, Feb 02, 2001 at 01:50:45PM -0500, Barry A. Warsaw wrote: > > >>>>> "SDM" == Steven D Majewski writes: > > SDM> I was going to suggest renaming the Python/ directory to > SDM> Core/, but what happens when it tries to dump core ? > > Interpreter/ ?? If we do bite the bullet and make this change I vote for PyCore. Neil From sdm7g at virginia.edu Fri Feb 2 23:40:10 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Fri, 2 Feb 2001 17:40:10 -0500 (EST) Subject: [Python-Dev] Case sensitive import. In-Reply-To: <14970.64979.584372.4671@anthem.wooz.org> Message-ID: I don't have Cygwin either and what's more, I don't do much with MS-Windows, so I'm not familiar with some of the functions called in that patch. HFS+ filesystem on MacOSX is case preserving but case insensitive, which means that open("File") succeeds for any of: "file","File","FILE" ... The dirent functions verifies that there is in fact a "File" in that directory, and if not continues the search. There was some discussion about whether it should be #ifdef-ed diferently or more specifically. I don't know if any other system than macosx or Cygwin (if it works on that platform) require that test. (Although I'm glad you got it to compile on Linux, since the other likely case I can think of is LinuxPPC with a mac filesystem.) I guess if it compiles, then it doesn't hurt, except for the extra overhead. 
( But, since it continues looking for a match, I couldn't use the CHECK_IMPORT_CASE switch. ) -- Steve On Fri, 2 Feb 2001, Barry A. Warsaw wrote: > First, what is the relationship of patch #103495 with the Cygwin patch > #103154? They look like they address similar issues. Would you say > that yours subsumes 103154, or at least will solve some of the > problems jlt63 talks about in his patch? > > The other problem is that I do not have a Cygwin system to test on, > and my wife isn't (yet :) psyched for me to do much debugging on her > Mac (which doesn't have MacOSX on it yet). The best I can do is make > sure your patch applies cleanly and doesn't break the Linux build. > Would that work for you for 2.1a2? From fredrik at effbot.org Fri Feb 2 21:49:47 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Fri, 2 Feb 2001 21:49:47 +0100 Subject: [Python-Dev] Diamond x Jungle Carpet Python References: <20010202072422.6B673F4DD@mail.python.org> <14970.62544.580964.817325@beluga.mojam.com> Message-ID: <00c401c08d5b$090ed040$e46940d5@hagrid> Skip wrote: > (Though the picture does show a lovely snake, I do believe you hit the wrong > mailing list with your offer. The only python's we deal with here are the > electronic programming language variety...) he's spammed every single python list, and many python "celebrities". I got a bunch this morning (I'm obviously using too many mail aliases), and have gotten several daily-URL contributions from people who thought it was cute when they saw the *first* copy... Cheers /F From skip at mojam.com Fri Feb 2 23:07:43 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 2 Feb 2001 16:07:43 -0600 (CST) Subject: [Python-Dev] Waaay off topic, but I felt I had to share this... Message-ID: <14971.12207.566272.185258@beluga.mojam.com> Most of you know I have my feelers out looking for work. I've registered with a number of online job sites like Monster.com and Hotjobs.com. These sites allow you to set up "agents" that scan their database for new job postings that match your search criteria. Today I got an interesting "match" from Hotjobs.com's agent: ***Your Chicago Software agent yielded 1 jobs: 1. Vice President - Internet Technology Playboy Enterprises Inc. http://www.hotjobs.com/cgi-bin/job-show-mysql?J__PINDEX=J612497NR I wonder if they know something they're not telling me? Could it be that the chrome on my dome *is* actually a sign of virility? The job responsibilities sound interesting for someone about half my age: Research, design and direct the implementation of state-of-the-art applications and database technologies to support Playboy.com's products and services. I wonder how committed they are to research? Alas, they aren't looking for Python skills, so I'm not going to apply. Maybe I should hook them up with the guy selling snakes... Skip From skip at mojam.com Fri Feb 2 22:24:50 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 2 Feb 2001 15:24:50 -0600 (CST) Subject: [Python-Dev] Case sensitive import In-Reply-To: References: <14970.62313.653086.107554@beluga.mojam.com> Message-ID: <14971.9634.992818.225516@beluga.mojam.com> Steve> I was going to suggest renaming the Python/ directory to Core/, Steve> but what happens when it tries to dump core ? PyCore? There was a thread on this recently, and Guido nixed the idea of renaming anything, but I can't remember what his rationale was. Something about breaking build instructions somewhere? 
Skip From jeremy at alum.mit.edu Sat Feb 3 00:39:51 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 2 Feb 2001 18:39:51 -0500 (EST) Subject: [Python-Dev] Python 2.1 alpha 2 released Message-ID: <14971.17735.263154.15769@w221.z064000254.bwi-md.dsl.cnc.net> While Guido was working the press circuit at the LinuxWorld Expo in New York City, the Python developers, including the many volunteers and the folks from PythonLabs, were busy finishing the second alpha release of Python 2.1. The release is currently available from SourceForge and will also be available from python.org later today. You can find the source release at: http://sourceforge.net/project/showfiles.php?group_id=5470 The Windows installer will be ready shortly. Fred Drake announced the documentation release earlier today. You can browse the new docs online at http://python.sourceforge.net/devel-docs/ or download them from ftp://ftp.python.org/pub/python/doc/2.1a2/ Please give it a good try! The only way Python 2.1 can become a rock-solid product is if people test the alpha releases. If you are using Python for demanding applications or on extreme platforms, we are particularly interested in hearing your feedback. Are you embedding Python or using threads? Please test your application using Python 2.1a2! Please submit all bug reports through SourceForge: http://sourceforge.net/bugs/?group_id=5470 Here's the NEWS file: What's New in Python 2.1 alpha 2? ================================= Core language, builtins, and interpreter - Scopes nest. If a name is used in a function or class, but is not local, the definition in the nearest enclosing function scope will be used. One consequence of this change is that lambda statements could reference variables in the namespaces where the lambda is defined. In some unusual cases, this change will break code. In all previous version of Python, names were resolved in exactly three namespaces -- the local namespace, the global namespace, and the builtin namespace. According to this old definition, if a function A is defined within a function B, the names bound in B are not visible in A. The new rules make names bound in B visible in A, unless A contains a name binding that hides the binding in B. Section 4.1 of the reference manual describes the new scoping rules in detail. The test script in Lib/test/test_scope.py demonstrates some of the effects of the change. The new rules will cause existing code to break if it defines nested functions where an outer function has local variables with the same name as globals or builtins used by the inner function. Example: def munge(str): def helper(x): return str(x) if type(str) != type(''): str = helper(str) return str.strip() Under the old rules, the name str in helper() is bound to the builtin function str(). Under the new rules, it will be bound to the argument named str and an error will occur when helper() is called. - The compiler will report a SyntaxError if "from ... import *" occurs in a function or class scope. The language reference has documented that this case is illegal, but the compiler never checked for it. The recent introduction of nested scope makes the meaning of this form of name binding ambiguous. In a future release, the compiler may allow this form when there is no possibility of ambiguity. 
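For contrast with the breakage example above, here is a tiny case -- illustrative only, not taken from the NEWS file -- that the new scoping rules make work:

    def make_adder(n):
        def add(x):
            return x + n    # 'n' is found in the enclosing scope under 2.1
        return add

    add3 = make_adder(3)
    print add3(7)           # prints 10; under the old rules 'n' was looked
                            # up as a global and this raised a NameError
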
- repr(string) is easier to read, now using hex escapes instead of octal, and using \t, \n and \r instead of \011, \012 and \015 (respectively): >>> "\texample \r\n" + chr(0) + chr(255) '\texample \r\n\x00\xff' # in 2.1 '\011example \015\012\000\377' # in 2.0 - Functions are now compared and hashed by identity, not by value, since the func_code attribute is writable. - Weak references (PEP 205) have been added. This involves a few changes in the core, an extension module (_weakref), and a Python module (weakref). The weakref module is the public interface. It includes support for "explicit" weak references, proxy objects, and mappings with weakly held values. - A 'continue' statement can now appear in a try block within the body of a loop. It is still not possible to use continue in a finally clause. Standard library - mailbox.py now has a new class, PortableUnixMailbox which is identical to UnixMailbox but uses a more portable scheme for determining From_ separators. Also, the constructors for all the classes in this module have a new optional `factory' argument, which is a callable used when new message classes must be instantiated by the next() method. - random.py is now self-contained, and offers all the functionality of the now-deprecated whrandom.py. See the docs for details. random.py also supports new functions getstate() and setstate(), for saving and restoring the internal state of the generator; and jumpahead(n), for quickly forcing the internal state to be the same as if n calls to random() had been made. The latter is particularly useful for multi- threaded programs, creating one instance of the random.Random() class for each thread, then using .jumpahead() to force each instance to use a non-overlapping segment of the full period. - random.py's seed() function is new. For bit-for-bit compatibility with prior releases, use the whseed function instead. The new seed function addresses two problems: (1) The old function couldn't produce more than about 2**24 distinct internal states; the new one about 2**45 (the best that can be done in the Wichmann-Hill generator). (2) The old function sometimes produced identical internal states when passed distinct integers, and there was no simple way to predict when that would happen; the new one guarantees to produce distinct internal states for all arguments in [0, 27814431486576L). - The socket module now supports raw packets on Linux. The socket family is AF_PACKET. - test_capi.py is a start at running tests of the Python C API. The tests are implemented by the new Modules/_testmodule.c. - A new extension module, _symtable, provides provisional access to the internal symbol table used by the Python compiler. A higher-level interface will be added on top of _symtable in a future release. Windows changes - Build procedure: the zlib project is built in a different way that ensures the zlib header files used can no longer get out of synch with the zlib binary used. See PCbuild\readme.txt for details. Your old zlib-related directories can be deleted; you'll need to download fresh source for zlib and unpack it into a new directory. - Build: New subproject _test for the benefit of test_capi.py (see above). - Build: subproject ucnhash is gone, since the code was folded into the unicodedata subproject. What's New in Python 2.1 alpha 1? ================================= Core language, builtins, and interpreter - There is a new Unicode companion to the PyObject_Str() API called PyObject_Unicode(). 
It behaves in the same way as the former, but assures that the returned value is an Unicode object (applying the usual coercion if necessary). - The comparison operators support "rich comparison overloading" (PEP 207). C extension types can provide a rich comparison function in the new tp_richcompare slot in the type object. The cmp() function and the C function PyObject_Compare() first try the new rich comparison operators before trying the old 3-way comparison. There is also a new C API PyObject_RichCompare() (which also falls back on the old 3-way comparison, but does not constrain the outcome of the rich comparison to a Boolean result). The rich comparison function takes two objects (at least one of which is guaranteed to have the type that provided the function) and an integer indicating the opcode, which can be Py_LT, Py_LE, Py_EQ, Py_NE, Py_GT, Py_GE (for <, <=, ==, !=, >, >=), and returns a Python object, which may be NotImplemented (in which case the tp_compare slot function is used as a fallback, if defined). Classes can overload individual comparison operators by defining one or more of the methods__lt__, __le__, __eq__, __ne__, __gt__, __ge__. There are no explicit "reflected argument" versions of these; instead, __lt__ and __gt__ are each other's reflection, likewise for__le__ and __ge__; __eq__ and __ne__ are their own reflection (similar at the C level). No other implications are made; in particular, Python does not assume that == is the Boolean inverse of !=, or that < is the Boolean inverse of >=. This makes it possible to define types with partial orderings. Classes or types that want to implement (in)equality tests but not the ordering operators (i.e. unordered types) should implement == and !=, and raise an error for the ordering operators. It is possible to define types whose rich comparison results are not Boolean; e.g. a matrix type might want to return a matrix of bits for A < B, giving elementwise comparisons. Such types should ensure that any interpretation of their value in a Boolean context raises an exception, e.g. by defining __nonzero__ (or the tp_nonzero slot at the C level) to always raise an exception. - Complex numbers use rich comparisons to define == and != but raise an exception for <, <=, > and >=. Unfortunately, this also means that cmp() of two complex numbers raises an exception when the two numbers differ. Since it is not mathematically meaningful to compare complex numbers except for equality, I hope that this doesn't break too much code. - Functions and methods now support getting and setting arbitrarily named attributes (PEP 232). Functions have a new __dict__ (a.k.a. func_dict) which hold the function attributes. Methods get and set attributes on their underlying im_func. It is a TypeError to set an attribute on a bound method. - The xrange() object implementation has been improved so that xrange(sys.maxint) can be used on 64-bit platforms. There's still a limitation that in this case len(xrange(sys.maxint)) can't be calculated, but the common idiom "for i in xrange(sys.maxint)" will work fine as long as the index i doesn't actually reach 2**31. (Python uses regular ints for sequence and string indices; fixing that is much more work.) - Two changes to from...import: 1) "from M import X" now works even if M is not a real module; it's basically a getattr() operation with AttributeError exceptions changed into ImportError. 
2) "from M import *" now looks for M.__all__ to decide which names to import; if M.__all__ doesn't exist, it uses M.__dict__.keys() but filters out names starting with '_' as before. Whether or not __all__ exists, there's no restriction on the type of M. - File objects have a new method, xreadlines(). This is the fastest way to iterate over all lines in a file: for line in file.xreadlines(): ...do something to line... See the xreadlines module (mentioned below) for how to do this for other file-like objects. - Even if you don't use file.xreadlines(), you may expect a speedup on line-by-line input. The file.readline() method has been optimized quite a bit in platform-specific ways: on systems (like Linux) that support flockfile(), getc_unlocked(), and funlockfile(), those are used by default. On systems (like Windows) without getc_unlocked(), a complicated (but still thread-safe) method using fgets() is used by default. You can force use of the fgets() method by #define'ing USE_FGETS_IN_GETLINE at build time (it may be faster than getc_unlocked()). You can force fgets() not to be used by #define'ing DONT_USE_FGETS_IN_GETLINE (this is the first thing to try if std test test_bufio.py fails -- and let us know if it does!). - In addition, the fileinput module, while still slower than the other methods on most platforms, has been sped up too, by using file.readlines(sizehint). - Support for run-time warnings has been added, including a new command line option (-W) to specify the disposition of warnings. See the description of the warnings module below. - Extensive changes have been made to the coercion code. This mostly affects extension modules (which can now implement mixed-type numerical operators without having to use coercion), but occasionally, in boundary cases the coercion semantics have changed subtly. Since this was a terrible gray area of the language, this is considered an improvement. Also note that __rcmp__ is no longer supported -- instead of calling __rcmp__, __cmp__ is called with reflected arguments. - In connection with the coercion changes, a new built-in singleton object, NotImplemented is defined. This can be returned for operations that wish to indicate they are not implemented for a particular combination of arguments. From C, this is Py_NotImplemented. - The interpreter accepts now bytecode files on the command line even if they do not have a .pyc or .pyo extension. On Linux, after executing echo ':pyc:M::\x87\xc6\x0d\x0a::/usr/local/bin/python:' > /proc/sys/fs/binfmt_misc/register any byte code file can be used as an executable (i.e. as an argument to execve(2)). - %[xXo] formats of negative Python longs now produce a sign character. In 1.6 and earlier, they never produced a sign, and raised an error if the value of the long was too large to fit in a Python int. In 2.0, they produced a sign if and only if too large to fit in an int. This was inconsistent across platforms (because the size of an int varies across platforms), and inconsistent with hex() and oct(). Example: >>> "%x" % -0x42L '-42' # in 2.1 'ffffffbe' # in 2.0 and before, on 32-bit machines >>> hex(-0x42L) '-0x42L' # in all versions of Python The behavior of %d formats for negative Python longs remains the same as in 2.0 (although in 1.6 and before, they raised an error if the long didn't fit in a Python int). %u formats don't make sense for Python longs, but are allowed and treated the same as %d in 2.1. In 2.0, a negative long formatted via %u produced a sign if and only if too large to fit in an int. 
In 1.6 and earlier, a negative long formatted via %u raised an error if it was too big to fit in an int. - Dictionary objects have an odd new method, popitem(). This removes an arbitrary item from the dictionary and returns it (in the form of a (key, value) pair). This can be useful for algorithms that use a dictionary as a bag of "to do" items and repeatedly need to pick one item. Such algorithms normally end up running in quadratic time; using popitem() they can usually be made to run in linear time. Standard library - In the time module, the time argument to the functions strftime, localtime, gmtime, asctime and ctime is now optional, defaulting to the current time (in the local timezone). - The ftplib module now defaults to passive mode, which is deemed a more useful default given that clients are often inside firewalls these days. Note that this could break if ftplib is used to connect to a *server* that is inside a firewall, from outside; this is expected to be a very rare situation. To fix that, you can call ftp.set_pasv(0). - The module site now treats .pth files not only for path configuration, but also supports extensions to the initialization code: Lines starting with import are executed. - There's a new module, warnings, which implements a mechanism for issuing and filtering warnings. There are some new built-in exceptions that serve as warning categories, and a new command line option, -W, to control warnings (e.g. -Wi ignores all warnings, -We turns warnings into errors). warnings.warn(message[, category]) issues a warning message; this can also be called from C as PyErr_Warn(category, message). - A new module xreadlines was added. This exports a single factory function, xreadlines(). The intention is that this code is the absolutely fastest way to iterate over all lines in an open file(-like) object: import xreadlines for line in xreadlines.xreadlines(file): ...do something to line... This is equivalent to the previous the speed record holder using file.readlines(sizehint). Note that if file is a real file object (as opposed to a file-like object), this is equivalent: for line in file.xreadlines(): ...do something to line... - The bisect module has new functions bisect_left, insort_left, bisect_right and insort_right. The old names bisect and insort are now aliases for bisect_right and insort_right. XXX_right and XXX_left methods differ in what happens when the new element compares equal to one or more elements already in the list: the XXX_left methods insert to the left, the XXX_right methods to the right. Code that doesn't care where equal elements end up should continue to use the old, short names ("bisect" and "insort"). - The new curses.panel module wraps the panel library that forms part of SYSV curses and ncurses. Contributed by Thomas Gellekum. - The SocketServer module now sets the allow_reuse_address flag by default in the TCPServer class. - A new function, sys._getframe(), returns the stack frame pointer of the caller. This is intended only as a building block for higher-level mechanisms such as string interpolation. Build issues - For Unix (and Unix-compatible) builds, configuration and building of extension modules is now greatly automated. Rather than having to edit the Modules/Setup file to indicate which modules should be built and where their include files and libraries are, a distutils-based setup.py script now takes care of building most extension modules. All extension modules built this way are built as shared libraries. 
Only a few modules that must be linked statically are still listed in the Setup file; you won't need to edit their configuration. - Python should now build out of the box on Cygwin. If it doesn't, mail to Jason Tishler (jlt63 at users.sourceforge.net). - Python now always uses its own (renamed) implementation of getopt() -- there's too much variation among C library getopt() implementations. - C++ compilers are better supported; the CXX macro is always set to a C++ compiler if one is found. Windows changes - select module: By default under Windows, a select() call can specify no more than 64 sockets. Python now boosts this Microsoft default to 512. If you need even more than that, see the MS docs (you'll need to #define FD_SETSIZE and recompile Python from source). - Support for Windows 3.1, DOS and OS/2 is gone. The Lib/dos-8x3 subdirectory is no more! -- Jeremy Hylton From skip at mojam.com Sat Feb 3 02:10:11 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 2 Feb 2001 19:10:11 -0600 (CST) Subject: [Python-Dev] linuxaudiodev crashes Message-ID: <14971.23155.335303.830239@beluga.mojam.com> I've been getting this for awhile on my laptop (Mandrake 7.1): test test_linuxaudiodev crashed -- linuxaudiodev.error: (11, 'Resource temporarily unavailable') RealPlayer works fine so I suspect the infrastructure is there and functioning. Anyone else seeing this? Skip From dkwolfe at pacbell.net Sat Feb 3 02:08:43 2001 From: dkwolfe at pacbell.net (Dan Wolfe) Date: Fri, 02 Feb 2001 17:08:43 -0800 Subject: [Python-Dev] Case sensitive import In-Reply-To: Message-ID: <0G8500859PMIQL@mta5.snfc21.pbi.net> It's been suggested (eg pyCore).... and shot down.... uhh, IIRC, due to "millions and millions of Python developers" (thanks Tim! ) who don't want to change their directory structures and the fact that nobody wanted to lose the CVS log files/do the clean up... Alas, we gonna go around and around until we either decide to bite the bullet and "just do it" or allow a multitude of hacks to be put in place to work around the issue... it-may-be-painful-once-but-it's-a-lot-less-painful-than-a-thousand- times'ly yours, - Dan On Friday, February 2, 2001, at 10:46 AM, Steven D. Majewski wrote: > On Fri, 2 Feb 2001, Skip Montanaro wrote: > >> Tim> It's still terrible style to *rely* on case-sensitivity in >> file >> Tim> names, and all such crap should be purged from the Python >> Tim> distribution regardless. >> >> Then the Python directory or the python executable should be >> renamed. I >> sense some deja vu here... >> >> anyone-for-a.out?-ly y'rs, > > > I was going to suggest renaming the Python/ directory to Core/, > but what happens when it tries to dump core ? > > -- Steve From skip at mojam.com Sat Feb 3 03:09:45 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 2 Feb 2001 20:09:45 -0600 (CST) Subject: [Python-Dev] Setup.local is getting zapped Message-ID: <14971.26729.54529.333522@beluga.mojam.com> Modules/Setup.local is getting zapped by some aspect of the build process. Not sure by what step, but mine had lines I added to it a few days ago, and nothing now. It should be treated as Modules/Setup used to be: initialize it if it's absent, don't touch it if it's there. The distclean target looks like the culprit: distclean: clobber -rm -f Makefile Makefile.pre buildno config.status config.log \ config.cache config.h setup.cfg Modules/config.c \ Modules/Setup Modules/Setup.local Modules/Setup.config I've been using it a lot lately to build from scratch, what with the new Makefile and setup.py. 
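Skip's rule of thumb, sketched in Python just to pin down the intended behaviour -- the real fix would of course live in the Makefile, and the placeholder file contents here are invented:

    import os

    setup_local = "Modules/Setup.local"

    # "Initialize it if it's absent, don't touch it if it's there."
    if not os.path.exists(setup_local):
        f = open(setup_local, "w")
        f.write("# Edit this file for local module configuration;\n")
        f.write("# the build should never overwrite or delete it.\n")
        f.close()
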
Since Setup.local is ostensibly something a user would edit manually and would never have useful content in it as distributed, I don't think even distclean should zap it. Skip From guido at digicool.com Sat Feb 3 03:21:11 2001 From: guido at digicool.com (Guido van Rossum) Date: Fri, 02 Feb 2001 21:21:11 -0500 Subject: [Python-Dev] 2.1a2 released Message-ID: <200102030221.VAA09351@cj20424-a.reston1.va.home.com> I noticed that the source tarball and Windows installer were in place on SF and ftp.python.org, so I've updated the webpages python.org and python.org/2.1/. Seems email is wedged again so I don't know when people will get this email and if there was something to wait for -- I presume not. I'll mail an official announcement out tomorrow. Going to bed now...! --Guido van Rossum (home page: http://www.python.org/~guido/) From esr at thyrsus.com Sat Feb 3 03:25:28 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Fri, 2 Feb 2001 21:25:28 -0500 Subject: [Python-Dev] Re: Sets: elt in dict, lst.include In-Reply-To: <20010130092454.D18319@glacier.fnational.com>; from nas@arctrix.com on Tue, Jan 30, 2001 at 09:24:54AM -0800 References: <200101300206.VAA21925@cj20424-a.reston1.va.home.com> <20010130092454.D18319@glacier.fnational.com> Message-ID: <20010202212528.D27105@thyrsus.com> Neil Schemenauer : > [Tim Peters on adding yet more syntatic sugar] > > Available time is finite, and this isn't at the top of the list > > of things I'd like to see (resuming the discussion of > > generators + coroutines + iteration protocol comes to mind > > first). > > What's the chances of getting generators into 2.2? The > implementation should not be hard. Didn't Steven Majewski have > something years ago? Why do we always get sidetracked on trying > to figure out how to do coroutines and continuations? > > Generators would add real power to the language and are simple > enough that most users could benefit from them. Also, it should be > possible to design an interface that does not preclude the > addition of coroutines or continuations later. I agree. I think this is a very importand growth direction for the language. -- Eric S. Raymond The whole aim of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, all of them imaginary. -- H.L. Mencken From tim.one at home.com Sat Feb 3 04:38:42 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 2 Feb 2001 22:38:42 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: Message-ID: [Steven D. Majewski] > HFS+ filesystem on MacOSX is case preserving but case insensitive, Same as Windows. > which means that open("File") succeeds for any of: > "file","File","FILE" ... Ditto. > The dirent functions verifies that there is in fact a "File" in > that directory, and if not continues the search. Which is what Jeremy said Guido is "strongly opposed to": His explanation is that imports ought to work whether all the there are multiple directories on sys.path or all the files are copied into a single directory. Obviously on file systems like HFS+, it would be impossible to have FCNTL.py and fcntl.py be in the same directory. In effect, you wrote your own check_case under a different name that-- unlike all other versions of check_case --ignores case mismatches. 
As I said before, I don't care for the current rules (and find_module has become such an #ifdef'ed minefield I'm not sure it's possible to tell what it does *anywhere* anymore), but there's no difference here between Windows filesystems and HFS+, so for the sake of basic sanity they must work the same way. So a retroactive -1 on this last-second patch -- and a waaaaay retroactive -1 on Python's behavior on Windows too. From Jason.Tishler at dothill.com Sat Feb 3 04:14:58 2001 From: Jason.Tishler at dothill.com (Jason Tishler) Date: Fri, 2 Feb 2001 22:14:58 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: <14971.1284.474393.800832@anthem.wooz.org>; from barry@digicool.com on Fri, Feb 02, 2001 at 02:05:40PM -0500 References: <14970.59755.154176.579551@w221.z064000254.bwi-md.dsl.cnc.net> <14970.64979.584372.4671@anthem.wooz.org> <14971.572.369273.721571@anthem.wooz.org> <14971.1284.474393.800832@anthem.wooz.org> Message-ID: <20010202221458.M1800@dothill.com> On Fri, Feb 02, 2001 at 02:05:40PM -0500, Barry A. Warsaw wrote: > Patch passes regr test and import getpass on Linux, so I'm prepared to > commit it for 2.1a2. Y'all are going to have to stress test it on > other platforms. [Sorry for chiming in late, but my family just had its own beta release... :,)] I will test this on Cygwin ASAP and report back to the list. I really appreciate the inclusion of this patch in 2.1a2. Thanks, Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler at dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From tim.one at home.com Sat Feb 3 06:11:11 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 00:11:11 -0500 Subject: [Python-Dev] Re: Sets: elt in dict, lst.include In-Reply-To: <3A788E96.AB823FAE@lemburg.com> Message-ID: [MAL] > ... > Since iterators can define the order in which a data structure is > traversed, this would also do away with the second (supposed) > problem. Except we don't need iterators to do that. If anyone thought it was important, they could change the existing PyDict_Next to force an ordering, and then everything building on that would inherit it. So while I'm in favor of better iteration schemes, I'm not in favor of overselling them (on grounds that aren't unique to them). >> Sorry, but immutability has nothing to do with thread safety ... > Who said that an exception is raised ? I did . > The method I posted on the mutability thread allows querying > the current state just like you would query the availability > of a resource. This? .mutable([flag]) -> integer If called without argument, returns 1/0 depending on whether the object is mutable or not. When called with a flag argument, sets the mutable state of the object to the value indicated by flag and returns the previous flag state. If I do: if object.mutable(): object.mutate() in a threaded world, the certain (but erratic) outcome is that sometimes it blows up: there's no guarantee that another thread doesn't sneak in and *change* the mutability between the time object.mutable() returns 1 and object.mutate() acts on a bad assumption. Same thing for: if resources.num_printers_available() > 0: action_that_blows_up_if_no_printers_are_available in a threaded world. 
It's possible to build a thread-safe resource acquisition protocol in either case, but that's really got nothing to do with immutability or iterators (marking a thing immutable doesn't do any good there unless you *also* build a protocol on top of it for communicating state changes, blocking until one occurs, notifications with optional timeouts, etc -- just doing object.mutable(1) is a threaded disaster in the absence of a higher-level protocol guaranteeing that this won't go changing the mutability state in the middle of some other thread's belief that it's got the thing frozen; likewise for object.mutable(0) not stepping on some other thread's belief that it's got permission to mutate). .mutable(flag) is *fine* for what it does, it's simply got nothing to do with threads. Thread safety could *build* on it via coordinated use of a threading.Sempahore (or moral equivalent), though. From tim.one at home.com Sat Feb 3 06:42:06 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 00:42:06 -0500 Subject: [Python-Dev] Re: Sets: elt in dict, lst.include In-Reply-To: <14968.37210.886842.820413@beluga.mojam.com> Message-ID: [Skip Montanaro] > The problem that rolls around in the back of my mind from time-to-time > is that since Python doesn't currently support interfaces, checking > for specific methods seems to be the only reasonable way to determine > if a object does what you want or not. Except that-- alas! --"what I want" is almost always for it to respond to some specific methods. For example, I don't believe I've *ever* written a class that responds to all the "number" methods (in particular, I'm almost certain not to bother implementing a notion of "shift"). It's also rare for me to define a class that implements all the "sequence" or "mapping" methods. So if we had a Interface.Sequence, all my code would still check for individual sequence operations anyway. Take it to the extreme, and each method becomes an Interface unto itself, which then get grouped into collections in different ways by different people, and in the end I *still* check for specific methods rather than fight with umpteen competing hierarchies. The most interesting "interfaces" to me are things like EuclideanDomain: a set of guarantees about how methods *interact*, and almost nothing to do with which methods a thing supports. A simpler example is TotalOrdering: there is no method unique to total orderings, instead it's a guarantee about how cmp *behaves*. If you want know whether an object x supports slicing, *trying* x[:0] is as direct as it gets. You just hope that x isn't an instance of class Human: def __getslice__(self, lo, hi): """Return a list of activities planned for human self. lo and hi bound the timespan of activities to be returned, in seconds from the epoch. If lo is less than the birthdate of self, treat lo as if it were self's birthdate. If hi is greater than the expected lifetime of self, treat hi as if it were the expected lifetime of self, but also send an execution order to ensure that self does not live beyond that time (this may seem drastic, but the alternative was complaints from customers who exceeded their expected lifetimes, and then demanded to know why "the stupid software" cut off their calendars "early" -- hey, we'll implement infinite memory when humans are immortal). 
""" don't-think-it-hasn't-happened -ly y'rs - tim From tim.one at home.com Sat Feb 3 07:46:08 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 01:46:08 -0500 Subject: [Python-Dev] Case sensitive import In-Reply-To: <0G8500859PMIQL@mta5.snfc21.pbi.net> Message-ID: [Dan Wolfe] > It's been suggested (eg pyCore).... and shot down.... uhh, IIRC, due > to "millions and millions of Python developers" (thanks Tim! ) > who don't want to change their directory structures and the fact that > nobody wanted to lose the CVS log files/do the clean up... Don't thank me, thank Bill Gates for creating a wonderful operating system where I get to ignore *all* the 57-varieties-of-Unix build headaches. That's the only reason I'm so cheerful about supporting unbounded damage to everyone else. But, it's a good reason . BTW, I didn't grok the CVS argument. You don't change the name of the directory, you change the name of the executable. And the obvious name is obvious to me: python.exe . no-need-to-rewrite-history-ly y'rs - tim From tim.one at home.com Sat Feb 3 07:53:53 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 01:53:53 -0500 Subject: [Python-Dev] Generalized "from M. import X" was RE: Python 2.1 alpha 2 released) In-Reply-To: Message-ID: I'm trying to *use* each new feature at least once. It looks like a multiday project . I remember reading the discussion about this one: [from (old!) NEWS] > ... > - Two changes to from...import: > > 1) "from M import X" now works even if M is not a real module; it's > basically a getattr() operation with AttributeError exceptions > changed into ImportError. but in practice it turns out I have no idea what it means. For example, >>> alist = [] >>> hasattr(alist, "sort") 1 >>> from alist import sort Traceback (most recent call last): File " ", line 1, in ? ImportError: No module named alist >>> No, I don't want to *do* that, but the description above makes me wonder what I'm missing. Or, something I *might* want to do (maybe, on my worst day, and on any other day I'd agree I should be shot for even considering it): class Random: def random(self): pass def seed(self): pass def betavariate(self): pass # etc etc _inst = Random() from _inst import random, seed, betavariate # etc, etc Again complains that there's no module named "_inst". So if M does not in fact need to be a real module, what *does* it need to be? Ah: sticking in sys.modules["alist"] = alist first does the (disgusting) trick. So, next gripe: I don't see anything about this in the 2.1a2 docs, although the Lang Ref's section on "the import statement" has always been vague enough to allow it. The missing piece: when the Lang Ref says something is "implementation and platform specific", where does one go to find out what the deal is for your particular implementation and platform? guess-not-to-NEWS -ly y'rs - tim From moshez at zadka.site.co.il Sat Feb 3 08:12:44 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Sat, 3 Feb 2001 09:12:44 +0200 (IST) Subject: [Python-Dev] Generalized "from M. import X" was RE: Python 2.1 alpha 2 released) In-Reply-To: References: Message-ID: <20010203071244.A1598A841@darjeeling.zadka.site.co.il> On Sat, 3 Feb 2001 01:53:53 -0500, "Tim Peters" wrote: > >>> alist = [] > >>> hasattr(alist, "sort") > 1 > >>> from alist import sort > Traceback (most recent call last): > File " ", line 1, in ? > ImportError: No module named alist > >>> Tim, don't you remember to c.l.py discussions? >>> class Foo: ... pass ... 
>>> foo=Foo() >>> foo.foo = 'foo' >>> import sys >>> sys.modules['foo'] = foo >>> import foo >>> print foo.foo foo >>> from foo import foo >>> print foo foo >>> -- Moshe Zadka This is a signature anti-virus. Please stop the spread of signature viruses! Fingerprint: 4BD1 7705 EEC0 260A 7F21 4817 C7FC A636 46D0 1BD6 From tim.one at home.com Sat Feb 3 08:42:05 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 02:42:05 -0500 Subject: [Python-Dev] Generalized "from M. import X" was RE: Python 2.1 alpha 2 released) In-Reply-To: <20010203071244.A1598A841@darjeeling.zadka.site.co.il> Message-ID: [Moshe Zadka] > Tim, don't you remember to c.l.py discussions? Unclear whether I don't remember or haven't read them yet: I've got a bit over 800 unread msgs piled up from the last week! About 500 of them showed up since I awoke on Friday. The combo of python.org mail screwups and my ISP's mail screwups is making email life hell lately. and-misery-loves-company -ly y'rs - tim From tim.one at home.com Sat Feb 3 09:17:20 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 03:17:20 -0500 Subject: [Python-Dev] Perverse nesting bug Message-ID: SF bug reporting is still impossible. Little program: def f(): print "outer f.a is", f.a def f(): print "inner f.a is", f.a f.a = 666 f() f.a = 42 f() I'm not sure what I expected it to do, but most likely an UnboundLocalError (the local f hasn't been bound to yet at the time "print outer" executes). In reality it prints: outer f.a is and then blows up with a null-pointer dereference, here: case LOAD_DEREF: x = freevars[oparg]; w = PyCell_Get(x); Py_INCREF(w); /***** THIS IS THE GUY *****/ PUSH(w); break; Simpler program with same symptom: def f(): print "outer f.a is", f.a def f(): print "inner f.a is", f.a f() I *do* get an UnboundLocalError if the body of the inner "f" is changed to "pass": def f(): # this one works fine! i.e., UnboundLocalError print "outer f.a is", f.a def f(): pass f() You'll also be happy to know that this one prints 666 twice (as it should): def f(): def f(): print "inner f.a is", f.a f.a = 666 f() print "outer f.a is", f.a f.a = 42 f() python-gets-simpler-each-release -ly y'rs - tim From tim.one at home.com Sat Feb 3 09:48:01 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 03:48:01 -0500 Subject: [Python-Dev] Waaay off topic, but I felt I had to share this... In-Reply-To: <14971.12207.566272.185258@beluga.mojam.com> Message-ID: [Skip Montanaro, whose ship has finally come in!] > ... > Today I got an interesting "match" from Hotjobs.com's agent: > > ***Your Chicago Software agent yielded 1 jobs: > > 1. Vice President - Internet Technology > Playboy Enterprises Inc. > http://www.hotjobs.com/cgi-bin/job-show-mysql?J__PINDEX=J612497NR > ... > I wonder how committed they are to research? Go for it! All communication technologies are driven by the need for delivering porn (you surely don't think Ford Motor Company funded streaming media research <0.7 link>). This inspired me to look at http://www.playboy.com/. A very fancy, media-rich website, that appears to have been coded by hand in Notepad by monkeys -- but monkeys with an inate sense of Pythonic indentation: // this is browser detect thingy browser=0 if(document.images) { browser=1; } // this is status message function function stat(words) { if(browser==1) { top.window.status=words; } } It's possible that they're not beyond hope, although they seem to think that horizontal space is precious while vertical abundant. 
> Alas, they aren't looking for Python skills, ... Only because they haven't met you! Guido would surely love to see "Python Powered" on a soft-core porn portal . send-python-dev-the-cyber-club-password-after-you-start-ly y'rs - tim From mwh21 at cam.ac.uk Sat Feb 3 10:51:16 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 03 Feb 2001 09:51:16 +0000 Subject: [Python-Dev] Setup.local is getting zapped In-Reply-To: Skip Montanaro's message of "Fri, 2 Feb 2001 20:09:45 -0600 (CST)" References: <14971.26729.54529.333522@beluga.mojam.com> Message-ID: Skip Montanaro writes: > Modules/Setup.local is getting zapped by some aspect of the build process. > Not sure by what step, but mine had lines I added to it a few days ago, and > nothing now. It should be treated as Modules/Setup used to be: initialize > it if it's absent, don't touch it if it's there. > > The distclean target looks like the culprit: > > distclean: clobber > -rm -f Makefile Makefile.pre buildno config.status config.log \ > config.cache config.h setup.cfg Modules/config.c \ > Modules/Setup Modules/Setup.local Modules/Setup.config > > I've been using it a lot lately to build from scratch, what with the new > Makefile and setup.py. Since Setup.local is ostensibly something a user > would edit manually and would never have useful content in it as > distributed, I don't think even distclean should zap it. Eh? Surely "make distclean" is what you invoke before you tar up the src directory of a release, and so certainly should remove Setup.local. To do builds from scratch easily do things like: $ cd python/dist/src $ mkdir build $ cd build $ ../configure && make and then blow away the ./build directory as needed. This still tends to leave .pycs in Lib if you run make test, so I tend to use lndir to acheive a similar effect. Cheers, M. PS: Good sigmonster. -- 6. Symmetry is a complexity-reducing concept (co-routines include subroutines); seek it everywhere. -- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html From tim.one at home.com Sat Feb 3 11:44:35 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 05:44:35 -0500 Subject: [Python-Dev] insertdict slower? In-Reply-To: <14970.55362.332519.654243@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: [Jeremy Hylton] > I wanted to be sure that some other change to the dictionary code > didn't have the unintended consequence of slowing down insertdict. Have you looked at insertdict? Again, nothing has changed in it since 2.0, and it's a simple little function anyway. Here it is in its entirety: static void insertdict(register dictobject *mp, PyObject *key, long hash, PyObject *value) { PyObject *old_value; register dictentry *ep; ep = (mp->ma_lookup)(mp, key, hash); if (ep->me_value != NULL) { old_value = ep->me_value; ep->me_value = value; Py_DECREF(old_value); /* which **CAN** re-enter */ Py_DECREF(key); } else { if (ep->me_key == NULL) mp->ma_fill++; else Py_DECREF(ep->me_key); ep->me_key = key; ep->me_hash = hash; ep->me_value = value; mp->ma_used++; } } There's not even a loop. Unless Py_DECREF got a lot slower, there's nothing at all time-consuming in inserdict proper. > There is a real and measurable slowdown in MAL's DictCreation > microbenchmark, which needs to be explained somehow. insertdict > sounds more plausible than many other explanations. Given the code above, and that it hasn't changed at all, do you still think it's plausible? 
At this point I can only repeat my last msg: perhaps your profiler is mistakenly charging the time for this line: ep = (mp->ma_lookup)(mp, key, hash); to insertdict; perhaps the profiler is plain buggy; perhaps you didn't measure what you think you did; perhaps it's an accidental I-cache conflict; all *I* can be sure of is that it's not due to any change in insertdict . try-the-icache-trick-you-may-get-lucky-ly y'rs - tim From mal at lemburg.com Sat Feb 3 12:03:46 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Sat, 03 Feb 2001 12:03:46 +0100 Subject: [Python-Dev] insertdict slower? References: Message-ID: <3A7BE592.872AE4C1@lemburg.com> Tim Peters wrote: > > [Jeremy Hylton] > > I wanted to be sure that some other change to the dictionary code > > didn't have the unintended consequence of slowing down insertdict. > > Have you looked at insertdict? Again, nothing has changed in it since 2.0, > and it's a simple little function anyway. Here it is in its entirety: > > static void > insertdict(register dictobject *mp, PyObject *key, long hash, PyObject > *value) > { > PyObject *old_value; > register dictentry *ep; > ep = (mp->ma_lookup)(mp, key, hash); > if (ep->me_value != NULL) { > old_value = ep->me_value; > ep->me_value = value; > Py_DECREF(old_value); /* which **CAN** re-enter */ > Py_DECREF(key); > } > else { > if (ep->me_key == NULL) > mp->ma_fill++; > else > Py_DECREF(ep->me_key); > ep->me_key = key; > ep->me_hash = hash; > ep->me_value = value; > mp->ma_used++; > } > } > > There's not even a loop. Unless Py_DECREF got a lot slower, there's nothing > at all time-consuming in inserdict proper. > > > There is a real and measurable slowdown in MAL's DictCreation > > microbenchmark, which needs to be explained somehow. insertdict > > sounds more plausible than many other explanations. > > Given the code above, and that it hasn't changed at all, do you still think > it's plausible? At this point I can only repeat my last msg: perhaps your > profiler is mistakenly charging the time for this line: > > ep = (mp->ma_lookup)(mp, key, hash); > > to insertdict; perhaps the profiler is plain buggy; perhaps you didn't > measure what you think you did; perhaps it's an accidental I-cache conflict; > all *I* can be sure of is that it's not due to any change in insertdict > . It doesn't have anything to do with icache conflicts or other esoteric magic at dye-level. The reason for the slowdown is that the benchmark uses integers as keys and these have to go through the whole rich compare machinery to find their way into the dictionary. Please see my other post on the subject -- I think we need an optimized API especially for checking for equality. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mal at lemburg.com Sat Feb 3 12:13:43 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Sat, 03 Feb 2001 12:13:43 +0100 Subject: [Python-Dev] Re: Sets: elt in dict, lst.include References: Message-ID: <3A7BE7E7.5AA90731@lemburg.com> Tim Peters wrote: > > [MAL] > > ... > > Since iterators can define the order in which a data structure is > > traversed, this would also do away with the second (supposed) > > problem. > > Except we don't need iterators to do that. If anyone thought it was > important, they could change the existing PyDict_Next to force an ordering, > and then everything building on that would inherit it. 
So while I'm in > favor of better iteration schemes, I'm not in favor of overselling them (on > grounds that aren't unique to them). I'm just trying to sell iterators to bare us the pain of overloading the for-loop syntax just to get faster iteration over dictionaries. The idea is simple: put all the lookup, order and item building code into the iterator, have many of them, one for each flavour of values, keys, items and honeyloops, and then optimize the for-loop/iterator interaction to get the best performance out of them. There's really not much use in adding *one* special case to for-loops when there are a gazillion different needs to iterate over data structures, files, socket, ports, coffee cups, etc. > >> Sorry, but immutability has nothing to do with thread safety ... > > > Who said that an exception is raised ? > > I did . > > > The method I posted on the mutability thread allows querying > > the current state just like you would query the availability > > of a resource. > > This? > > .mutable([flag]) -> integer > > If called without argument, returns 1/0 depending on > whether the object is mutable or not. When called with a > flag argument, sets the mutable state of the object to > the value indicated by flag and returns the previous flag > state. > > If I do: > > if object.mutable(): > object.mutate() > > in a threaded world, the certain (but erratic) outcome is that sometimes it > blows up: there's no guarantee that another thread doesn't sneak in and > *change* the mutability between the time object.mutable() returns 1 and > object.mutate() acts on a bad assumption. I know. That's why you would do this: lock = [] # we use the mutable state as lock indicator; initial state is mutable # try to acquire lock: while 1: prevstate = lock.mutable(0) if prevstate == 0: # was already locked continue elif prevstate == 1: # we acquired the lock break # release lock lock.mutable(1) > Same thing for: > > if resources.num_printers_available() > 0: > action_that_blows_up_if_no_printers_are_available > > in a threaded world. It's possible to build a thread-safe resource > acquisition protocol in either case, but that's really got nothing to do > with immutability or iterators (marking a thing immutable doesn't do any > good there unless you *also* build a protocol on top of it for communicating > state changes, blocking until one occurs, notifications with optional > timeouts, etc -- just doing object.mutable(1) is a threaded disaster in the > absence of a higher-level protocol guaranteeing that this won't go changing > the mutability state in the middle of some other thread's belief that it's > got the thing frozen; likewise for object.mutable(0) not stepping on some > other thread's belief that it's got permission to mutate). > > .mutable(flag) is *fine* for what it does, it's simply got nothing to do > with threads. Thread safety could *build* on it via coordinated use of a > threading.Sempahore (or moral equivalent), though. Ok... :) -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From tim.one at home.com Sat Feb 3 12:57:02 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 06:57:02 -0500 Subject: [Python-Dev] insertdict slower? In-Reply-To: <3A7BE592.872AE4C1@lemburg.com> Message-ID: [MAL] > It doesn't have anything to do with icache conflicts or > other esoteric magic at dye-level. 
The reason for the slowdown > is that the benchmark uses integers as keys and these have to > go through the whole rich compare machinery to find their way into > the dictionary. But insertdict doesn't do any compares at all (besides C pointer comparison to NULL). There's more than one mystery here. The one I was addressing is why the profiler said *insertdict* got slower. Jeremy's profile did not give any reason to suspect that lookdict got slower (which is where the compares are); to the contrary, it said lookdict got 4.5% *faster* in 2.1. > Please see my other post on the subject -- I think we need > an optimized API especially for checking for equality. Quite possibly, but if Jeremy remains keen to help with investigating timing puzzles, he needs to figure out why his profiling approach is pointing him at the wrong functions. That has long-term value far beyond patching the regression du jour. it's-not-either/or-it's-both-ly y'rs -tim From mal at lemburg.com Sat Feb 3 13:23:54 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Sat, 03 Feb 2001 13:23:54 +0100 Subject: [Python-Dev] insertdict slower? References: Message-ID: <3A7BF85A.FDCC7854@lemburg.com> Tim Peters wrote: > > [MAL] > > It doesn't have anything to do with icache conflicts or > > other esoteric magic at dye-level. The reason for the slowdown > > is that the benchmark uses integers as keys and these have to > > go through the whole rich compare machinery to find their way into > > the dictionary. > > But insertdict doesn't do any compares at all (besides C pointer comparison > to NULL). There's more than one mystery here. The one I was addressing is > why the profiler said *insertdict* got slower. Jeremy's profile did not > give any reason to suspect that lookdict got slower (which is where the > compares are); to the contrary, it said lookdict got 4.5% *faster* in 2.1. > > > Please see my other post on the subject -- I think we need > > an optimized API especially for checking for equality. > > Quite possibly, but if Jeremy remains keen to help with investigating timing > puzzles, he needs to figure out why his profiling approach is pointing him > at the wrong functions. That has long-term value far beyond patching the > regression du jour. > > it's-not-either/or-it's-both-ly y'rs -tim Ok, let's agree on "it's both" :) I was referring to the slowdown which shows up in the DictCreation benchmark. It uses bundles of these operations to check for dictionary creation speed: d1 = {} d2 = {} d3 = {} d4 = {} d5 = {} d1 = {1:2,3:4,5:6} d2 = {2:3,4:5,6:7} d3 = {3:4,5:6,7:8} d4 = {4:5,6:7,8:9} d5 = {6:7,8:9,10:11} Note that the number of inserted items is 3; the minimum size of the allocated table is 4. Apart from the initial allocation of the dictionary table, no further resizes are done. One of the micro-optimizations which I used in my patched Python version deals with these rather common situations: small dictionaries are inlined (up to a certain size) in the object itself rather than stored in a separatly malloced table. I found that a limit of 8 slots gives you the best ratio between performance boost and memory overhead. This is another area where Valdimir's pymalloc could help, since it favours small memory chunks. 
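For anyone who wants to poke at this locally, the pattern above is easy to approximate as a stand-alone script. This is only a rough stand-in for the DictCreation test, not pybench itself: the round count is arbitrary, wall-clock time.time() is used instead of pybench's calibrated timing loop, and it measures nothing but the creation of empty and three-item dicts -- exactly the small-table case the inlining idea targets.

    import time

    def dict_creation(rounds):
        start = time.time()
        for _ in range(rounds):
            d1 = {}; d2 = {}; d3 = {}; d4 = {}; d5 = {}
            d1 = {1: 2, 3: 4, 5: 6}
            d2 = {2: 3, 4: 5, 6: 7}
            d3 = {3: 4, 5: 6, 7: 8}
            d4 = {4: 5, 6: 7, 8: 9}
            d5 = {6: 7, 8: 9, 10: 11}
        return time.time() - start

    if __name__ == "__main__":
        rounds = 200000   # arbitrary; pybench calibrates this instead
        print("DictCreation: %.3f seconds for %d rounds"
              % (dict_creation(rounds), rounds))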
-- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From tim.one at home.com Sat Feb 3 14:15:17 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 08:15:17 -0500 Subject: [Python-Dev] insertdict slower? In-Reply-To: Message-ID: [Tim] > ... to the contrary, it said lookdict got 4.5% *faster* in 2.1. Ack, I was reading the wrong column. It actually said that lookdict went from 0.48 to 0.49 seconds, while insertdict went from 0.20 to 0.26. http://mail.python.org/pipermail/python-dev/2001-February/012428.html Whatever, the profile isn't pointing at things that make sense, and is pointing at things that don't. Then again, why anyone would believe any output from a computer program is beyond me . needs-sleep-ly y'rs - tim PS: Sorry to say it, but rich comparisons have nothing to do with this either! Run your dict creation test under a debugger and watch it -- the rich compares never get called. The basic reason is that hash(i) == i for all Python ints i (except for -1, but you're not using that). So the hash codes in your dict creation test are never equal. But there's never a reason to call a "real compare" unless you hit a case where the hash codes *are* equal. The latter never happens, so neither does the former. The insert either finds an empty slot at once (& so returns immediately), or collides. But in the latter case, as soon as it sees that ep->me_hash != hash, it just moves on the next slot in the probe sequence; and so until it does find an empty slot. From mal at lemburg.com Sat Feb 3 14:47:20 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Sat, 03 Feb 2001 14:47:20 +0100 Subject: [Python-Dev] insertdict slower? References: Message-ID: <3A7C0BE8.A0109F5D@lemburg.com> Tim Peters wrote: > > [Tim] > > ... to the contrary, it said lookdict got 4.5% *faster* in 2.1. > > Ack, I was reading the wrong column. It actually said that lookdict went > from 0.48 to 0.49 seconds, while insertdict went from 0.20 to 0.26. > > http://mail.python.org/pipermail/python-dev/2001-February/012428.html > > Whatever, the profile isn't pointing at things that make sense, and is > pointing at things that don't. > > Then again, why anyone would believe any output from a computer program is > beyond me . Looks like Jeremy's machine has a problem or this is the result of different compiler optimizations. On my machine using the same compiler and optimization settings I get the following figure for DictCreation (2.1a1 vs. 2.0): DictCreation: 1869.35 ms 12.46 us +8.77% That's below noise level (+/-10%). > needs-sleep-ly y'rs - tim > > PS: Sorry to say it, but rich comparisons have nothing to do with this > either! Run your dict creation test under a debugger and watch it -- the > rich compares never get called. The basic reason is that hash(i) == i for > all Python ints i (except for -1, but you're not using that). So the hash > codes in your dict creation test are never equal. But there's never a > reason to call a "real compare" unless you hit a case where the hash codes > *are* equal. The latter never happens, so neither does the former. The > insert either finds an empty slot at once (& so returns immediately), or > collides. But in the latter case, as soon as it sees that ep->me_hash != > hash, it just moves on the next slot in the probe sequence; and so until it > does find an empty slot. 
Hmm, seemed like a natural explanation from looking at the code. So now we have two different explanations for a non-existing problem ;-) I'll rerun the benchmark for 2.1a2 and let you know of the findings. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From skip at mojam.com Sat Feb 3 16:04:08 2001 From: skip at mojam.com (Skip Montanaro) Date: Sat, 3 Feb 2001 09:04:08 -0600 (CST) Subject: [Python-Dev] Setup.local is getting zapped In-Reply-To: References: <14971.26729.54529.333522@beluga.mojam.com> Message-ID: <14972.7656.829356.566021@beluga.mojam.com> Michael> Eh? Surely "make distclean" is what you invoke before you tar Michael> up the src directory of a release, and so certainly should Michael> remove Setup.local. Yeah, I realize that now. I should probably have been executing make clobber. Michael> This still tends to leave .pycs in Lib if you run make test, so Michael> I tend to use lndir to acheive a similar effect. Make distclean doesn't remove the pyc's or Emacs backup files. Those omissions seem to be a bug. Makefile-meister Neal? Skip From barry at digicool.com Sat Feb 3 16:50:33 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Sat, 3 Feb 2001 10:50:33 -0500 Subject: [Python-Dev] Case sensitive import References: <0G8500859PMIQL@mta5.snfc21.pbi.net> Message-ID: <14972.10441.479316.919937@anthem.wooz.org> >>>>> "TP" == Tim Peters writes: TP> Don't thank me, thank Bill Gates for creating a wonderful TP> operating system where I get to ignore *all* the TP> 57-varieties-of-Unix build headaches. And thank goodness for Un*x, where I get to ignore all the 69 different varieties of The One True Operating System -- all Windows OSes are created equal, right? :) TP> BTW, I didn't grok the CVS argument. You don't change the TP> name of the directory, you change the name of the executable. TP> And the obvious name is obvious to me: python.exe . Even a Un*x dweeb like myself would agree, if you have to change one of them, change the executable. I see no reason why on Un*x the build procedure couldn't drop a symlink from python.exe to python for backwards compatibility and convenience. -Barry From barry at digicool.com Sat Feb 3 16:55:38 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Sat, 3 Feb 2001 10:55:38 -0500 Subject: [Python-Dev] Case sensitive import. References: Message-ID: <14972.10746.34425.26722@anthem.wooz.org> >>>>> "TP" == Tim Peters writes: TP> So a retroactive -1 on this last-second patch -- and a waaaaay TP> retroactive -1 on Python's behavior on Windows too. So, let's tease out what the Right solution would be, and then see how close or if we can get there for 2.1. I've no clue what behavior Mac and Windows users would /like/ to see -- what would be most natural for them? OTOH, I like the Un*x behavior and I think I'd want to see platforms like Cygwin and MacOSX-on-non-HFS+ get as close to that as possible. Is it better to have uniform behavior across all platforms (modulo places like some Windows network fs's where that may not be possible)? Should Python's import semantics be identical across all platforms? OTOH, this is where the rubber meets the road so to speak, so some incompatibilities may be impossible to avoid. And what about Jython? 
-Barry From Jason.Tishler at dothill.com Sat Feb 3 17:02:58 2001 From: Jason.Tishler at dothill.com (Jason Tishler) Date: Sat, 3 Feb 2001 11:02:58 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: <14971.1284.474393.800832@anthem.wooz.org>; from barry@digicool.com on Fri, Feb 02, 2001 at 02:05:40PM -0500 References: <14970.59755.154176.579551@w221.z064000254.bwi-md.dsl.cnc.net> <14970.64979.584372.4671@anthem.wooz.org> <14971.572.369273.721571@anthem.wooz.org> <14971.1284.474393.800832@anthem.wooz.org> Message-ID: <20010203110258.N1800@dothill.com> Barry, On Fri, Feb 02, 2001 at 02:05:40PM -0500, Barry A. Warsaw wrote: > Patch passes regr test and import getpass on Linux, so I'm prepared to > commit it for 2.1a2. Y'all are going to have to stress test it on > other platforms. This patch works properly under Cygwin too. The regression tests yield the same results as before and "import getpass" now behaves the same as on UNIX. Thanks, Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler at dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From fredrik at effbot.org Sat Feb 3 17:07:24 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Sat, 3 Feb 2001 17:07:24 +0100 Subject: [Python-Dev] Case sensitive import References: <0G8500859PMIQL@mta5.snfc21.pbi.net> <14972.10441.479316.919937@anthem.wooz.org> Message-ID: <001201c08dfb$668f9f10$e46940d5@hagrid> barry wrote: > Even a Un*x dweeb like myself would agree, if you have to change one > of them, change the executable. I see no reason why on Un*x the build > procedure couldn't drop a symlink from python.exe to python for > backwards compatibility and convenience. real Unix users will probably not care, but what do you think the Linux kiddies will think about Python when they find evil-empire- style executables in the build directory? Cheers /F From nas at arctrix.com Sat Feb 3 18:21:24 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Sat, 3 Feb 2001 09:21:24 -0800 Subject: [Python-Dev] Setup.local is getting zapped In-Reply-To: <14972.7656.829356.566021@beluga.mojam.com>; from skip@mojam.com on Sat, Feb 03, 2001 at 09:04:08AM -0600 References: <14971.26729.54529.333522@beluga.mojam.com> <14972.7656.829356.566021@beluga.mojam.com> Message-ID: <20010203092124.A30977@glacier.fnational.com> On Sat, Feb 03, 2001 at 09:04:08AM -0600, Skip Montanaro wrote: > Make distclean doesn't remove the pyc's or Emacs backup files. Those > omissions seem to be a bug. Makefile-meister Neal? Yup, its a bug. Here is the story now: clean all object files and compilied .py files clobber everything clean does plus executables, libraries, and tag files distclean: everything clobber does plus makefiles, generated .c files, configure files, Setup files, and lots of other crud that make did not actually generate (core, *~, *.orig, etc). I'm not sure this matches what people expect these targets to do. I expect that "make clean" will be less used now that the makefile usually does the right thing. I removed Makefile.in, Demo/Makefile, Grammar/Makefile.in, Include/Makefile, Lib/Makefile, Misc/Makefile, Modules/Makefile.pre.in, Objects/Makefile.in, Parser/Makefile.in, and Python/Makefile.in as they are no longer used. 
Neil From tim.one at home.com Sat Feb 3 21:15:31 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 15:15:31 -0500 Subject: [Python-Dev] Case sensitive import In-Reply-To: <14972.10441.479316.919937@anthem.wooz.org> Message-ID: [Barry A. Warsaw] > And thank goodness for Un*x, where I get to ignore all the 69 > different varieties of The One True Operating System -- all Windows > OSes are created equal, right? :) Yes, and every one of them perfect, albeit each in its own unique way . I wouldn't wish it on anyone, but, in the end, even you would have rather done the Win64 port from scratch than try to close the HPUX bugs still open. Heh heh. > Even a Un*x dweeb like myself would agree, if you have to change one > of them, change the executable. I see no reason why on Un*x the build > procedure couldn't drop a symlink from python.exe to python for > backwards compatibility and convenience. Of course I wasn't serious about that. To judge from a decade of Unix-newbie postings to c.l.py, we should rename the executable there to phyton. That's what they think the language is named anyway. always-eager-to-aid-my-unixoid-brethren-ly y'rs - tim From bckfnn at worldonline.dk Sat Feb 3 21:15:38 2001 From: bckfnn at worldonline.dk (Finn Bock) Date: Sat, 03 Feb 2001 20:15:38 GMT Subject: [Python-Dev] Case sensitive import. In-Reply-To: <14972.10746.34425.26722@anthem.wooz.org> References: <14972.10746.34425.26722@anthem.wooz.org> Message-ID: <3a7c66be.37678038@smtp.worldonline.dk> [Barry] >So, let's tease out what the Right solution would be, and then see how >close or if we can get there for 2.1. I've no clue what behavior Mac >and Windows users would /like/ to see -- what would be most natural >for them? OTOH, I like the Un*x behavior and I think I'd want to see >platforms like Cygwin and MacOSX-on-non-HFS+ get as close to that as >possible. > >Is it better to have uniform behavior across all platforms (modulo >places like some Windows network fs's where that may not be possible)? >Should Python's import semantics be identical across all platforms? >OTOH, this is where the rubber meets the road so to speak, so some >incompatibilities may be impossible to avoid. > >And what about Jython? Jython only does a File().exists() (which is similar to a stat()). So on WinNT, jython is behaving wrongly: Jython 2.0 on java1.3.0 (JIT: null) Type "copyright", "credits" or "license" for more information. >>> import stringio >>> stringio.__file__ 'I:\\java\\Jython.CVS\\Lib\\stringio.py' >>> Yet I can't remember any bug reports where this have caused problems. regards, finn From hughett at mercur.uphs.upenn.edu Sat Feb 3 21:40:22 2001 From: hughett at mercur.uphs.upenn.edu (Paul Hughett) Date: Sat, 3 Feb 2001 15:40:22 -0500 Subject: [Python-Dev] Setup.local is getting zapped In-Reply-To: <20010203092124.A30977@glacier.fnational.com> (message from Neil Schemenauer on Sat, 3 Feb 2001 09:21:24 -0800) References: <14971.26729.54529.333522@beluga.mojam.com> <14972.7656.829356.566021@beluga.mojam.com> <20010203092124.A30977@glacier.fnational.com> Message-ID: <200102032040.PAA04977@mercur.uphs.upenn.edu> Neil Schemenauer says: > Here is the story now: > clean > all object files and compilied .py files > clobber > everything clean does plus executables, libraries, and > tag files > distclean: > everything clobber does plus makefiles, generated .c > files, configure files, Setup files, and lots of other > crud that make did not actually generate (core, *~, > *.orig, etc). 
I usually use two or three targets, as follows: clean Delete all the objects, executables, libraries, tag files, etc that are normally generated by make all. Don't touch the Makefile, etc. that are generated by ./configure. This is more or less Neil's clean and clobber taken together; I've never had much need to delete object files but not executables. distclean Delete all the files that didn't come with the distribution tarball; that is, all the files that make clean removes, plus the Makefile, config.cache, etc. However, try not to clobber random files and notes made by the user and not closely related to the package. realclean Delete all the files that could be regenerated from other files, even if they're normally included in the distribution tarball; e.g configure, the PDF file containing the installation instructions, etc. This target is unnecessary in many packages. I'm not going to try to argue that this is the only Right Way(tm), but it has worked well for me, and gives a reasonably clear criterion for deciding which file should get deleted at each level. Paul Hughett From fredrik at pythonware.com Sat Feb 3 21:45:55 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Sat, 3 Feb 2001 21:45:55 +0100 Subject: [Python-Dev] Case sensitive import. References: <14972.10746.34425.26722@anthem.wooz.org> <3a7c66be.37678038@smtp.worldonline.dk> Message-ID: <00ba01c08e22$4f48b090$e46940d5@hagrid> finn wrote: > Jython only does a File().exists() (which is similar to a stat()). So on > WinNT, jython is behaving wrongly: > > Jython 2.0 on java1.3.0 (JIT: null) > Type "copyright", "credits" or "license" for more information. > >>> import stringio > >>> stringio.__file__ > 'I:\\java\\Jython.CVS\\Lib\\stringio.py' > >>> > > Yet I can't remember any bug reports where this have caused problems. maybe that because it's easier for a Jython programmer to test his new library under CPython before releasing it to the world, than it is for a CPython programmer on Windows to test his library on a Unix box... yes-i've-been-bitten-by-this--ack-in-the-old-days-ly yrs /F From fredrik at effbot.org Sat Feb 3 21:55:05 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Sat, 3 Feb 2001 21:55:05 +0100 Subject: [Python-Dev] Setup.local is getting zapped References: <14971.26729.54529.333522@beluga.mojam.com> <14972.7656.829356.566021@beluga.mojam.com> <20010203092124.A30977@glacier.fnational.com> <200102032040.PAA04977@mercur.uphs.upenn.edu> Message-ID: <00c401c08e23$96b44510$e46940d5@hagrid> > Neil wrote: > Here is the story now: why not just keep the old behaviour? > clean > all object files and compilied .py files was: remove all junk, such as core files, emacs backup files, patch remains, pyc/pyo files, etc. > clobber > everything clean does plus executables, libraries, and > tag files was: clean plus executables, libraries, object files, and config stuff. use before reconfiguring/rebuilding. > > distclean: > > everything clobber does plus makefiles, generated .c > > files, configure files, Setup files, and lots of other > > crud that make did not actually generate (core, *~, > > *.orig, etc). was: clobber plus everything that shouldn't be in a distribution archive. use before tarring/zipping things up for distribution. from your description, the main difference seems to be that you've moved the "crud" part from "clean" to "distclean"... 
Cheers /F From tim.one at home.com Sat Feb 3 22:08:08 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 16:08:08 -0500 Subject: [Python-Dev] insertdict slower? In-Reply-To: <3A7C0BE8.A0109F5D@lemburg.com> Message-ID: [MAL] > Looks like Jeremy's machine has a problem or this is the result > of different compiler optimizations. Are you using an AMD chip? They have different cache behavior than the Pentium I expect Jeremy is using. Different flavors of Pentium also have different cache behavior. If the slowdown his box reports in insertdict is real (which I don't know), cache effects are the most likely cause (given that the code has not changed at all). > On my machine using the same compiler and optimization settings > I get the following figure for DictCreation (2.1a1 vs. 2.0): > > DictCreation: 1869.35 ms 12.46 us +8.77% > > That's below noise level (+/-10%). Jeremy saw "about 15%". So maybe that's just *loud* noise . noise-should-be-measured-in-decibels-ly y'rs - tim From tim.one at home.com Sat Feb 3 22:08:18 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 16:08:18 -0500 Subject: [Python-Dev] Re: Sets: elt in dict, lst.include In-Reply-To: <3A7BE7E7.5AA90731@lemburg.com> Message-ID: [MAL] > I'm just trying to sell iterators to bare us the pain of overloading > the for-loop syntax just to get faster iteration over dictionaries. > > The idea is simple: put all the lookup, order and item building > code into the iterator, have many of them, one for each flavour > of values, keys, items and honeyloops, and then optimize the > for-loop/iterator interaction to get the best performance out > of them. > > There's really not much use in adding *one* special case to > for-loops when there are a gazillion different needs to iterate > over data structures, files, socket, ports, coffee cups, etc. They're simply distinct issues to me. Whether people want special syntax for iterating over dicts is (to me) independent of how the iteration protocol works. Dislike of the former should probably be stabbed into Ping's heart . > I know. That's why you would do this: > > lock = [] > # we use the mutable state as lock indicator; initial state is mutable > > # try to acquire lock: > while 1: > prevstate = lock.mutable(0) > if prevstate == 0: > # was already locked > continue > elif prevstate == 1: > # we acquired the lock > break > > # release lock > lock.mutable(1) OK, in the lingo of the field, you're using .mutable(0) as a test-and-clear operation, and building a spin lock on top of it in "the usual" way. In that case the acquire code can be much simpler: while not lock.mutable(0): pass Same thing. I agree then that has basic lock semantics (relying indirectly on the global interpreter lock to make .mutable() calls atomic). But Python-level spin locks are thoroughly impractical: a waiting thread T will use 100% of its timeslice at 100% CPU utilization waiting for the lock, with no chance of succeeding (the global interpreter lock blocks all other threads while T is spinning, so no other thread *can* release the lock for the duration -- the spinning is futile). The performance characteristics would be horrid. So while "a lock", it's not a *useful* lock for threading. You got something against Python's locks ? 
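To make the contrast concrete, here is a small sketch -- not code from this thread. The busy-wait acquire is left as comments because lock.mutable() is only a proposal, not an existing method; the second half is the standard threading.Lock being pointed at, whose acquire() blocks the waiting thread instead of letting it spin under the interpreter lock.

    import threading

    # Proposed busy-wait acquire, comments only (lock.mutable() doesn't exist):
    #
    #     while not lock.mutable(0):   # spin until the test-and-clear succeeds
    #         pass
    #     ...critical section...
    #     lock.mutable(1)              # release
    #
    # The stdlib alternative: acquire() suspends the waiting thread until the
    # holder releases, so no timeslice is burned polling.
    lock = threading.Lock()
    shared = []

    def worker(count):
        for _ in range(count):
            lock.acquire()
            try:
                # read-then-append must be atomic; without the lock two
                # threads could read the same length and append duplicates
                shared.append(len(shared))
            finally:
                lock.release()

    threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    assert shared == list(range(4000))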
every-proposal-gets-hijacked-to-some-other-end-ly y'rs - tim From guido at digicool.com Sat Feb 3 22:10:56 2001 From: guido at digicool.com (Guido van Rossum) Date: Sat, 03 Feb 2001 16:10:56 -0500 Subject: [Python-Dev] Setup.local is getting zapped In-Reply-To: Your message of "Sat, 03 Feb 2001 21:55:05 +0100." <00c401c08e23$96b44510$e46940d5@hagrid> References: <14971.26729.54529.333522@beluga.mojam.com> <14972.7656.829356.566021@beluga.mojam.com> <20010203092124.A30977@glacier.fnational.com> <200102032040.PAA04977@mercur.uphs.upenn.edu> <00c401c08e23$96b44510$e46940d5@hagrid> Message-ID: <200102032110.QAA13074@cj20424-a.reston1.va.home.com> > > Neil wrote: > > > Here is the story now: Effbot wrote: > why not just keep the old behaviour? Agreed. Unless there's a GNU guideline somewhere. > > clean > > all object files and compilied .py files > > was: remove all junk, such as core files, emacs backup files, > patch remains, pyc/pyo files, etc. This also always removed the .o files. > > clobber > > everything clean does plus executables, libraries, and > > tag files > > was: clean plus executables, libraries, object files, and config > stuff. use before reconfiguring/rebuilding. > > > > distclean: > > > everything clobber does plus makefiles, generated .c > > > files, configure files, Setup files, and lots of other > > > crud that make did not actually generate (core, *~, > > > *.orig, etc). > > was: clobber plus everything that shouldn't be in a distribution > archive. use before tarring/zipping things up for distribution. > > from your description, the main difference seems to be that you've > moved the "crud" part from "clean" to "distclean"... --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Sat Feb 3 23:24:51 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 17:24:51 -0500 Subject: [Python-Dev] creating __all__ in extension modules In-Reply-To: <14970.60750.570192.452062@beluga.mojam.com> Message-ID: > Fredrik> what's the point? doesn't from-import already do > Fredrik> exactly that on C extensions? [Skip Montanaro] > Consider os. At one point it does "from posix import *". Okay, which > symbols now in its local namespace came from posix and which from its > own devices? It's a lot easier to do > > from posix import __all__ as _all > __all__.extend(_all) > del _all > > than to muck about importing posix, looping over its dict, then > incorporating what it finds. > > It also makes things a bit more consistent for introspective tools. I'm afraid I find it hard to believe people will *keep* C-module __all__ lists in synch with the code as the years go by. If we're going to do this, how about adding code to Py_InitModule4 that sucks the non-underscore names out of its PyMethodDef argument and automagically builds an __all__ attr? Then nothing ever needs to be fiddled by hand for C modules. but-unsure-i-like-__all__-at-all-ly y'rs - tim From fdrake at acm.org Sat Feb 3 23:22:00 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Sat, 3 Feb 2001 17:22:00 -0500 (EST) Subject: [Python-Dev] creating __all__ in extension modules In-Reply-To: References: <14970.60750.570192.452062@beluga.mojam.com> Message-ID: <14972.33928.540016.339352@cj42289-a.reston1.va.home.com> Tim Peters writes: > I'm afraid I find it hard to believe people will *keep* C-module __all__ > lists in synch with the code as the years go by. 
If we're going to do this, > how about adding code to Py_InitModule4 that sucks the non-underscore names > out of its PyMethodDef argument and automagically builds an __all__ attr? > Then nothing ever needs to be fiddled by hand for C modules. I don't think adding __all__ to C modules makes sense. If you want the equivalent for a module that doesn't have an __all__, you can compute it easily enough. Adding it when it isn't useful is a maintenance problem that can be avoided easily enough. -Fred -- Fred L. Drake, Jr. PythonLabs at Digital Creations From skip at mojam.com Sun Feb 4 00:01:01 2001 From: skip at mojam.com (Skip Montanaro) Date: Sat, 3 Feb 2001 17:01:01 -0600 (CST) Subject: [Python-Dev] creating __all__ in extension modules In-Reply-To: References: <14970.60750.570192.452062@beluga.mojam.com> Message-ID: <14972.36269.845348.280744@beluga.mojam.com> Tim> I'm afraid I find it hard to believe people will *keep* C-module Tim> __all__ lists in synch with the code as the years go by. If we're Tim> going to do this, how about adding code to Py_InitModule4 that Tim> sucks the non-underscore names out of its PyMethodDef argument and Tim> automagically builds an __all__ attr? Then nothing ever needs to Tim> be fiddled by hand for C modules. The way it works now is that the module author inserts a call to _PyModuleCreateAllList at or near the end of the module's init func /* initialize module's __all__ list */ _PyModule_CreateAllList(d); that initializes and populates __all__ based on the keys in the module's dict. Unlike having to manually maintain __all__, I think this solution is fairly un-onerous (one-time change). Again, my assumption is that all non-underscore prefixed symbols in a module's dict will be exported. If this isn't true, the author can simply delete any elements from __all__ after the call to _PyModule_CreateAllList. This functionality can't be subsumed by Py_InitModule4 because the author is allowed to insert values into the module dict after that call (see posixmodule.c for a significant example of this). _PyModule_CreateAllList is currently defined in modsupport.c (not checked in yet) as /* helper function to create __all__ from an extension module's dict */ int _PyModule_CreateAllList(PyObject *d) { PyObject *v, *k, *s; unsigned int i; int res; v = PyList_New(0); if (v == NULL) return -1; res = 0; if (!PyDict_SetItemString(d, "__all__", v)) { k = PyDict_Keys(d); if (k == NULL) res = -1; else { for (i = 0; res == 0 && i < PyObject_Length(k); i++) { s = PySequence_GetItem(k, i); if (s == NULL) res = -1; else { if (PyString_AsString(s)[0] != '_') if (PyList_Append(v, s)) res = -1; Py_DECREF(s); } } } } Py_DECREF(v); return res; } I don't know (nor much care - you guys decide) if it's named with or without a leading underscore. I view it as a more-or-less internal function, but one that many C extension modules will call (guess that might make it not internal). I haven't written a doc blurb for the API manual yet. Skip From skip at mojam.com Sun Feb 4 00:03:20 2001 From: skip at mojam.com (Skip Montanaro) Date: Sat, 3 Feb 2001 17:03:20 -0600 (CST) Subject: [Python-Dev] creating __all__ in extension modules In-Reply-To: <14972.33928.540016.339352@cj42289-a.reston1.va.home.com> References: <14970.60750.570192.452062@beluga.mojam.com> <14972.33928.540016.339352@cj42289-a.reston1.va.home.com> Message-ID: <14972.36408.800070.656541@beluga.mojam.com> Fred> I don't think adding __all__ to C modules makes sense. 
If you Fred> want the equivalent for a module that doesn't have an __all__, you Fred> can compute it easily enough. Adding it when it isn't useful is a Fred> maintenance problem that can be avoided easily enough. I thought I answered this question already when Fredrik asked it. In os.py, to build its __all__ list based upon the myriad different sets of symbols it might have after it's fancy footwork importing from various os-dependent modules, I think it's easiest to rely on those modules telling os what it should export. Skip From barry at digicool.com Sun Feb 4 00:43:37 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Sat, 3 Feb 2001 18:43:37 -0500 Subject: [Python-Dev] Waaay off topic, but I felt I had to share this... References: <14971.12207.566272.185258@beluga.mojam.com> Message-ID: <14972.38825.231522.939983@anthem.wooz.org> >>>>> "TP" == Tim Peters writes: TP> This inspired me to look at http://www.playboy.com/. A very TP> fancy, media-rich website, that appears to have been coded by TP> hand in Notepad by monkeys -- but monkeys with an inate sense TP> of Pythonic indentation: There goes Tim, browsing the Playboy site just for the JavaScript. Honest. -Barry From thomas at xs4all.net Sun Feb 4 01:42:09 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Sun, 4 Feb 2001 01:42:09 +0100 Subject: [Python-Dev] creating __all__ in extension modules In-Reply-To: <14972.36269.845348.280744@beluga.mojam.com>; from skip@mojam.com on Sat, Feb 03, 2001 at 05:01:01PM -0600 References: <14970.60750.570192.452062@beluga.mojam.com> <14972.36269.845348.280744@beluga.mojam.com> Message-ID: <20010204014209.Y962@xs4all.nl> On Sat, Feb 03, 2001 at 05:01:01PM -0600, Skip Montanaro wrote: > Tim> I'm afraid I find it hard to believe people will *keep* C-module > Tim> __all__ lists in synch with the code as the years go by. If we're > Tim> going to do this, how about adding code to Py_InitModule4 that > Tim> sucks the non-underscore names out of its PyMethodDef argument and > Tim> automagically builds an __all__ attr? Then nothing ever needs to > Tim> be fiddled by hand for C modules. > The way it works now is that the module author inserts a call to > _PyModuleCreateAllList at or near the end of the module's init func > /* initialize module's __all__ list */ > _PyModule_CreateAllList(d); Regardless of the use of this __all__ for C modules, this function has the wrong name. If it's intended a real part of the API (and it should be, if you want modules to use it) it shouldn't have a leading underscore. As for the debate on the usefulness, I don't care much either way -- I don't write or maintain that many C modules (exactly 0, in fact :-) and though I see the logic in placing the responsibility with the C module writers, I also know I greatly prefer writing and maintaining Python modules than C modules. Placing the responsibility in the (Python) module doing the 'from .. import *' sounds like a good enough idea to me. I'm also not sure what other examples of its use are out there, other than os.py. -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From thomas at xs4all.net Sun Feb 4 01:44:09 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Sun, 4 Feb 2001 01:44:09 +0100 Subject: [Python-Dev] Waaay off topic, but I felt I had to share this... 
In-Reply-To: <14972.38825.231522.939983@anthem.wooz.org>; from barry@digicool.com on Sat, Feb 03, 2001 at 06:43:37PM -0500 References: <14971.12207.566272.185258@beluga.mojam.com> <14972.38825.231522.939983@anthem.wooz.org> Message-ID: <20010204014409.Z962@xs4all.nl> On Sat, Feb 03, 2001 at 06:43:37PM -0500, Barry A. Warsaw wrote: > >>>>> "TP" == Tim Peters writes: > TP> This inspired me to look at http://www.playboy.com/. A very > TP> fancy, media-rich website, that appears to have been coded by > TP> hand in Notepad by monkeys -- but monkeys with an inate sense > TP> of Pythonic indentation: > There goes Tim, browsing the Playboy site just for the JavaScript. Honest. Well, the sickest part is how I read Skip's post, and thought "Oh god, Tim is going to reply to this, I'm sure of it". And I was right :) Lets-see-if-he-gets-the-hidden-meaning-of-*this*-post-ly y'rs, -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From thomas at xs4all.net Sun Feb 4 03:01:13 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Sun, 4 Feb 2001 03:01:13 +0100 Subject: [Python-Dev] Nested scopes. Message-ID: <20010204030113.A962@xs4all.nl> So I've been reading python-list and pondering the nested scope issue. I even read the PEP (traded Sleep(tm) for it :). And I'm thinking we can fix the entire nested-scopes-in-combination-with-local-namespace-modifying-stmts issue by doing a last-ditch effort when the codeblock creates a nested scope _and_ uses 'from-import *' or 'exec'. Looking at the noise on python-list I think we should really try to do that. Making 'from foo import *' and 'exec' work in the absense of nested scopes might not be enough, given that a simple 'lambda: 0' statement would suffice to break code again. Here's what I think could work: In absense of 'exec' or 'import*' in a local namespace, compile it as currently. In absense of a nested scope, compile it as 2.0 did, allowing exec and import*. In case both exist, resolve all names local to the nested function as local names, but generate LOAD_PLEASE (or whatever) opcodes that do a top-down search of all parent scopes at runtime. I'm sure it would mean a lot of confusion if people mix 'from foo import *' and a nested scope that intends to use a global, but ends up using a name imported from foo, but I'm also sure it will create a lot less confusion than just breaking a lot of code, for no apparent reason (because that is and will be how people see it.) I also realize implementing the LOAD_PLEASE opcode isn't that straightforward. It requires a pointer from each nested scope to its parent scope (I'm not sure if those exist yet) and it also requires a way to search a function-local namespace (but that should be possible, since that is what LOAD_NAME does.) It would be terribly inefficient (relatively speaking), but so is the use of from-import* in 2.0, so I don't really consider that an issue. The only thing I'm really not sure of is why this hasn't already been done; is there a strong fundamental argument against this aproach other than the (very valid) issue of 'too many features, too little time' ? I still have to grok the nested-scope changes to the compiler and interpreter, so I might have overlooked something. And finally, if this change is going to happen it has to happen before Python 2.1, preferably before 2.1b1. 
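For readers who haven't run into it yet, a minimal illustration of the combination being described -- this is not code from the thread, just about the smallest case that trips the restriction:

    def outer():
        from string import *      # fills outer's locals only at runtime
        def inner():
            return lowercase      # free name: where should this bind?
        return inner()

Under 2.0 rules this compiles, but inner()'s free name skips outer's locals entirely and falls through to the globals, so calling outer() raises NameError unless 'lowercase' also happens to exist there. With nested scopes the compiler would have to bind 'lowercase' against whatever the star-import dumps into outer's locals at runtime, which it can't know at compile time -- so the 2.1 alphas reject the whole function with a SyntaxError instead.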
If we ship 2.1-final with the current restrictions, or even the toned-down restrictions of "no import*/exec near nested scopes", it will probably not matter for 2.2, one way or another. Willing-to-write-it-if-given-an-extra-alpha-to-do-it-ly y'rs, -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From tim.one at home.com Sun Feb 4 04:33:48 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 3 Feb 2001 22:33:48 -0500 Subject: [Python-Dev] Waaay off topic, but I felt I had to share this... In-Reply-To: <20010204014409.Z962@xs4all.nl> Message-ID: [Barry A. Warsaw] > There goes Tim, browsing the Playboy site just for the > JavaScript. Honest. Well, it's not like they had many floating-point numbers to ogle! I like 'em best when the high-order mantissa bits are all perky and regular, standing straight up, then go monster insane in the low-order bits, so you can't guess *what* bit might come next! Man, that's hot. Top it off witn an exponent field with lots of ones, and you don't even need any oil. Can't say I've got a preference for sign bits, though -- zero and one can both be saucy treats. Zero is more of a tease, so I guess it depends on the mood. But they didn't have anything like that, just boring old "money doubles", like 29.95. What's up with that? I mean the low-order bits are all like 0x33. Do I have to do *all* the work, while it just *sits* there nagging "3, 3, 3, 3, ..., crank me out forever, big poppa pump, but that's all you're ever gonna get!"? So I settled for the JavaStrip. a-real-man-takes-what-he-can-get-ly y'rs - tim From ping at lfw.org Sun Feb 4 05:30:11 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Sat, 3 Feb 2001 20:30:11 -0800 (PST) Subject: [Python-Dev] Re: Sets: elt in dict, lst.include In-Reply-To: Message-ID: On Sat, 3 Feb 2001, Tim Peters wrote: > They're simply distinct issues to me. Whether people want special syntax > for iterating over dicts is (to me) independent of how the iteration > protocol works. Dislike of the former should probably be stabbed into > Ping's heart . Ow! Hey. :) We have shorthand like x[k] for spelling x.__getitem__[k]; why not shorthand like 'for k:v in x: ...' for spelling 'iter = x.__iteritems__(); while 1: k, v = iter() ...'? Hmm. What is the issue really with? - the key:value syntax suggested by Guido (i like it quite a lot) - the existence of special __iter*__ methods (seems natural to me; this is how we customize many operators on instances already) - the fact that 'for k:v' checks __iteritems__, __iter__, items, and __getitem__ (it *has* to check all of these things if it's going to play nice with existing mappings and sequences) - or something else? I'm not actually that clear on what the general feeling is about this PEP. Moshe seems to be happy with the first part but not the rest; Tim, do you have a similar position? Eric and Greg both disagreed with Moshe's counter-proposal; does that mean you like the original, or that you would rather do something different altogether? Moshe Zadka wrote: > dict.iteritems() could return not an iterator, but a magical object > whose iterator is the requested iterator. Ditto itervalues(), iterkeys() Seems like too much work to me. I'd rather just have the object produce a straight iterator. (By 'iterator' i mean an ordinary callable object, nothing too magical.) 
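A sketch of the sort of object being described here -- the class names are invented, and using IndexError as the exhaustion signal is an assumption, since the discussion hadn't pinned that detail down:

    class _SquareItemIter:
        'A plain callable: each call hands back one (key, value) pair.'
        def __init__(self, n):
            self.i = 0
            self.n = n
        def __call__(self):
            if self.i >= self.n:
                raise IndexError("exhausted")   # assumed exhaustion signal
            i = self.i
            self.i = i + 1
            return i, i * i

    class Squares:
        'Behaves like a mapping of i -> i*i for 0 <= i < n.'
        def __init__(self, n):
            self.n = n
        def __iteritems__(self):
            return _SquareItemIter(self.n)

    # Roughly what a 'for k:v in x:' loop would have to expand to.
    def each_item(x, visit):
        it = x.__iteritems__()
        while 1:
            try:
                k, v = it()
            except IndexError:
                break
            visit(k, v)

    pairs = []
    each_item(Squares(4), lambda k, v: pairs.append((k, v)))
    assert pairs == [(0, 0), (1, 1), (2, 4), (3, 9)]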
If there are unusual cases where you want to iterate over an object in several different ways i suppose they can create pseudo-sequences in the manner you described, but i think we want to make the most common case (iterating over the object itself) very easy. That is, just implement __iter__ and have it produce a callable. Marc A. Lemburg wrote: > The idea is simple: put all the lookup, order and item building > code into the iterator, have many of them, one for each flavour > of values, keys, items and honeyloops, and then optimize the > for-loop/iterator interaction to get the best performance out > of them. > > There's really not much use in adding *one* special case to > for-loops when there are a gazillion different needs to iterate > over data structures, files, socket, ports, coffee cups, etc. I couldn't tell which way you were trying to argue here. Are you in favour of the general flavour of PEP 234 or did you have in mind something different? Your first paragraph above seems to describe what 234 does. -- ?!ng "There's no point in being grown up if you can't be childish sometimes." -- Dr. Who From esr at thyrsus.com Sun Feb 4 05:46:50 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Sat, 3 Feb 2001 23:46:50 -0500 Subject: [Python-Dev] Re: Sets: elt in dict, lst.include In-Reply-To: ; from ping@lfw.org on Sat, Feb 03, 2001 at 08:30:11PM -0800 References: Message-ID: <20010203234650.A4133@thyrsus.com> Ka-Ping Yee : > I'm not actually that clear on what the general feeling is about > this PEP. Moshe seems to be happy with the first part but not > the rest; Tim, do you have a similar position? Eric and Greg both > disagreed with Moshe's counter-proposal; does that mean you like > the original, or that you would rather do something different > altogether? I haven't yet heard a proposal that I find compelling. And, I have to admit, I've grown somewhat confused about the alternatives on offer. -- Eric S. Raymond Of all tyrannies, a tyranny exercised for the good of its victims may be the most oppressive. It may be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end, for they do so with the approval of their consciences. -- C. S. Lewis From jafo at tummy.com Sun Feb 4 05:50:15 2001 From: jafo at tummy.com (Sean Reifschneider) Date: Sat, 3 Feb 2001 21:50:15 -0700 Subject: [Python-Dev] Re: Python 2.1 alpha 2 released In-Reply-To: <14971.17735.263154.15769@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Fri, Feb 02, 2001 at 06:39:51PM -0500 References: <14971.17735.263154.15769@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010203215015.B29866@tummy.com> On Fri, Feb 02, 2001 at 06:39:51PM -0500, Jeremy Hylton wrote: >The release is currently available from SourceForge and will also be My SRPM is available at: ftp://ftp.tummy.com/pub/tummy/RPMS/SRPMS/ To turn it into a binary RPM for your rpm-based system, run "rpm --rebuild python-2.1a2-1tummy.src.rpm", get a cup of coffee, and then install the resulting binary RPMs (probably under "/usr/src/redhat/RPMS/i386"). Enjoy, Sean -- What no spouse of a programmer can ever understand is that a programmer is working when he's staring out the window. Sean Reifschneider, Inimitably Superfluous tummy.com - Linux Consulting since 1995. 
Qmail, KRUD, Firewalls, Python From tim.one at home.com Sun Feb 4 07:42:26 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 4 Feb 2001 01:42:26 -0500 Subject: [Python-Dev] RE: [Python-checkins] CVS: python/dist/src/Modules _testmodule.c,NONE,1.1 In-Reply-To: Message-ID: [Jack Jansen] > Is "_test" a good choice of name for this module? It feels a bit > too generic, isn't something like _test_api (or _test_python_c_api) > better? Note that I renamed all this stuff, from _testXXX to _testcapiXXX, but after 2.1a2 was released. better-late-than-early-ly y'rs - tim From tim.one at home.com Sun Feb 4 08:06:21 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 4 Feb 2001 02:06:21 -0500 Subject: [Python-Dev] A word from the author (was "pymalloc", was "fun", was "2.1 slowe r than 2.0") In-Reply-To: <4C99842BC5F6D411A6A000805FBBB199051F5B@ge0057exch01.micro.lucent.com> Message-ID: [Vladimir Marangozov] Hi Vladimir! It's wonderful to see you here again. We had baked a cake for your return, but it's been so long I'm afraid I ate it . Help us out a little more, briefly. The last time you mentioned obmalloc on Python-Dev was: Date: Fri, 8 Sep 2000 18:23:13 +0200 (CEST) Subject: [Python-Dev] 2.0 Optimization & speed > ... > The only reason I've postponed my obmalloc patch is that I > still haven't provided an interface which allows evaluating > it's impact on the mem size consumption. Still a problem in your eyes? In my eyes mem size was something most people would evaluate via their system-specific process monitoring tools, and they wouldn't believe what we said about it anyway <0.9 wink>. Then the last thing mentioned in the patch http://sourceforge.net/patch/?func=detailpatch&patch_id=101104& group_id=5470 was 2000-Aug-12 13:31: > Status set to Postponed. > > Although promising, this hasn't enjoyed much user testing for the > 2.0 time frame (partly because of the lack of an introspective > Python interface which can't be completed in time according to > the release schedule). But at that time it had been tested by more Python-Dev'ers than virtually any other patch in history (yes, I think two may still be the record <0.7 wink>), and nobody else was *asking* for an introspective interface -- they were just timing stuff, and looking at top/wintop/whatever. Now you seem much less hesitant, but still holding back: > Because the risk (long-term) is kind of unknown. I'll testify that the long-term risk of *any* patch is kind of unknown, if that will help. > ... > I'd say, opt-in for 2.1. No risk, enables profiling. Good. > My main reservation is about thread safety from extensions (but > this could be dealt with at a later stage) I expect we'll have to do the dance of evaluating it with and without locks regardless -- we keep pretending that GregS will work on free-threading sometime *this* millennium now . BTW, obmalloc has some competition. Hans Boehm popped up on c.l.py last week, transparently trying to seduce Neil Schemenauer into devoting his life to making the BDW collector make Python's refcounting look like a cheap Dutch trick : http://www.deja.com/getdoc.xp?AN=722453837&fmt=text you-miss-so-much-when-you're-away-ly y'rs - tim From tim.one at home.com Sun Feb 4 09:13:29 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 4 Feb 2001 03:13:29 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: <14972.10746.34425.26722@anthem.wooz.org> Message-ID: [Tim] > So a retroactive -1 on this last-second patch -- and a waaaaay > retroactive -1 on Python's behavior on Windows too. 
[Barry A. Warsaw] > So, let's tease out what the Right solution would be, and then > see how close or if we can get there for 2.1. I've no clue what > behavior Mac and Windows users would /like/ to see -- what would > be most natural for them? Nobody knows -- I don't think "they've" ever been asked. All *developers* want Unix semantics (keep going until finding an exact match -- that's what Steven's patch did). That's not good enough for Windows because of case-destroying network file systems and case-destroying old tools, but that + PYTHONCASEOK (stop on the first match of any kind) is good enough for Windows in my experience. > OTOH, I like the Un*x behavior Of course you do -- you're a developer when you're not a bass player . No developer wants "file" to have 16 distinct potential meanings. > and I think I'd want to see platforms like Cygwin and MacOSX-on- > non-HFS+ get as close to that as possible. Well, MacOSX-on-non-HFS+ *is* Unix, right? So that should take care of itself (ya, right). I don't understand what Cygwin does; here from a Cygwin bash shell session: tim at fluffy ~ $ touch abc tim at fluffy ~ $ touch ABC tim at fluffy ~ $ ls abc tim at fluffy ~ $ wc AbC 0 0 0 AbC tim at fluffy ~ $ ls A* ls: A*: No such file or directory tim at fluffy ~ So best I can tell, they're like Steven: working with a case-insensitive filesystem but trying to make Python insist that it's not, and what basic tools there do about case is seemingly random (wc doesn't care, shell expansion does, touch doesn't, rm doesn't (not shown) -- maybe it's just shell expansion that's trying to pretend this is Unix? oh ya, shell expansion and Python import -- *that's* a natural pair ). > Is it better to have uniform behavior across all platforms (modulo > places like some Windows network fs's where that may not be possible)? I think so, but I've already said that. "import" is a language statement, not a platform file operation at heart. Of *course* people expect open("FiLe") to open files "file" or "FILE" (or even "FiLe" ) on Windows, but inside Python stmts they expect case to matter. > Should Python's import semantics be identical across all platforms? > OTOH, this is where the rubber meets the road so to speak, so some > incompatibilities may be impossible to avoid. I would prefer it, but if Guido thinks Python's import semantics should derive from the platform's filesystem semantics, fine, and then any "Python import should pretend it's Unix" patch should get tossed without further debate. But Guido doesn't think that either, else Windows Python wouldn't complain about "import FILE" finding file.py first (there is no other tool on Windows that cares at all -- everything else would just open file.py). So I view the current rules as inexplicable: they're neither platform-independent nor consistent with the platform's natural behavior (unless that platform has case-sensitive filesystem semantics). Bottom line: for the purpose of import-from-file (and except for case-destroying filesystems, where PYTHONCASEOK is the only hope), we *can* make case-insensitive case-preserving filesystems "act like" they were case-sensitive with modest effort. We can't do the reverse. That would lead to explainable rules and maximal portability. I'll worry about moving all my Python files into a single directory when it comes up (hasn't yet). > And what about Jython? Oh yeah? What about Vyper ? 
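For concreteness, a rough sketch of the two lookup rules being argued over -- purely
illustrative, not the actual logic in import.c: "Unix semantics" accepts only an
exact-case hit, while a PYTHONCASEOK-style lookup stops at the first match of any kind.

    import os, string

    def find_module_file(name, directory, caseok=0):
        # Illustrative only; the real thing lives in import.c.
        want = name + ".py"
        for fn in os.listdir(directory):
            if fn == want:
                return fn          # exact-case match
            if caseok and string.lower(fn) == string.lower(want):
                return fn          # PYTHONCASEOK: first match of any kind
        return None                # "Unix semantics": exact match or nothing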
otoh-if-i-actually-cared-about-case-i-would-never-have-adopted- this-silly-sig-style-ly y'rs - tim From vladimir.marangozov at optimay.com Sun Feb 4 15:02:32 2001 From: vladimir.marangozov at optimay.com (Vladimir Marangozov) Date: Sun, 4 Feb 2001 15:02:32 +0100 Subject: [Python-Dev] A word from the author (was "pymalloc", was "fun ", was "2.1 slowe r than 2.0") Message-ID: <4C99842BC5F6D411A6A000805FBBB199051F5D@ge0057exch01.micro.lucent.com> [Tim] > > Help us out a little more, briefly. The last time you > mentioned obmalloc on > Python-Dev was: > > Date: Fri, 8 Sep 2000 18:23:13 +0200 (CEST) > Subject: [Python-Dev] 2.0 Optimization & speed > > ... > > The only reason I've postponed my obmalloc patch is that I > > still haven't provided an interface which allows evaluating > > it's impact on the mem size consumption. > > Still a problem in your eyes? Not really. I think obmalloc is a win w.r.t. both space & speed. I was aiming at evaluating precisely how much we win with the help of the profiler, then tune the allocator even more, but this is OS dependant anyway and most people don't dig so deep. I think they don't need to either, but it's our job to have a good understanding of what's going on. In short, you can go for it, opt-in, without fear. Not opt-out, though, because of custom object's thread safety. Thread safety is a problem. Current extensions implement custom object constructors & destructors safely, because they use (at the end of the macro chain, today) the system allocator which is thread safe. Switching to a thread unsafe allocator by default is risky because this may inject bugs in existing working extensions. Although the core objects won't be affected by this change because of the interpreter lock protection, we have no provisions so far for custom object's thread safety. > > I expect we'll have to do the dance of evaluating it with and > without locks regardless See above -- it's not about speed, it's about safety. > BTW, obmalloc has some competition. Hans Boehm popped up on > c.l.py last week, transparently trying to seduce Neil Schemenauer > into devoting his life to making the BDW collector make Python's > refcounting look like a cheap Dutch trick : > > http://www.deja.com/getdoc.xp?AN=722453837&fmt=text Yes, I saw that. Hans is speaking from experience, but a non-Python one . I can hardly agree with the idea of dropping RC (which is the best strategy towards expliciteness and fine-grain control of the object's life-cycles) and replacing it with some collector beast (whatever its nature). We'll loose control for unknown benefits. We're already dealing with the major pb of RC (cycle garbage) in an elegant way which is complementary to RC. Saying that we're probably dirtying more cache lines than we should in concurrent scenarios is ... an opinion. My opinion is that this is not really our problem . If Hans were really right, Microsoft would have already plugged his collector in Windows, instead of using RC. And we all know that MS is unbeatable in providing efficient, specialized implementations for Windows, tuned for the processors Windows in running on . On a personal note, after 3 months in Munich, I am still intrigued by the many cheap Dutch tricks I see every day on my way, like the latest Mercs, BMWs, Porsches or Audis, to name a few . 
can't-impress-them-with-my-Ford- 'ly y'rs Vladimir From gvwilson at ca.baltimore.com Sun Feb 4 15:19:47 2001 From: gvwilson at ca.baltimore.com (Greg Wilson) Date: Sun, 4 Feb 2001 09:19:47 -0500 Subject: [Python-Dev] re: Sets BOF / for in dict In-Reply-To: <20010204140714.81BBAE8C2@mail.python.org> Message-ID: <000301c08eb5$876baf20$770a0a0a@nevex.com> I've spoken with Barbara Fuller (IPC9 org.); the two openings for a BOF on sets are breakfast or lunch on Wednesday the 7th. I'd prefer breakfast (less chance of me missing my flight :-); is there anyone who's interested in attending who *can't* make that time, but *could* make lunch? And meanwhile: > Ka-Ping Yee: > - the key:value syntax suggested by Guido (i like it quite a lot) Greg Wilson: Did another quick poll; feeling here is that if for key:value in dict: works, then: for index:value in sequence: would also be expected to work. If the keys to the dictionary are (for example) 2-element tuples, then: for (left, right):value in dict: would also be expected to work, just as: for ((left, right), value) in dict.items(): now works. Question: would the current proposal allow NumPy arrays (just as an example) to support both: for index:value in numPyArray: where 'index' would get tuples like '(0, 3, 2)' for a 3D array, *and* for (i, j, k):value in numPyArray: If so, then yeah, it would tidy up a fair bit of my code... Thanks, Greg From thomas at xs4all.net Sun Feb 4 16:10:28 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Sun, 4 Feb 2001 16:10:28 +0100 Subject: [Python-Dev] re: Sets BOF / for in dict In-Reply-To: <000301c08eb5$876baf20$770a0a0a@nevex.com>; from gvwilson@ca.baltimore.com on Sun, Feb 04, 2001 at 09:19:47AM -0500 References: <20010204140714.81BBAE8C2@mail.python.org> <000301c08eb5$876baf20$770a0a0a@nevex.com> Message-ID: <20010204161028.D962@xs4all.nl> On Sun, Feb 04, 2001 at 09:19:47AM -0500, Greg Wilson wrote: > If the keys to the dictionary are (for example) 2-element tuples, then: > for (left, right):value in dict: > would also be expected to work, There is no real technical reason for it not to work. From a grammer point of view, for left, right:value in dict: would also work fine. (the grammar would be: 'for' exprlist [':' exprlist] 'in' testlist: and since there can't be a colon inside an exprlist, it's not ambiguous.) The main problem is whether you *want* that to work :) -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From fdrake at acm.org Sun Feb 4 17:26:51 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Sun, 4 Feb 2001 11:26:51 -0500 (EST) Subject: [Python-Dev] creating __all__ in extension modules In-Reply-To: <14972.36408.800070.656541@beluga.mojam.com> References: <14970.60750.570192.452062@beluga.mojam.com> <14972.33928.540016.339352@cj42289-a.reston1.va.home.com> <14972.36408.800070.656541@beluga.mojam.com> Message-ID: <14973.33483.956785.985303@cj42289-a.reston1.va.home.com> Skip Montanaro writes: > I thought I answered this question already when Fredrik asked it. In os.py, You did, and I'd have responded then had I been able to spare the time to reply. (I wasn't ignoring the topic.) > to build its __all__ list based upon the myriad different sets of symbols it > might have after it's fancy footwork importing from various os-dependent > modules, I think it's easiest to rely on those modules telling os what it > should export. 
But since C extensions inherantly control their exports very tightly, perhaps the right approach is to create the __all__ value in the code that needs it -- it usually won't be needed for C extensions, and the os module is a fairly special case anyway. Perhaps this helper would be a good approach: def _get_exports_list(module): try: return list(module.__all__) except AttributeError: return [n for n in dir(module) if n[0] != '_'] The os module could then use: _OS_EXPORTS = ['path', ...] if 'posix' in _names: ... __all__ = _get_exports_list(posix) del posix elif ...: ... _OS_EXPORTS = ['linesep', ] __all__.extend(_OS_EXPORTS) -Fred -- Fred L. Drake, Jr. PythonLabs at Digital Creations From guido at digicool.com Sun Feb 4 17:55:08 2001 From: guido at digicool.com (Guido van Rossum) Date: Sun, 04 Feb 2001 11:55:08 -0500 Subject: [Python-Dev] re: Sets BOF / for in dict In-Reply-To: Your message of "Sun, 04 Feb 2001 09:19:47 EST." <000301c08eb5$876baf20$770a0a0a@nevex.com> References: <000301c08eb5$876baf20$770a0a0a@nevex.com> Message-ID: <200102041655.LAA20836@cj20424-a.reston1.va.home.com> > I've spoken with Barbara Fuller (IPC9 org.); the two openings for a > BOF on sets are breakfast or lunch on Wednesday the 7th. I'd prefer > breakfast (less chance of me missing my flight :-); is there anyone > who's interested in attending who *can't* make that time, but *could* > make lunch? Fine with me. > And meanwhile: > > > Ka-Ping Yee: > > - the key:value syntax suggested by Guido (i like it quite a lot) > > Greg Wilson: > Did another quick poll; feeling here is that if > > for key:value in dict: > > works, then: > > for index:value in sequence: > > would also be expected to work. If the keys to the dictionary are (for > example) 2-element tuples, then: > > for (left, right):value in dict: > > would also be expected to work, just as: > > for ((left, right), value) in dict.items(): > > now works. Yes, that's all non-controversial. > Question: would the current proposal allow NumPy arrays (just as an > example) to support both: > > for index:value in numPyArray: > > where 'index' would get tuples like '(0, 3, 2)' for a 3D array, *and* > > for (i, j, k):value in numPyArray: > > If so, then yeah, it would tidy up a fair bit of my code... That's up to the numPy array! Assuming that we introduce this together with iterators, the default NumPy iterator could be made to iterate over all three index sets simultaneously; there could be other iterators to iterate over a selection of index sets (e.g. to iterate over the rows). However the iterator can't be told what form the index has. --Guido van Rossum (home page: http://www.python.org/~guido/) From martin at loewis.home.cs.tu-berlin.de Sun Feb 4 18:43:34 2001 From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis) Date: Sun, 4 Feb 2001 18:43:34 +0100 Subject: [Python-Dev] Re: A word from the author Message-ID: <200102041743.f14HhYE01986@mira.informatik.hu-berlin.de> > Although the core objects won't be affected by this change because > of the interpreter lock protection, we have no provisions so far for > custom object's thread safety. If I understand your concern correctly, you are worried that somebody uses your allocator without holding the interpreter lock. I think it is *extremely* unlikely that a module author will use any Py* function or macro while not holding the lock. I've analyzed a few freely-available extension modules in this respect, and found no occurence of such code. 
The right way is to document that restriction, both in NEWS and in the C API documentation, and accept the unlikely chance of breaking something. Regards, Martin From esr at thyrsus.com Sun Feb 4 19:20:03 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Sun, 4 Feb 2001 13:20:03 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? Message-ID: <20010204132003.A16454@thyrsus.com> Python's .pyc files don't have a magic prefix that the file(1) utility can recognize. Would anyone object if I fixed this? A trivial pair of hacks to the compiler and interpreter would do it. Backward compatibility would be easily arranged. Embedding the Python version number in the prefix might enable some useful behavior down the road. -- Eric S. Raymond The end move in politics is always to pick up a gun. -- R. Buckminster Fuller From fredrik at pythonware.com Sun Feb 4 20:00:48 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Sun, 4 Feb 2001 20:00:48 +0100 Subject: [Python-Dev] Identifying magic prefix on Python files? References: <20010204132003.A16454@thyrsus.com> Message-ID: <009701c08edc$ca78fd50$e46940d5@hagrid> eric wrote: > Python's .pyc files don't have a magic prefix that the file(1) utility > can recognize. Would anyone object if I fixed this? A trivial pair of > hacks to the compiler and interpreter would do it. Backward compatibility > would be easily arranged. > > Embedding the Python version number in the prefix might enable some > useful behavior down the road. Python 1.5.2 (#0, May 9 2000, 14:04:03) >>> import imp >>> imp.get_magic() '\231N\015\012' Python 2.0 (#8, Jan 29 2001, 22:28:01) >>> import imp >>> imp.get_magic() '\207\306\015\012' >>> open("some_module.pyc", "rb").read(4) '\207\306\015\012' Python 2.1a1 (#9, Jan 19 2001, 08:41:32) >>> import imp >>> imp.get_magic() '\xdc\xea\r\n' if you want to change the magic, there are a couple things to consider: 1) the header must consist of imp.get_magic() plus a 4-byte timestamp, followed by a marshalled code object 2) the magic should be four bytes. 3) the magic must be different for different bytecode versions 4) the magic shouldn't survive text/binary conversions on platforms which treat text files and binary files diff- erently. Cheers /F From ping at lfw.org Sun Feb 4 20:34:33 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Sun, 4 Feb 2001 11:34:33 -0800 (PST) Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: <009701c08edc$ca78fd50$e46940d5@hagrid> Message-ID: eric wrote: > Python's .pyc files don't have a magic prefix that the file(1) utility > can recognize. Would anyone object if I fixed this? On Sun, 4 Feb 2001, Fredrik Lundh wrote: > Python 1.5.2 (#0, May 9 2000, 14:04:03) > >>> import imp > >>> imp.get_magic() > '\231N\015\012' I don't understand, Eric. Why won't the existing magic number work? I tried the following and it works fine: 0 string \x99N\x0d Python 1.5.2 compiled bytecode data 0 string \x87\xc6\x0d Python 2.0 compiled bytecode data However, when i add \x0a to the end of the bytecode patterns, this stops working: 0 string \x99N\x0d\x0a Python 1.5.2 compiled bytecode data 0 string \x87\xc6\x0d\x0a Python 2.0 compiled bytecode data Do you know what's going on? 
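As a cross-check on the /etc/magic patterns, here is a rough sketch of doing the same
identification from Python itself, assuming the current layout (4-byte magic followed
by a 4-byte little-endian mtime); pyc_info() is just an illustrative name:

    import imp, struct

    def pyc_info(path):
        # Returns (magic_matches_this_interpreter, source_mtime).
        f = open(path, "rb")
        magic = f.read(4)
        mtime = struct.unpack("<l", f.read(4))[0]   # little-endian long
        f.close()
        return magic == imp.get_magic(), mtime

A file whose magic doesn't compare equal was simply built for a different bytecode
version than the running interpreter.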
These all work fine too, by the way: 0 string #!/usr/bin/env\ python Python program text 0 string #!\ /usr/bin/env\ python Python program text 0 string #!/bin/env\ python Python program text 0 string #!\ /bin/env\ python Python program text 0 string #!/usr/bin/python Python program text 0 string #!\ /usr/bin/python Python program text 0 string #!/usr/local/bin/python Python program text 0 string #!\ /usr/local/bin/python Python program text 0 string """ Python module text Unfortunately, many Python modules are mis-recognized as Java source text because they begin with the word "import". Even more unfortunately, this too-general test for "import" seems to be hard-coded into the file(1) command and cannot be changed by editing /usr/share/magic. -- ?!ng "Old code doesn't die -- it just smells that way." -- Bill Frantz From tim.one at home.com Sun Feb 4 21:19:50 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 4 Feb 2001 15:19:50 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: <20010204132003.A16454@thyrsus.com> Message-ID: [Eric S. Raymond] > Python's .pyc files don't have a magic prefix that the file(1) > utility can recognize. Well, they *do* (#define MAGIC in import.c), but it changes from time to time. Here's a NEWS item from 2.1a1: - The interpreter accepts now bytecode files on the command line even if they do not have a .pyc or .pyo extension. On Linux, after executing echo ':pyc:M::\x87\xc6\x0d\x0a::/usr/local/bin/python:' > /proc/sys/fs/binfmt_misc/register any byte code file can be used as an executable (i.e. as an argument to execve(2)). However, the magic number has changed twice since then (in import.c rev 2.157 and again in rev 2.160), so the NEWS item is two changes obsolete. The current magic number can be obtained (as a 4-bytes string) via import imp MAGIC = imp.get_magic() > Would anyone object if I fixed this? Undoubtedly, but not me . Mucking with the .pyc prefix is always contentious. > A trivial pair of hacks to the compiler and interpreter would > do it. Also need to adjust .py files using imp.get_magic(). Backward compatibility would be easily arranged. Embedding > the Python version number in the prefix might enable some useful > behavior down the road. Note that the current scheme uses a 4-byte value, where the last two bytes are fixed, and the first two are (year-1995)*10000 + (month * 100) + day where month and day are 1-based. What it's recording (unsure this is explained anywhere) is the day on which an incompatible change got made to the PVM. This is important to check so that whatever version of Python you're running doesn't try to execute bytecodes generated for an incompatible PVM. But only Python has a chance of understanding this. Note too that the method used for encoding the date runs out of bits at the end of 2001, so the current scheme is on its last legs regardless. couldn't-be-simpler -ly y'rs - tim From guido at digicool.com Sun Feb 4 22:02:22 2001 From: guido at digicool.com (Guido van Rossum) Date: Sun, 04 Feb 2001 16:02:22 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: Your message of "Sun, 04 Feb 2001 13:20:03 EST." <20010204132003.A16454@thyrsus.com> References: <20010204132003.A16454@thyrsus.com> Message-ID: <200102042102.QAA23574@cj20424-a.reston1.va.home.com> > Python's .pyc files don't have a magic prefix that the file(1) utility > can recognize. Would anyone object if I fixed this? A trivial pair of > hacks to the compiler and interpreter would do it. 
Backward compatibility > would be easily arranged. I don't understand. The .pyc file has a magic number. Why is this incompatible with file(1)? > Embedding the Python version number in the prefix might enable some > useful behavior down the road. If we're going to redesign the .pyc file header, I'd propose the following: (1) magic number -- for file(1), never to be changed (2) some kind of version -- Python version, or API version, or bytecode version (3) mtime of .py file (4) options, e.g. is this a .pyc or a .pyo (5) size of marshalled code following (6) marshalled code --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Sun Feb 4 22:21:16 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 4 Feb 2001 16:21:16 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: <200102042102.QAA23574@cj20424-a.reston1.va.home.com> Message-ID: [Guido] > If we're going to redesign the .pyc file header, I'd propose the > following: > > (1) magic number -- for file(1), never to be changed > > (2) some kind of version -- Python version, or API version, or > bytecode version > > (3) mtime of .py file > > (4) options, e.g. is this a .pyc or a .pyo > > (5) size of marshalled code following > > (6) marshalled code Note that the magic number today is different when -U (Py_UnicodeFlag) is specified. That should be migrated to #4. From esr at thyrsus.com Sun Feb 4 23:16:25 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Sun, 4 Feb 2001 17:16:25 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: ; from ping@lfw.org on Sun, Feb 04, 2001 at 11:34:33AM -0800 References: <009701c08edc$ca78fd50$e46940d5@hagrid> Message-ID: <20010204171625.A17315@thyrsus.com> Ka-Ping Yee : > I don't understand, Eric. Why won't the existing magic number work? My error. I looked at a couple of .pyc files, but thought the two-byte magic was actual code instead. Turns out the real problem is that Linux file(1) doesn't recognize this prefix. > I tried the following and it works fine: > > 0 string \x99N\x0d Python 1.5.2 compiled bytecode data > 0 string \x87\xc6\x0d Python 2.0 compiled bytecode data This doesn't work when I append it to /etc/magic. I'm investigating. -- Eric S. Raymond Never trust a man who praises compassion while pointing a gun at you. From esr at thyrsus.com Sun Feb 4 23:24:05 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Sun, 4 Feb 2001 17:24:05 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: ; from tim.one@home.com on Sun, Feb 04, 2001 at 03:19:50PM -0500 References: <20010204132003.A16454@thyrsus.com> Message-ID: <20010204172405.C17315@thyrsus.com> Tim Peters : > [Eric S. Raymond] > > Python's .pyc files don't have a magic prefix that the file(1) > > utility can recognize. > > Well, they *do* (#define MAGIC in import.c), but it changes from time to > time. Here's a NEWS item from 2.1a1: > > - The interpreter accepts now bytecode files on the command > line even if they do not have a .pyc or .pyo extension. On > Linux, after executing > > echo ':pyc:M::\x87\xc6\x0d\x0a::/usr/local/bin/python:' > > /proc/sys/fs/binfmt_misc/register > > any byte code file can be used as an executable (i.e. as an > argument to execve(2)). > > However, the magic number has changed twice since then (in import.c rev > 2.157 and again in rev 2.160), so the NEWS item is two changes obsolete. 
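To make the "runs out of bits" remark concrete: the two date bytes can hold at most
65535, and the encoding described above exceeds that as soon as the year field reaches
2002 (a quick illustrative sketch; encode_date is not a real function anywhere):

    def encode_date(year, month, day):
        # The two-byte date scheme described earlier in the thread.
        return (year - 1995) * 10000 + month * 100 + day

    print encode_date(2001, 12, 31)   # 61231 -- still fits in 16 bits
    print encode_date(2002,  1,  1)   # 70101 -- too big, so the scheme dies with 2001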
> The current magic number can be obtained (as a 4-bytes string) via > > import imp > MAGIC = imp.get_magic() Interesting. I presume this has to be repeated at every boot? > Note too that the method used for encoding the date runs out of bits at the > end of 2001, so the current scheme is on its last legs regardless. So this has to be fixed anyway. I'm sure we can come up with a better scheme, perhaps one modeled after the PNG header. -- Eric S. Raymond Are we at last brought to such a humiliating and debasing degradation, that we cannot be trusted with arms for our own defence? Where is the difference between having our arms in our own possession and under our own direction, and having them under the management of Congress? If our defence be the *real* object of having those arms, in whose hands can they be trusted with more propriety, or equal safety to us, as in our own hands? -- Patrick Henry, speech of June 9 1788 From fredrik at effbot.org Sun Feb 4 23:34:07 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Sun, 4 Feb 2001 23:34:07 +0100 Subject: [Python-Dev] Identifying magic prefix on Python files? References: Message-ID: <011b01c08efa$9705ecd0$e46940d5@hagrid> tim wrote: > > Would anyone object if I fixed this? > > Undoubtedly, but not me . Mucking with the .pyc prefix is always > contentious. Breaking people's code just for fun seems to be a new trend here. That's bad. From esr at thyrsus.com Sun Feb 4 23:35:59 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Sun, 4 Feb 2001 17:35:59 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: <200102042102.QAA23574@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Sun, Feb 04, 2001 at 04:02:22PM -0500 References: <20010204132003.A16454@thyrsus.com> <200102042102.QAA23574@cj20424-a.reston1.va.home.com> Message-ID: <20010204173559.D17315@thyrsus.com> Guido van Rossum : > I don't understand. The .pyc file has a magic number. Why is this > incompatible with file(1)? It isn't. I failed to spot the fact that this is file(1)'s problem, not Python's; my apologies. Nevertheless, according to Tim Peters this is a good time for the issue to come up, because the present method is going to break after year-end. We might as well redesign it now. > If we're going to redesign the .pyc file header, I'd propose the > following: > > (1) magic number -- for file(1), never to be changed > > (2) some kind of version -- Python version, or API version, or > bytecode version > > (3) mtime of .py file > > (4) options, e.g. is this a .pyc or a .pyo > > (5) size of marshalled code following > > (6) marshalled code I agree with these desiderata. Tim has already pointed out that #4 needs to include a Unicode bit. What I'd like to throw in the pot is the cleverest file signature design I've ever seen -- PNG's. Here's a quote from the PNG spec: ---------------------------------------------------------------------------- The first eight bytes of a PNG file always contain the following values: (decimal) 137 80 78 71 13 10 26 10 (hexadecimal) 89 50 4e 47 0d 0a 1a 0a (ASCII C notation) \211 P N G \r \n \032 \n This signature both identifies the file as a PNG file and provides for immediate detection of common file-transfer problems. The first two bytes distinguish PNG files on systems that expect the first two bytes to identify the file type uniquely. 
The first byte is chosen as a non-ASCII value to reduce the probability that a text file may be misrecognized as a PNG file; also, it catches bad file transfers that clear bit 7. Bytes two through four name the format. The CR-LF sequence catches bad file transfers that alter newline sequences. The control-Z character stops file display under MS-DOS. The final line feed checks for the inverse of the CR-LF translation problem. A decoder may further verify that the next eight bytes contain an IHDR chunk header with the correct chunk length; this will catch bad transfers that drop or alter null (zero) bytes. ---------------------------------------------------------------------------- I think we ought to model Python's fixed magic-number part on this prefix. -- Eric S. Raymond No matter how one approaches the figures, one is forced to the rather startling conclusion that the use of firearms in crime was very much less when there were no controls of any sort and when anyone, convicted criminal or lunatic, could buy any type of firearm without restriction. Half a century of strict controls on pistols has ended, perversely, with a far greater use of this weapon in crime than ever before. -- Colin Greenwood, in the study "Firearms Control", 1972 From tim.one at home.com Mon Feb 5 00:44:58 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 4 Feb 2001 18:44:58 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: <011b01c08efa$9705ecd0$e46940d5@hagrid> Message-ID: [/F] > Breaking people's code just for fun seems to be a new > trend here. That's bad. The details of the current scheme stop working at the end of the year regardless. Would rather change it rationally than in a last-second panic when the first change is needed after December 31st. If you look at the CVS history of import.c, you'll find that the format-- and size --of .pyc magic has already changed several times over the years. There's always "a reason", and there's another one now. The current scheme was designed when Guido thought 2002 was two years after Python's most likely death . From greg at cosc.canterbury.ac.nz Mon Feb 5 00:49:33 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Mon, 05 Feb 2001 12:49:33 +1300 (NZDT) Subject: [Python-Dev] creating __all__ in extension modules In-Reply-To: <14972.36269.845348.280744@beluga.mojam.com> Message-ID: <200102042349.MAA03822@s454.cosc.canterbury.ac.nz> Skip Montanaro : > /* initialize module's __all__ list */ > _PyModule_CreateAllList(d); How about constructing __all__ automatically the first time it's referenced if there isn't one already? Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg at cosc.canterbury.ac.nz +--------------------------------------+ From tim.one at home.com Mon Feb 5 01:07:39 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 4 Feb 2001 19:07:39 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: <20010204173559.D17315@thyrsus.com> Message-ID: [Eric S. Raymond] > ... > What I'd like to throw in the pot is the cleverest file signature > design I've ever seen -- PNG's. 
Here's a quote from the PNG spec: > > ------------------------------------------------------------------ > The first eight bytes of a PNG file always contain the following > values: > > (decimal) 137 80 78 71 13 10 26 10 > (hexadecimal) 89 50 4e 47 0d 0a 1a 0a > (ASCII C notation) \211 P N G \r \n \032 \n Cool! I vote we take it exactly. I don't even know what PNG is, so it's doubtful my Windows box will be confused by decorating Python files the same way . > The first two bytes distinguish PNG files on systems that expect > the first two bytes to identify the file type uniquely. > The first byte is chosen as a non-ASCII value to reduce the > probability that a text file may be misrecognized as a PNG file; also, > it catches bad file transfers that clear bit 7. OK, I suggest (decimal) 143 for Python's first byte. That's a "control code" in Latin-1, and (unlike PNG's 137) not even Windows assigns it to a character in their Latin-1 superset (yet). (decimal) 143 80 89 84 13 10 26 10 (hexadecimal) 8f 50 59 54 0d 0a 1a 0a (ASCII C notation) \217 P Y T \r \n \032 \n From fredrik at effbot.org Mon Feb 5 01:12:09 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Mon, 5 Feb 2001 01:12:09 +0100 Subject: [Python-Dev] Identifying magic prefix on Python files? References: Message-ID: <01ab01c08f08$49f83ed0$e46940d5@hagrid> tim wrote: > [/F] > > Breaking people's code just for fun seems to be a new > > trend here. That's bad. > > The details of the current scheme stop working at the end of the year > regardless. might so be, but it's perfectly possible to change this in a fully backwards compatible way: -- stick to a 4-byte bytecode version magic, but change the algoritm to make it work after 2001. if necessary, use 3 or 4 bytes to hold the "serial number". if the bytecode version is the same as imp.get_magic() and the file isn't damaged, it should be safe to pass it to marshal.load. if marshal returns a code object, it should be safe (relatively speaking) to execute it. -- define the 4-byte timestamp to be an unsigned int, so we can keep going for another 100 years or so. -- introduce a new type code (e.g. 'P') for marshal. this is followed by an extended magic field, followed by the code using today's format (same as for type code 'c'). let the extended magic field contain: -- a python identifier (e.g. "YTHON") -- a newline/eof mangling detector (e.g. "\r\n") -- sys.hexversion (4 bytes) -- a flag field (4 bytes) -- maybe the size of the marshalled block (4 bytes) -- maybe etc Cheers /F From guido at digicool.com Mon Feb 5 01:12:44 2001 From: guido at digicool.com (Guido van Rossum) Date: Sun, 04 Feb 2001 19:12:44 -0500 Subject: [Python-Dev] import Tkinter fails Message-ID: <200102050012.TAA27410@cj20424-a.reston1.va.home.com> On Unix, either when running from the build directory, or when running the installed binary, "import Tkinter" fails. It seems that Lib/lib-tk is (once again) dropped from the default path. I'm not sure where to point a finger, but I'm kind of hoping that this would be easy for Andrew or Neil to fix... (Also, if this has alrady been addressed here, my apologies. I still have about 500 emails to dig through that arrived during my brief stay in New York...) --Guido van Rossum (home page: http://www.python.org/~guido/) From esr at thyrsus.com Mon Feb 5 01:34:41 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Sun, 4 Feb 2001 19:34:41 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? 
In-Reply-To: ; from tim.one@home.com on Sun, Feb 04, 2001 at 07:07:39PM -0500 References: <20010204173559.D17315@thyrsus.com> Message-ID: <20010204193441.A19283@thyrsus.com> Tim Peters : > > The first eight bytes of a PNG file always contain the following > > values: > > > > (decimal) 137 80 78 71 13 10 26 10 > > (hexadecimal) 89 50 4e 47 0d 0a 1a 0a > > (ASCII C notation) \211 P N G \r \n \032 \n > > Cool! I vote we take it exactly. I don't even know what PNG is, so it's > doubtful my Windows box will be confused by decorating Python files the same > way . > > > The first two bytes distinguish PNG files on systems that expect > > the first two bytes to identify the file type uniquely. > > The first byte is chosen as a non-ASCII value to reduce the > > probability that a text file may be misrecognized as a PNG file; also, > > it catches bad file transfers that clear bit 7. > > OK, I suggest (decimal) 143 for Python's first byte. That's a "control > code" in Latin-1, and (unlike PNG's 137) not even Windows assigns it to a > character in their Latin-1 superset (yet). > > (decimal) 143 80 89 84 13 10 26 10 > (hexadecimal) 8f 50 59 54 0d 0a 1a 0a > (ASCII C notation) \217 P Y T \r \n \032 \n \217 is good. It doesn't occur in /usr/share/magic at all, which is a good sign. Why just PYT, though? Why not spell out "Python"? That would let us detect case-smashing, too. -- Eric S. Raymond False is the idea of utility that sacrifices a thousand real advantages for one imaginary or trifling inconvenience; that would take fire from men because it burns, and water because one may drown in it; that has no remedy for evils except destruction. The laws that forbid the carrying of arms are laws of such a nature. They disarm only those who are neither inclined nor determined to commit crimes. -- Cesare Beccaria, as quoted by Thomas Jefferson's Commonplace book From tim.one at home.com Mon Feb 5 02:52:31 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 4 Feb 2001 20:52:31 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: <20010204193441.A19283@thyrsus.com> Message-ID: [Eric S. Raymond] > \217 is good. It doesn't occur in /usr/share/magic at all, which > is a good sign. See? You should have more Windows geeks helping out with Linux: none of our ideas have anything in common with yours . > Why just PYT, though? Why not spell out "Python"? Just because 8 seemed geekier than 11. Natural alignment for a struct, etc. > That would let us detect case-smashing, too. Hmm. "Pyt" would too! If you're going to PEP (or virtual PEP) this, I won't raise a stink either way. From ping at lfw.org Mon Feb 5 03:21:40 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Sun, 4 Feb 2001 18:21:40 -0800 (PST) Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: Message-ID: On Sun, 4 Feb 2001, Tim Peters wrote: > OK, I suggest (decimal) 143 for Python's first byte. That's a "control > code" in Latin-1, and (unlike PNG's 137) not even Windows assigns it to a > character in their Latin-1 superset (yet). > > (decimal) 143 80 89 84 13 10 26 10 > (hexadecimal) 8f 50 59 54 0d 0a 1a 0a > (ASCII C notation) \217 P Y T \r \n \032 \n Pyt? What's a "pyt"? 
How about something we can all recognize: (decimal) 143 83 112 97 109 10 13 10 (hexadecimal) 8f 53 70 61 6d 0a 0d 0a (ASCII C notation) \217 S p a m \n \r \n ...to be followed by: date of last incompatible VM change (4 bytes: year, year, month, day) Python version, as in sys.hexversion (4 bytes) mtime of source .py file (4 bytes) reserved for option flags and future expansion (8 bytes) size of marshalled code data (4 bytes) marshalled code That's a nice, geeky 32 bytes of header info. (The "Spam" part is not so serious; the rest is serious. But i do think "Spam" is more fun that "Pyt"! :) And the Ctrl-Z char is pointless; no other binary format does this or needs it.) Hmm. Questions: - Should we include the path to the original .py file? (so Python can automatically recompile an out-of-date file) - How about the name of the module? (so that renaming the file doesn't kill it; possible answer to the case-sensitivity issue?) - If the purpose of the code-size field is to protect against incomplete file transfers, would a hash be worth considering here? -- ?!ng "Old code doesn't die -- it just smells that way." -- Bill Frantz From ping at lfw.org Mon Feb 5 03:34:29 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Sun, 4 Feb 2001 18:34:29 -0800 (PST) Subject: [Python-Dev] Suggested .pyc header format In-Reply-To: Message-ID: Here's a quick revision, to fix some alignment boundaries. I think this ordering might make more sense. bytes contents 0-7 magic string '\x8fSpam\n\r\n' 8-11 Python version (sys.hexversion) 12-15 date of last incompatible VM change (YYMD, year msb-first) 16-23 reserved (flags, etc.) 24-27 mtime of source .py file (long int, msb-first) 28-31 size of marshalled code (long int, msb-first) 32- marshalled code In a dump, this would look like: ---------magic--------- --version-- --VM-date-- 8f 53 70 61 6d 0a 0d 0a 02 01 00 a2 07 d1 02 04 .Spam......".Q.. 00 00 00 00 00 00 00 00 3a 7d ae ba 00 00 73 a8 ........:}.:..s( ---------flags--------- ---mtime--- ---size---- -- ?!ng "Old code doesn't die -- it just smells that way." -- Bill Frantz From tim.one at home.com Mon Feb 5 04:41:42 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 4 Feb 2001 22:41:42 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: Message-ID: [Ka-Ping Yee, with more magical ideas] This is contentious every time it comes up because of "backward compatibility". The contentious part is that no two people come into it with the same idea of what "backward compatible" means, exactly, and it usually drags on for days until people realize that. In the meantime, everyone thinks everyone else is an idiot . So far as the docs go, imp.get_magic() returns "a string", and that's all it says. By that defn, it would be darned hard to think of any scheme that isn't backward compatible. OTOH, PyImport_GetMagicNumber() returns "a long", so there's good reason to preserve that version-checking must not rely on more than 4 bytes of info. Then you have /F's post, which purports to give a "fully backward compatible" scheme, despite changing what probably appears to be almost everyting. It takes a long time to reverse-engineer what the crucial invariants are for each person, based on what they propose and what they complain about in other proposals. I don't have time for that now, so will leave it to someone else. 
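Purely as a sketch of the 32-byte layout proposed above (nothing like this is
implemented anywhere), such a header could be read with the struct module; the format
string below just mirrors the proposed fields, msb-first:

    import struct

    HEADER_FORMAT = ">8sl4s8sll"   # magic, version, VM date, reserved, mtime, size

    def read_proposed_header(f):
        magic, version, vmdate, reserved, mtime, size = \
            struct.unpack(HEADER_FORMAT, f.read(32))
        return magic, version, vmdate, mtime, size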
It would help if people could spell out directly which invariants they do and don't care about (e.g., you can *infer* that /F cares about exactly 4 bytes magic number (but doesn't care about content) then exactly 4 bytes file timestamp then a blob that marshal believes is a single object then that's it but doesn't care that, e.g., checking the 4-byte magic number alone is sufficent to catch binary files opened in text mode (but somebody else will care about that!)). Since virtually none of this has been formalized via an API, virtually all code outside the distribution that deals with this stuff is cheating. Small wonder it's contentious ... From esr at thyrsus.com Mon Feb 5 04:55:20 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Sun, 4 Feb 2001 22:55:20 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: ; from ping@lfw.org on Sun, Feb 04, 2001 at 06:21:40PM -0800 References: Message-ID: <20010204225520.A20513@thyrsus.com> Ka-Ping Yee : > And the Ctrl-Z char > is pointless; no other binary format does this or needs it.) I've actually seen circumstances under which this is useful. Besides, you want a character separating the \n from the \r\n, otherwise ghod knows what interactions you'll get from some of the cockamamie line-terminator translation schemes out there. Might as well be Ctl-Z as anything else. I'll leave the other issues to people with more experience and investment in them. -- Eric S. Raymond When only cops have guns, it's called a "police state". -- Claire Wolfe, "101 Things To Do Until The Revolution" From guido at digicool.com Mon Feb 5 05:10:20 2001 From: guido at digicool.com (Guido van Rossum) Date: Sun, 04 Feb 2001 23:10:20 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: Your message of "Sun, 04 Feb 2001 22:41:42 EST." References: Message-ID: <200102050410.XAA28600@cj20424-a.reston1.va.home.com> > exactly 4 bytes magic number (but doesn't care about content) > then > exactly 4 bytes file timestamp > then > a blob that marshal believes is a single object > then > that's it That's also what I would call b/w compatible here. It's the obvious baseline. (With the addition that the timestamp uses little-endian byte order -- like marshal.) > but doesn't care that, e.g., checking the 4-byte magic number alone is > sufficent to catch binary files opened in text mode (but somebody else will > care about that!)). Hm, that's not the reason the magic number ends in \r\n. The reason was that on the Mac, long ago, the MPW compiler actually swapped the meaning of \r and \n! So that '\r' in C meant '\012' and '\n' meant '\015'. This was intended to make C programs that were parsing text files looking for \n work on Mac text files which use \r. (Why does the Mac use \r? AFAICT, for the same reason that DOS chose \ instead of / -- to be different from Unix, possibly to avoid patent infringement. Silly.) Later compilers on the Mac weren't so stupid, and now the fact that this lets you discover text translation errors is just a pleasant side-effect. Personally, I don't care about this property any more. > Since virtually none of this has been formalized via an API, virtually all > code outside the distribution that deals with this stuff is cheating. Small > wonder it's contentious ... 
The thing is, it's very useful to have tools ones that manipulate .pyc files, and while it's not officially documented or standardized, the presence of the C API to get the magic number at least suggests that the file format can change the magic number but not otherwise. The py_compile.py standard library module acts as de-facto documentation. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Mon Feb 5 05:28:30 2001 From: guido at digicool.com (Guido van Rossum) Date: Sun, 04 Feb 2001 23:28:30 -0500 Subject: [Python-Dev] Waiting method for file objects In-Reply-To: Your message of "Thu, 25 Jan 2001 11:19:36 EST." <20010125111936.A23512@thyrsus.com> References: <20010125111936.A23512@thyrsus.com> Message-ID: <200102050428.XAA28690@cj20424-a.reston1.va.home.com> > I have been researching the question of how to ask a file descriptor how much > data it has waiting for the next sequential read, with a view to discovering > what cross-platform behavior we could count on for a hypothetical `waiting' > method in Python's built-in file class. I have a strong -1 on this. It violates the abstraction of Python file objects as a thin layer on top of C's stdio. I don't want to add any new features that can only be implemented by digging under the hood of stdio. There is no standard way to figure out how much data is buffered inside the FILE struct, so doing any kind of system call on the file descriptor is insufficient unless the file is opened in unbuffered mode -- not an attractive option in most applications. Apart from the stdio buffering issue, apps that really want to do this can already look under the hood, thereby clearly indicating that they make more assumptions about the platform than portable Python. For static files, an app can call os.fstat() itself. But I think it's a weakness of the app if it needs to resort to this -- Eric's example that motivated this desire in him didn't convince me at all. For sockets, and on Unix for pipes and FIFOs, an app can use the select module to find out whether data can be read right away. It doesn't tell how much data, but that's unnecessary -- at least for sockets (where this is a very common request), the recv() call will return short data rather than block for more if at least one byte can be read. (For pipes and FIFOs, you can use fstat() or FIONREAD if you really want -- but why bother?) --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Mon Feb 5 05:41:20 2001 From: guido at digicool.com (Guido van Rossum) Date: Sun, 04 Feb 2001 23:41:20 -0500 Subject: [Python-Dev] Re: 2.1a2 release issues; mail.python.org still down In-Reply-To: Your message of "Thu, 01 Feb 2001 19:15:24 +0100." <3A79A7BC.58997544@lemburg.com> References: <14969.31398.706875.540775@w221.z064000254.bwi-md.dsl.cnc.net> <3A798F14.D389A4A9@lemburg.com> <14969.38945.771075.55993@cj42289-a.reston1.va.home.com> <3A79A058.772239C2@lemburg.com> <14969.41344.176815.821673@cj42289-a.reston1.va.home.com> <3A79A7BC.58997544@lemburg.com> Message-ID: <200102050441.XAA28783@cj20424-a.reston1.va.home.com> > The warnings are at least as annoying as recompiling the > extensions, even more since each and every imported extension > will moan about the version difference ;-) Hey, here's a suggestion for a solution then: change the warning-issuing code to use the new PyErr_Warn() function! Patch gratefully accepted on SourceForge. 
Now, note that using "python -Werror" the user can cause these warnings to be turned into errors, and since few modules test for error returns from Py_InitModule(), this will likely cause core dumps. However, note that there are other reasons why Py_InitModule() can return NULL, so it really behooves us to test for an error return anyway! --Guido van Rossum (home page: http://www.python.org/~guido/) From skip at mojam.com Mon Feb 5 05:43:01 2001 From: skip at mojam.com (Skip Montanaro) Date: Sun, 4 Feb 2001 22:43:01 -0600 (CST) Subject: [Python-Dev] import Tkinter fails In-Reply-To: <200102050012.TAA27410@cj20424-a.reston1.va.home.com> References: <200102050012.TAA27410@cj20424-a.reston1.va.home.com> Message-ID: <14974.12117.848610.822769@beluga.mojam.com> Guido> I still have about 500 emails to dig through that arrived during Guido> my brief stay in New York... Haven't you learned yet? Skip From guido at digicool.com Mon Feb 5 05:47:26 2001 From: guido at digicool.com (Guido van Rossum) Date: Sun, 04 Feb 2001 23:47:26 -0500 Subject: [Python-Dev] Type/class differences (Re: Sets: elt in dict, lst.include) In-Reply-To: Your message of "Fri, 02 Feb 2001 11:45:02 +1300." <200102012245.LAA03402@s454.cosc.canterbury.ac.nz> References: <200102012245.LAA03402@s454.cosc.canterbury.ac.nz> Message-ID: <200102050447.XAA28915@cj20424-a.reston1.va.home.com> > > The old type/class split: list is a type, and types spell their "method > > tables" in ways that have little in common with how classes do it. > > Maybe as a first step towards type/class unification one > day, we could add __xxx__ attributes to all the builtin > types, and start to think of the method table as the > definitive source of all methods, with the tp_xxx slots > being a sort of cache for the most commonly used ones. Yes, I've often thought that we should be able to heal the split for 95% by using a few well-aimed tricks like this. Later... --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Mon Feb 5 05:58:28 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 4 Feb 2001 23:58:28 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: <200102050410.XAA28600@cj20424-a.reston1.va.home.com> Message-ID: [Guido] > Hm, that's not the reason the magic number ends in \r\n. > ... [Mac silliness, for a change] ... > Later compilers on the Mac weren't so stupid, and now the fact that > this lets you discover text translation errors is just a pleasant > side-effect. > > Personally, I don't care about this property any more. Don't know about Macs (although I believe the Metrowerks libc can be still be *configured* to swap \r and \n there), but it caught a bug in Python in the 2.0 release cycle (where Python was opening .pyc files in text mode by mistake, but only on Windows). Well, actually, it didn't catch anything, it caused import from .pyc to fail silently. Having *some* specific gross thing fail every time is worth something. But the \r\n thingie can be pushed into the extended header instead. Here's an idea for "the new" magic number, assuming it must remain 4 bytes: byte 0: \217 will never change byte 1: 'P' will never change byte 2: high-order byte of version number byte 3: low-order byte of version number "Version number" is an unsigned 16-bit int, starting at 0 and incremented by 1 from time to time. 64K changes may even be enough to get us to Python 3000 . 
A separate text file should record the history of version number changes, associating each with the date, release and reason for change (the CVS log for import.c used to be good about recording the reason, but not anymore). Then we can keep a 4-byte magic number, Eric can have his invariant two-byte tag at the start, and it's still possible to compare "version numbers" easily for more than just equality (read the magic number as a "network standard" unsigned int, and it's a total ordering with earlier versions comparing less than later ones). The other nifty PNG sanity-checking tricks can also move into the extended header. all-obvious-to-the-most-casual-observer-ly y'rs - tim From guido at digicool.com Mon Feb 5 06:04:56 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 05 Feb 2001 00:04:56 -0500 Subject: [Python-Dev] creating __all__ in extension modules In-Reply-To: Your message of "Sat, 03 Feb 2001 17:03:20 CST." <14972.36408.800070.656541@beluga.mojam.com> References: <14970.60750.570192.452062@beluga.mojam.com> <14972.33928.540016.339352@cj42289-a.reston1.va.home.com> <14972.36408.800070.656541@beluga.mojam.com> Message-ID: <200102050504.AAA29344@cj20424-a.reston1.va.home.com> > Fred> I don't think adding __all__ to C modules makes sense. If you > Fred> want the equivalent for a module that doesn't have an __all__, you > Fred> can compute it easily enough. Adding it when it isn't useful is a > Fred> maintenance problem that can be avoided easily enough. > > I thought I answered this question already when Fredrik asked it. In os.py, > to build its __all__ list based upon the myriad different sets of symbols it > might have after it's fancy footwork importing from various os-dependent > modules, I think it's easiest to rely on those modules telling os what it > should export. So use dir(), or dir(posix), to find out what you've got. I'm strongly -1 to adding __all__ to extensions. Typically, *all* symbols exported by an extension are to be imported. We should never rely on __all__ existing -- we should just test for its existence and have a fallback, just like from...import * does. --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Mon Feb 5 06:12:44 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 5 Feb 2001 00:12:44 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: Message-ID: [Ping] > - If the purpose of the code-size field is to protect against > incomplete file transfers, would a hash be worth > considering here? I think it's more to make it easy to suck the code into a string in one gulp. Else the code-size field would have come after the code <0.9 wink>. From fredrik at effbot.org Mon Feb 5 07:35:02 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Mon, 5 Feb 2001 07:35:02 +0100 Subject: [Python-Dev] Identifying magic prefix on Python files? References: Message-ID: <009f01c08f3d$c7034070$e46940d5@hagrid> tim wrote: > Then you have /F's post, which purports to give a "fully backward > compatible" scheme, despite changing what probably appears to be > almost everyting. unlike earlier proposals, it doesn't break py_compile: MAGIC = imp.get_magic() fc = open(cfile, 'wb') fc.write('\0\0\0\0') wr_long(fc, timestamp) marshal.dump(codeobject, fc) fc.flush() fc.seek(0, 0) fc.write(MAGIC) fc.close() and it doesn't break imputil: f = open(file, 'rb') if f.read(4) == imp.get_magic(): t = struct.unpack(' Message-ID: [/F] > unlike earlier proposals, it doesn't break py_compile: > ... 
> and it doesn't break imputil: > ... I don't care about those, not because they're unimportant, but because they're in the distribution so we're responsible for shipping versions that work. They're "inside the box", where nothing is cheating. > and it doesn't break any user code that does similar things > (squeeze, pythonworks, and a dozen other tools I've written; > applications using local copies of imputils, etc) *Those* I care about. But it's impossible to know all the assumptions they make, given that almost nothing is guaranteed by the docs (the only meaningful definition I can think of for your "similar" is "other code that won't break"!). For all I know, ActivePython will die unless they can divide the magic number by 10000 then add 1995 to get the year <0.7 wink/0.3 frown>. Anyway, I'm on board with that, and already proposed a new 4-byte "magic number" format that should leave you and Eric happy. Me too. Probably not Guido. Barry is ignoring this. Jeremy wishes he had the time. Fred hopes we don't change the docs. Eric just wants to see progress. Ping is thinking of new syntax for a .pyc iterator . From pf at artcom-gmbh.de Mon Feb 5 11:30:20 2001 From: pf at artcom-gmbh.de (Peter Funk) Date: Mon, 5 Feb 2001 11:30:20 +0100 (MET) Subject: "backward compatibility" defined (was Re: [Python-Dev] Identifying magic prefix on Python files?) In-Reply-To: from Tim Peters at "Feb 4, 2001 10:41:42 pm" Message-ID: Hi, Tim Peters wrote: > This is contentious every time it comes up because of "backward > compatibility". The contentious part is that no two people come into it > with the same idea of what "backward compatible" means, exactly, and it > usually drags on for days until people realize that. In the meantime, > everyone thinks everyone else is an idiot . Thinking as a commercial software vendor: "Backward compatibility" means to me, that I can choose a stable version of Python (say 1.5.2, since this is what comes with the Linux Distros SuSE 6.2, 6.3, 6.4 and 7.0 or RedHat 6.2, 7.0 is still in use on 98% of our customer machines), generate .pyc-Files with this and than future stable versions of Python will be able to import and run these files, if I payed proper attention to possible incompatibilities like for example '[].append((one, two))'. Otherwise the vendor company has to fall back to one of the following "solutions": 1. provide a bunch of different versions of bytecode-Archives for each version of Python (a nightmare). or 2. has to distribute the Python sources of its application (which is impossible due to the companies policy) or 3. has to distribute an own version of Python (which is a similar nightmare due to incompatible shared library versions (Tcl/Tk 8.0.5, 8.1, ... 8.3) and the risk to break other Python and Tcl/Tk apps installed by the Linux Distro). or 4. has to port the stuff to another language platform (say Java?) not suffering from such binary incompatibility problems. (do u believe this?) So in the closed-source-world bytecode compatibility is a major issue. Regards, Peter -- Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260 office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen) From mal at lemburg.com Mon Feb 5 11:47:47 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Mon, 05 Feb 2001 11:47:47 +0100 Subject: [Python-Dev] insertdict slower? 
References: Message-ID: <3A7E84D3.4D111F0F@lemburg.com> Tim Peters wrote: > > [MAL] > > Looks like Jeremy's machine has a problem or this is the result > > of different compiler optimizations. > > Are you using an AMD chip? They have different cache behavior than the > Pentium I expect Jeremy is using. Different flavors of Pentium also have > different cache behavior. If the slowdown his box reports in insertdict is > real (which I don't know), cache effects are the most likely cause (given > that the code has not changed at all). Yes, I ran the tests on an AMK K6 233. Don't know about the internal cache size or their specific cache strategy, but since much of today's performance is achieved via cache strategies, this would be a possible explanation. > > On my machine using the same compiler and optimization settings > > I get the following figure for DictCreation (2.1a1 vs. 2.0): > > > > DictCreation: 1869.35 ms 12.46 us +8.77% > > > > That's below noise level (+/-10%). > > Jeremy saw "about 15%". So maybe that's just *loud* noise . > > noise-should-be-measured-in-decibels-ly y'rs - tim Hmm, that would introduce a logarithmic scale to these benchmarks ... perhaps not a bad idea :-) BTW, I've added a special test for string key and float keys to the benchmark. The results are surprising: string keys are 100% faster than float keys. Part of this is certainly due to the string key optimizations. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mal at lemburg.com Mon Feb 5 12:01:50 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Mon, 05 Feb 2001 12:01:50 +0100 Subject: [Python-Dev] Adding opt-in pymalloc + alpha3 References: <4C99842BC5F6D411A6A000805FBBB199051F5D@ge0057exch01.micro.lucent.com> Message-ID: <3A7E881E.64D64F08@lemburg.com> Vladimir Marangozov wrote: > > [Tim] > > > > Help us out a little more, briefly. The last time you > > mentioned obmalloc on > > Python-Dev was: > > > > Date: Fri, 8 Sep 2000 18:23:13 +0200 (CEST) > > Subject: [Python-Dev] 2.0 Optimization & speed > > > ... > > > The only reason I've postponed my obmalloc patch is that I > > > still haven't provided an interface which allows evaluating > > > it's impact on the mem size consumption. > > > > Still a problem in your eyes? > > Not really. I think obmalloc is a win w.r.t. both space & speed. > I was aiming at evaluating precisely how much we win with the help > of the profiler, then tune the allocator even more, but this is > OS dependant anyway and most people don't dig so deep. I think > they don't need to either, but it's our job to have a good > understanding of what's going on. > > In short, you can go for it, opt-in, without fear. > > Not opt-out, though, because of custom object's thread safety. > > Thread safety is a problem. Current extensions implement custom > object constructors & destructors safely, because they use (at the > end of the macro chain, today) the system allocator which is > thread safe. Switching to a thread unsafe allocator by default is > risky because this may inject bugs in existing working extensions. > Although the core objects won't be affected by this change because > of the interpreter lock protection, we have no provisions so far > for custom object's thread safety. Ok, everyone seems to agree that adding pymalloc to Python on an opt-in basis is a Good Thing, so let's do it ! 
Even though I don't think that adding opt-in code matters much w/r to stability of the rest of the code, I still think that we ought to insert a third alpha release to hammer a bit more on weak refs and nested scopes. These two additions are major new features in Python 2.1 which were added very late in the release cycle and haven't had much testing in the field. Thoughts ? -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mal at lemburg.com Mon Feb 5 12:08:41 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Mon, 05 Feb 2001 12:08:41 +0100 Subject: [Python-Dev] re: Sets BOF / for in dict References: <000301c08eb5$876baf20$770a0a0a@nevex.com> Message-ID: <3A7E89B9.B90D36DF@lemburg.com> Greg Wilson wrote: > > I've spoken with Barbara Fuller (IPC9 org.); the two openings for a > BOF on sets are breakfast or lunch on Wednesday the 7th. I'd prefer > breakfast (less chance of me missing my flight :-); is there anyone > who's interested in attending who *can't* make that time, but *could* > make lunch? Depends on the time frame of "breakfast" ;-) > And meanwhile: > > > Ka-Ping Yee: > > - the key:value syntax suggested by Guido (i like it quite a lot) > > Greg Wilson: > Did another quick poll; feeling here is that if > > for key:value in dict: > > works, then: > > for index:value in sequence: > > would also be expected to work. If the keys to the dictionary are (for > example) 2-element tuples, then: > > for (left, right):value in dict: > > would also be expected to work, just as: > > for ((left, right), value) in dict.items(): > > now works. > > Question: would the current proposal allow NumPy arrays (just as an > example) to support both: > > for index:value in numPyArray: > > where 'index' would get tuples like '(0, 3, 2)' for a 3D array, *and* > > for (i, j, k):value in numPyArray: > > If so, then yeah, it would tidy up a fair bit of my code... Two things: 1. the proposed syntax key:value does away with the easy to parse Python block statement syntax 2. why can't we use the old 'for x,y,z in something:' syntax and instead add iterators to the objects in question ? for key, value in object.iterator(): ... this doesn't only look better, it also allows having different iterators for different tasks (e.g. to iterate over values, key, items, row in a matrix, etc.) -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mal at lemburg.com Mon Feb 5 12:15:03 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Mon, 05 Feb 2001 12:15:03 +0100 Subject: [Python-Dev] Identifying magic prefix on Python files? References: <20010204132003.A16454@thyrsus.com> <009701c08edc$ca78fd50$e46940d5@hagrid> Message-ID: <3A7E8B37.E855DF81@lemburg.com> Fredrik Lundh wrote: > > eric wrote: > > > Python's .pyc files don't have a magic prefix that the file(1) utility > > can recognize. Would anyone object if I fixed this? A trivial pair of > > hacks to the compiler and interpreter would do it. Backward compatibility > > would be easily arranged. > > > > Embedding the Python version number in the prefix might enable some > > useful behavior down the road. 
> > Python 1.5.2 (#0, May 9 2000, 14:04:03) > >>> import imp > >>> imp.get_magic() > '\231N\015\012' > > Python 2.0 (#8, Jan 29 2001, 22:28:01) > >>> import imp > >>> imp.get_magic() > '\207\306\015\012' > >>> open("some_module.pyc", "rb").read(4) > '\207\306\015\012' > > Python 2.1a1 (#9, Jan 19 2001, 08:41:32) > >>> import imp > >>> imp.get_magic() > '\xdc\xea\r\n' > > if you want to change the magic, there are a couple > things to consider: > > 1) the header must consist of imp.get_magic() plus > a 4-byte timestamp, followed by a marshalled code > object > > 2) the magic should be four bytes. > > 3) the magic must be different for different bytecode > versions > > 4) the magic shouldn't survive text/binary conversions > on platforms which treat text files and binary files diff- > erently. Side note: the magic can also change due to command line options being used, e.g. -U will bump the magic number by 1. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From skip at mojam.com Mon Feb 5 13:34:14 2001 From: skip at mojam.com (Skip Montanaro) Date: Mon, 5 Feb 2001 06:34:14 -0600 (CST) Subject: [Python-Dev] ANNOUNCE: Python for AS/400. (fwd) Message-ID: <14974.40390.663230.906178@beluga.mojam.com> FYI. Note that the author's web page for the project identifies some ASCII/EBCDIC issues. Don't know if that would be of interest to this group or not... Skip -------------- next part -------------- An embedded message was scrubbed... From: Per Gummedal Subject: ANNOUNCE: Python for AS/400. Date: Mon, 5 Feb 2001 09:01:00 +0100 Size: 1206 URL: From tismer at tismer.com Mon Feb 5 15:13:18 2001 From: tismer at tismer.com (Christian Tismer) Date: Mon, 05 Feb 2001 15:13:18 +0100 Subject: [Python-Dev] The 2nd Korea Python Users Seminar References: <200101311626.LAA01799@cj20424-a.reston1.va.home.com> Message-ID: <3A7EB4FE.2791A6D1@tismer.com> Guido van Rossum wrote: > > Wow...! > > Way to go, Christian! I did so. Now I'm back, and I have to say it was phantastic. People in Korea are very nice, and the Python User Group consists of very enthusiastic Pythoneers. There were over 700 participants for the seminar, and they didn't have enough chairs for everybody. Changjune did a very well-done presentation for beginners. I was merged into it for special details, future plans, and the Q&A part. It was a lesson for me, to see how to present difficult stuff. Korea is a very prolific ground for Python. Only few outside of Korea know about this. I suggested to open up the group for non-local actions, and they are planning to add an international HTML tree to their website. Professor Lee just got the first print of "Learning Python" which he translated into Korean. We promised each other to exchange our translation. And so on, lots of new friendships. I will come back in autumn for the next seminar. Today I started a Hangul course, after Chanjune tought be the principles of the phonetic syllables. Nice language! ciao - chris.or.kr -- Christian Tismer :^) Mission Impossible 5oftware : Have a break! Take a ride on Python's Kaunstr. 26 : *Starship* http://starship.python.net 14163 Berlin : PGP key -> http://wwwkeys.pgp.net PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF where do you want to jump today? 
http://www.stackless.com

From alex_c at MIT.EDU Mon Feb 5 15:30:33 2001
From: alex_c at MIT.EDU (Alex Coventry)
Date: Mon, 5 Feb 2001 09:30:33 -0500
Subject: [Python-Dev] Alternative to os.system that takes a list of strings?
Message-ID: <200102051430.JAA17890@w20-575-36.mit.edu>

Hi.  I've found it convenient to use the function below to make system
calls, as sometimes the strings I need to pass as arguments confuse
the shell used in os.system.  I was wondering whether it's worth passing
this upstream.  The main problem with doing so is that I have no idea
how to implement it on Windows, as I can't use the os.fork and os.wait*
functions in that context.

Alex.

import os

def system(command, args, environ=os.environ):

    '''The 'args' variable is a sequence of strings that are to be
    passed as the arguments to the command 'command'.'''

    # Fork off a process to be replaced by the command to be executed
    # when 'execve' is run.
    pid = os.fork()
    if pid == 0:

        # This is the child process; replace it.
        os.execvpe(command, [command,] + args, environ)

    # In the parent process; wait for the child process to finish.
    return_pid, return_value = os.waitpid(pid, 0)
    assert return_pid == pid
    return return_value

if __name__ == '__main__':

    print system('/bin/cat', ['/etc/hosts.allow', '/etc/passwd'])

From guido at digicool.com Mon Feb 5 15:34:51 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 05 Feb 2001 09:34:51 -0500
Subject: [Python-Dev] Alternative to os.system that takes a list of strings?
In-Reply-To: Your message of "Mon, 05 Feb 2001 09:30:33 EST." <200102051430.JAA17890@w20-575-36.mit.edu>
References: <200102051430.JAA17890@w20-575-36.mit.edu>
Message-ID: <200102051434.JAA31491@cj20424-a.reston1.va.home.com>

> Hi.  I've found it convenient to use the function below to make system
> calls, as sometimes the strings I need to pass as arguments confuse
> the shell used in os.system.  I was wondering whether it's worth passing
> this upstream.  The main problem with doing so is that I have no idea
> how to implement it on Windows, as I can't use the os.fork and os.wait*
> functions in that context.
>
> Alex.

Hi Alex,

This functionality is already available through the os.spawn*() family
of functions.  This is supported on Unix and Windows.

BTW, what do you mean by "upstream"?

--Guido van Rossum (home page: http://www.python.org/~guido/)

> import os
>
> def system(command, args, environ=os.environ):
>
>     '''The 'args' variable is a sequence of strings that are to be
>     passed as the arguments to the command 'command'.'''
>
>     # Fork off a process to be replaced by the command to be executed
>     # when 'execve' is run.
>     pid = os.fork()
>     if pid == 0:
>
>         # This is the child process; replace it.
>         os.execvpe(command, [command,] + args, environ)
>
>     # In the parent process; wait for the child process to finish.
>     return_pid, return_value = os.waitpid(pid, 0)
>     assert return_pid == pid
>     return return_value
>
> if __name__ == '__main__':
>
>     print system('/bin/cat', ['/etc/hosts.allow', '/etc/passwd'])

From fredrik at pythonware.com Mon Feb 5 15:42:51 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Mon, 5 Feb 2001 15:42:51 +0100
Subject: [Python-Dev] Alternative to os.system that takes a list of strings?
References: <200102051430.JAA17890@w20-575-36.mit.edu> <200102051434.JAA31491@cj20424-a.reston1.va.home.com>
Message-ID: <01d001c08f81$ec4d83b0$0900a8c0@SPIFF>

guido wrote:
> BTW, what do you mean by "upstream"?
looks like freebsd lingo: the original maintainer of a piece of software (outside the bsd universe). Cheers /F From mwh21 at cam.ac.uk Mon Feb 5 15:54:30 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 05 Feb 2001 14:54:30 +0000 Subject: [Python-Dev] Re: "backward compatibility" defined In-Reply-To: pf@artcom-gmbh.de's message of "Mon, 5 Feb 2001 11:30:20 +0100 (MET)" References: Message-ID: pf at artcom-gmbh.de (Peter Funk) writes: > Hi, > > Tim Peters wrote: > > This is contentious every time it comes up because of "backward > > compatibility". The contentious part is that no two people come into it > > with the same idea of what "backward compatible" means, exactly, and it > > usually drags on for days until people realize that. In the meantime, > > everyone thinks everyone else is an idiot . > > Thinking as a commercial software vendor: "Backward compatibility" > means to me, that I can choose a stable version of Python (say 1.5.2, > since this is what comes with the Linux Distros SuSE 6.2, 6.3, 6.4 > and 7.0 or RedHat 6.2, 7.0 is still in use on 98% of our customer > machines), generate .pyc-Files with this and than future stable > versions of Python will be able to import and run these files, if I > payed proper attention to possible incompatibilities like for > example '[].append((one, two))'. Really? This isn't the case today, is it? The demise of UNPACK_LIST/UNPACK_TUPLE springs to mind. Changes in IMPORT_* opcodes/code-generation probably bite too. I can certainly remember occasions in the past few months where I'be updated from CVS, rebuilt and forgotten to blow the .pyc files away and got core dumps as a result. > Otherwise the vendor company has to fall back to one of the following > "solutions": > 1. provide a bunch of different versions of bytecode-Archives for each > version of Python (a nightmare). Oh, hardly. I can see that making sure that people get the right versions might be a drag, but not a severe one. You could always distribute *all* the relavent bytecodes - they're not that big. > or 2. has to distribute the Python sources of its application (which is > impossible due to the companies policy) decompyle? This isn't going to protect you against anyone with a modicum of determination. > or 3. has to distribute an own version of Python (which is a similar > nightmare due to incompatible shared library versions (Tcl/Tk > 8.0.5, 8.1, ... 8.3) and the risk to break other Python and > Tcl/Tk apps installed by the Linux Distro). I don't believe this can be unsurmountable. Build a static executable. > So in the closed-source-world bytecode compatibility is a major issue. Well, they seem to cope without it at the moment... Cheers, M. -- The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offence. -- Edsger W. Dijkstra, SIGPLAN Notices, Volume 17, Number 5 From alex_c at MIT.EDU Mon Feb 5 15:57:03 2001 From: alex_c at MIT.EDU (Alex Coventry) Date: Mon, 5 Feb 2001 09:57:03 -0500 Subject: [Python-Dev] Alternative to os.system that takes a list of strings? In-Reply-To: <200102051434.JAA31491@cj20424-a.reston1.va.home.com> (message from Guido van Rossum on Mon, 05 Feb 2001 09:34:51 -0500) References: <200102051430.JAA17890@w20-575-36.mit.edu> <200102051434.JAA31491@cj20424-a.reston1.va.home.com> Message-ID: <200102051457.JAA17949@w20-575-36.mit.edu> > This functionality is alrady available through the os.spawn*() family > of functions. This is supported on Unix and Windows. Hi, Guido. 
The only problem with os.spawn* is that it forks off a new process, and I don't know how to wait for the new process to finish. > BTW, what do you mean by "upstream"? I thought it might be a useful thing to include in the python distribution. Alex. From guido at digicool.com Mon Feb 5 15:55:51 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 05 Feb 2001 09:55:51 -0500 Subject: [Python-Dev] re: Sets BOF / for in dict In-Reply-To: Your message of "Mon, 05 Feb 2001 12:08:41 +0100." <3A7E89B9.B90D36DF@lemburg.com> References: <000301c08eb5$876baf20$770a0a0a@nevex.com> <3A7E89B9.B90D36DF@lemburg.com> Message-ID: <200102051455.JAA31737@cj20424-a.reston1.va.home.com> > Greg Wilson wrote: > > > > I've spoken with Barbara Fuller (IPC9 org.); the two openings for a > > BOF on sets are breakfast or lunch on Wednesday the 7th. I'd prefer > > breakfast (less chance of me missing my flight :-); is there anyone > > who's interested in attending who *can't* make that time, but *could* > > make lunch? [MAL] > Depends on the time frame of "breakfast" ;-) Does this mean you'll be at the conference? That would be excellent! > Two things: > > 1. the proposed syntax key:value does away with the > easy to parse Python block statement syntax > > 2. why can't we use the old 'for x,y,z in something:' syntax > and instead add iterators to the objects in question ? > > for key, value in object.iterator(): > ... > > this doesn't only look better, it also allows having different > iterators for different tasks (e.g. to iterate over values, key, > items, row in a matrix, etc.) This should become the PEP. I propose that we try to keep this discussion off python-dev, and that the PEP author(s?) set up a separate discussion list (e.g. at egroups) to keep the PEP feedback coming. I promise I'll subscribe to such a list. --Guido van Rossum (home page: http://www.python.org/~guido/) From esr at thyrsus.com Mon Feb 5 16:01:28 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Mon, 5 Feb 2001 10:01:28 -0500 Subject: [Python-Dev] Alternative to os.system that takes a list of strings? In-Reply-To: <01d001c08f81$ec4d83b0$0900a8c0@SPIFF>; from fredrik@pythonware.com on Mon, Feb 05, 2001 at 03:42:51PM +0100 References: <200102051430.JAA17890@w20-575-36.mit.edu> <200102051434.JAA31491@cj20424-a.reston1.va.home.com> <01d001c08f81$ec4d83b0$0900a8c0@SPIFF> Message-ID: <20010205100128.A23746@thyrsus.com> Fredrik Lundh : > guido wrote: > > BTW, what do you mean by "upstream"? > > looks like freebsd lingo: the original maintainer of a > piece of software (outside the bsd universe). Debian lingo, too. Hmm...maybe this needs to go into the Jargon File. Yes, it does. I just added: @hd{upstream} @g{adj.} @p{} [common] Towards the original author(s) or maintainer(s) of a project. Used in connection with software that is distributed both in its original source form and in derived, adapted versions through a distribution like Debian Linux or one of the BSD ports that has component maintainers for each of their parts. When a component maintainer receives a bug report or patch, he may choose to retain the patch as a porting tweak to the distribution's derivative of the project, or to pass it upstream to the project's maintainer. The antonym @d{downstream} is rare. @comment ESR (seen on the Debian and Python lists) -- Eric S. 
Raymond

You [should] not examine legislation in the light of the benefits it will
convey if properly administered, but in the light of the wrongs it would do
and the harm it would cause if improperly administered
        -- Lyndon Johnson, former President of the U.S.

From nas at arctrix.com Mon Feb 5 16:02:22 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Mon, 5 Feb 2001 07:02:22 -0800
Subject: [Python-Dev] Type/class differences (Re: Sets: elt in dict, lst.include)
In-Reply-To: <200102050447.XAA28915@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Sun, Feb 04, 2001 at 11:47:26PM -0500
References: <200102012245.LAA03402@s454.cosc.canterbury.ac.nz> <200102050447.XAA28915@cj20424-a.reston1.va.home.com>
Message-ID: <20010205070222.A5287@glacier.fnational.com>

On Sun, Feb 04, 2001 at 11:47:26PM -0500, Guido van Rossum wrote:
> Yes, I've often thought that we should be able to heal the split for
> 95% by using a few well-aimed tricks like this.  Later...

I was playing around this weekend with the class/type problem.  Without
too much effort I had an interpreter that could do things like this:

>>> class MyInt(type(1)):
...     pass
...
>>> i = MyInt(10)
>>> i
10
>>> i + 1
11

The major changes were allowing PyClassObject to subclass types (ie.
changing PyClass_Check(op) to (PyClass_Check(op) || PyType_Check(op))),
writing a _PyType_Lookup function, and making class_lookup use it.

The experiment has convinced me that we can allow subclasses of types
quite easily without major changes.  It has also given me some ideas on
"the right way" to solve this problem.  The rough scheme I came up with
yesterday goes like this:

PyObject {
    int ob_refcnt;
    PyClass ob_class;
}

PyClass {
    PyObject_HEAD
    char *cl_name;
    getattrfunc cl_getattr;
    PyMethodTable *cl_methods;
}

PyMethodTable {
    binaryfunc nb_add;
    binaryfunc nb_sub;
    ...
}

When calling a method on an object the interpreter would first check
for a direct method and if that does not exist then call cl_getattr.
Obviously there are still a few details to be worked out. :-)

  Neil

From guido at digicool.com Mon Feb 5 16:04:07 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 05 Feb 2001 10:04:07 -0500
Subject: "backward compatibility" defined (was Re: [Python-Dev] Identifying magic prefix on Python files?)
In-Reply-To: Your message of "Mon, 05 Feb 2001 11:30:20 +0100."
References:
Message-ID: <200102051504.KAA31805@cj20424-a.reston1.va.home.com>

> Thinking as a commercial software vendor: "Backward compatibility"
> means to me, that I can choose a stable version of Python (say 1.5.2,
> since this is what comes with the Linux Distros SuSE 6.2, 6.3, 6.4
> and 7.0 or RedHat 6.2, 7.0 is still in use on 98% of our customer
> machines), generate .pyc-Files with this and than future stable
> versions of Python will be able to import and run these files, if I
> payed proper attention to possible incompatibilities like for
> example '[].append((one, two))'.

Alas, for technical reasons, bytecode generated by different Python
versions is *not* binary compatible.

> Otherwise the vendor company has to fall back to one of the following
> "solutions":
> 1. provide a bunch of different versions of bytecode-Archives for each
> version of Python (a nightmare).
> or 2. has to distribute the Python sources of its application (which is
> impossible due to the companies policy)

Remember that Python is an Open Source language.  I assume that you
are talking about your company.  So I understand that this company
doesn't underwrite the Open Source principles.
That's fine, and I am all for different business models. But as your company is not paying for Python, and apparently not willing to sharing its own source code, I don't feel responsible to fix this inconvenience for them. Now, if you were to contribute a backwards compatibility patch that allowed e.g. importing bytecode generated by Python 1.5.2 into Python 2.1, I would gladly incorporate that! My priorities are often affected by what people are willing to contribute... --Guido van Rossum (home page: http://www.python.org/~guido/) From nas at arctrix.com Mon Feb 5 16:28:18 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Mon, 5 Feb 2001 07:28:18 -0800 Subject: [Python-Dev] insertdict slower? In-Reply-To: <3A7E84D3.4D111F0F@lemburg.com>; from mal@lemburg.com on Mon, Feb 05, 2001 at 11:47:47AM +0100 References: <3A7E84D3.4D111F0F@lemburg.com> Message-ID: <20010205072818.B5287@glacier.fnational.com> On Mon, Feb 05, 2001 at 11:47:47AM +0100, M.-A. Lemburg wrote: > Yes, I ran the tests on an AMK K6 233. Our model is a bit older. Neil -- import binascii; print binascii.unhexlify('4a' '75737420616e6f7468657220507974686f6e20626f74') From alex_c at MIT.EDU Mon Feb 5 16:36:29 2001 From: alex_c at MIT.EDU (Alex Coventry) Date: Mon, 5 Feb 2001 10:36:29 -0500 Subject: [Python-Dev] Alternative to os.system that takes a list of strings? Message-ID: <200102051536.KAA18060@w20-575-36.mit.edu> > This functionality is alrady available through the os.spawn*() family > of functions. This is supported on Unix and Windows. Oh, I see, I can use the P_WAIT option. Sorry to bother you all, then. Alex. From gvwilson at ca.baltimore.com Mon Feb 5 16:42:50 2001 From: gvwilson at ca.baltimore.com (Greg Wilson) Date: Mon, 5 Feb 2001 10:42:50 -0500 Subject: [Python-Dev] re: BOFs / sets / iteration Message-ID: <000001c08f8a$4c715b10$770a0a0a@nevex.com> Hi, folks. Given feedback so far, I'd like to hold the BOF on sets at lunch on Wednesday; I'll ask Barbara Fuller to arrange a room, and send out notice. I'd also like to know if there's enough interest in iterators to arrange a BOF for Tuesday lunch (the only other slot that's available right now). Please let me know; if I get more than half a dozen responses, I'll ask Barbara to set that up as well. Thanks Greg From akuchlin at cnri.reston.va.us Mon Feb 5 16:48:04 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Mon, 5 Feb 2001 10:48:04 -0500 Subject: [Python-Dev] insertdict slower? In-Reply-To: <20010205072818.B5287@glacier.fnational.com>; from nas@arctrix.com on Mon, Feb 05, 2001 at 07:28:18AM -0800 References: <3A7E84D3.4D111F0F@lemburg.com> <20010205072818.B5287@glacier.fnational.com> Message-ID: <20010205104804.D733@thrak.cnri.reston.va.us> On Mon, Feb 05, 2001 at 07:28:18AM -0800, Neil Schemenauer wrote: >On Mon, Feb 05, 2001 at 11:47:47AM +0100, M.-A. Lemburg wrote: >> Yes, I ran the tests on an AMK K6 233. Hey, give my computer back! --amk From guido at digicool.com Mon Feb 5 16:46:44 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 05 Feb 2001 10:46:44 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: Your message of "Sun, 04 Feb 2001 23:58:28 EST." References: Message-ID: <200102051546.KAA32113@cj20424-a.reston1.va.home.com> > Don't know about Macs (although I believe the Metrowerks libc can be still > be *configured* to swap \r and \n there), but it caught a bug in Python in > the 2.0 release cycle (where Python was opening .pyc files in text mode by > mistake, but only on Windows). 
Well, actually, it didn't catch anything, it > caused import from .pyc to fail silently. Having *some* specific gross > thing fail every time is worth something. Sounds to me that we'd caught this sooner without the \r\n gimmic. :-) > But the \r\n thingie can be pushed into the extended header instead. Here's > an idea for "the new" magic number, assuming it must remain 4 bytes: > > byte 0: \217 will never change > byte 1: 'P' will never change > byte 2: high-order byte of version number > byte 3: low-order byte of version number > > "Version number" is an unsigned 16-bit int, starting at 0 and incremented by > 1 from time to time. 64K changes may even be enough to get us to Python > 3000 . A separate text file should record the history of version > number changes, associating each with the date, release and reason for > change (the CVS log for import.c used to be good about recording the reason, > but not anymore). > > Then we can keep a 4-byte magic number, Eric can have his invariant two-byte > tag at the start, and it's still possible to compare "version numbers" > easily for more than just equality (read the magic number as a "network > standard" unsigned int, and it's a total ordering with earlier versions > comparing less than later ones). The other nifty PNG sanity-checking tricks > can also move into the extended header. +1 from me. I'm +0 on adding more magic to the marshalled code. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Mon Feb 5 16:55:39 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 05 Feb 2001 10:55:39 -0500 Subject: [Python-Dev] Alternative to os.system that takes a list of strings? In-Reply-To: Your message of "Mon, 05 Feb 2001 09:57:03 EST." <200102051457.JAA17949@w20-575-36.mit.edu> References: <200102051430.JAA17890@w20-575-36.mit.edu> <200102051434.JAA31491@cj20424-a.reston1.va.home.com> <200102051457.JAA17949@w20-575-36.mit.edu> Message-ID: <200102051555.KAA32193@cj20424-a.reston1.va.home.com> > > This functionality is alrady available through the os.spawn*() family > > of functions. This is supported on Unix and Windows. > > Hi, Guido. The only problem with os.spawn* is that it forks off a new > process, and I don't know how to wait for the new process to finish. Use os.P_WAIT for the mode argument. > > BTW, what do you mean by "upstream"? > > I thought it might be a useful thing to include in the python > distribution. Which is hardly "upstream" from python-dev -- this is where it's decided! :-) --Guido van Rossum (home page: http://www.python.org/~guido/) From esr at thyrsus.com Mon Feb 5 17:10:33 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Mon, 5 Feb 2001 11:10:33 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: <200102051546.KAA32113@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Feb 05, 2001 at 10:46:44AM -0500 References: <200102051546.KAA32113@cj20424-a.reston1.va.home.com> Message-ID: <20010205111033.A24383@thyrsus.com> Guido van Rossum : > > But the \r\n thingie can be pushed into the extended header > > instead. Here's an idea for "the new" magic number, assuming it > > must remain 4 bytes: > > > > byte 0: \217 will never change > > byte 1: 'P' will never change > > byte 2: high-order byte of version number > > byte 3: low-order byte of version number > > > > "Version number" is an unsigned 16-bit int, starting at 0 and > > incremented by 1 from time to time. 64K changes may even be > > enough to get us to Python 3000 . 
A separate text file > > should record the history of version number changes, associating > > each with the date, release and reason for change (the CVS log for > > import.c used to be good about recording the reason, but not > > anymore). > > > > Then we can keep a 4-byte magic number, Eric can have his > > invariant two-byte tag at the start, and it's still possible to > > compare "version numbers" easily for more than just equality (read > > the magic number as a "network standard" unsigned int, and it's a > > total ordering with earlier versions comparing less than later > > ones). The other nifty PNG sanity-checking tricks can also move > > into the extended header. > > +1 from me. I'm +0 on adding more magic to the marshalled code. Likewise from me -- that is, +1 on Tim's proposed format and +0 on stuff like hashes and embedded source pathnames and stuff. As Tim observed earlier, I just want to see some progress made; I'm not picky about the low-level details on this one, though I'll be happy with the invariant tag and the PNG-style sanity check. -- Eric S. Raymond "Extremism in the defense of liberty is no vice; moderation in the pursuit of justice is no virtue." -- Barry Goldwater (actually written by Karl Hess) From mal at lemburg.com Mon Feb 5 17:58:21 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Mon, 05 Feb 2001 17:58:21 +0100 Subject: [Python-Dev] insertdict slower? References: <3A7E84D3.4D111F0F@lemburg.com> <20010205072818.B5287@glacier.fnational.com> <20010205104804.D733@thrak.cnri.reston.va.us> Message-ID: <3A7EDBAD.95BCA583@lemburg.com> Andrew Kuchling wrote: > > On Mon, Feb 05, 2001 at 07:28:18AM -0800, Neil Schemenauer wrote: > >On Mon, Feb 05, 2001 at 11:47:47AM +0100, M.-A. Lemburg wrote: > >> Yes, I ran the tests on an AMK K6 233. > > Hey, give my computer back! :-) -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From Jason.Tishler at dothill.com Mon Feb 5 18:27:21 2001 From: Jason.Tishler at dothill.com (Jason Tishler) Date: Mon, 5 Feb 2001 12:27:21 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: ; from tim.one@home.com on Sun, Feb 04, 2001 at 03:13:29AM -0500 References: <14972.10746.34425.26722@anthem.wooz.org> Message-ID: <20010205122721.J812@dothill.com> On Sun, Feb 04, 2001 at 03:13:29AM -0500, Tim Peters wrote: > [Barry A. Warsaw] > > So, let's tease out what the Right solution would be, and then > > see how close or if we can get there for 2.1. I've no clue what > > behavior Mac and Windows users would /like/ to see -- what would > > be most natural for them? On 2001-Jan-11 07:56, Jason Tishler wrote: > I have created a (hacky) patch, that solves this problem for both Cygwin and > Win32. I can redo it so that it only affects Cygwin and leaves the Win32 > functionality alone. I would like to upload it for discussion... Part of my motivation when submitting patch 103154, was to attempt to elicit the "right" solution. > I don't understand what Cygwin does; here from a Cygwin bash shell session: > > ... > > So best I can tell, they're like Steven: working with a case-insensitive > filesystem but trying to make Python insist that it's not, and what basic > tools there do about case is seemingly random (wc doesn't care, shell > expansion does, touch doesn't, rm doesn't (not shown) -- maybe it's just > shell expansion that's trying to pretend this is Unix? 
Sorry, but I don't agree with your assessment that Cygwin's treatment of case is "seemingly random." IMO, Cygwin behaves appropriately regarding case for a case-insensitive, but case-preserving file system. The only "inconsistency" that you found is just one of bash's idiosyncrasies -- how it handles glob-ing. Note that one can use "shopt -s nocaseglob" to get case-insensitive glob-ing with bash on Cygwin *and* UNIX. > So I view the current rules as inexplicable: they're neither > platform-independent nor consistent with the platform's natural behavior > (unless that platform has case-sensitive filesystem semantics). Agreed. > Bottom line: for the purpose of import-from-file (and except for > case-destroying filesystems, where PYTHONCASEOK is the only hope), we *can* > make case-insensitive case-preserving filesystems "act like" they were > case-sensitive with modest effort. I feel that the above behavior would be best for Cygwin Python. I hope that Steven's patch (i.e., 103495) or a modified version of it remains as part of Python CVS. > We can't do the reverse. That would > lead to explainable rules and maximal portability. Sorry but I don't grok the above. Tim, can you try again? BTW, importing of builtin modules is case-sensitive even on platforms such as Windows. Wouldn't it be more consistent if all imports regardless of type were case-sensitive? Thanks, Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler at dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From akuchlin at mems-exchange.org Mon Feb 5 18:32:31 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Mon, 05 Feb 2001 12:32:31 -0500 Subject: [Python-Dev] PEP announcements, and summaries Message-ID: One thing about the reaction to the 2.1 alphas is that many people seem *surprised* by some of the changes, even though PEPs have been written, discussed, and mentioned in python-dev summaries. Maybe the PEPs and their status need to be given higher visibility; I'd suggest sending a brief note of status changes (new draft PEPs, acceptance, rejection) to comp.lang.python.announce. Also, I'm wondering if it's worth continuing the python-dev summaries, because, while they get a bunch of hits on news sites such as Linux Today and may be good PR, I'm not sure that they actually help Python development. They're supposed to let people offer timely comments on python-dev discussions while it's still early enough to do some good, but that doesn't seem to happen; I don't see python-dev postings that began with something like "The last summary mentioned you were talking about X. I use X a lot, and here's what I think: ...". Is anything much lost if the summaries cease? --amk From esr at thyrsus.com Mon Feb 5 18:56:59 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Mon, 5 Feb 2001 12:56:59 -0500 Subject: [Python-Dev] PEP announcements, and summaries In-Reply-To: ; from akuchlin@mems-exchange.org on Mon, Feb 05, 2001 at 12:32:31PM -0500 References: Message-ID: <20010205125659.B25297@thyrsus.com> Andrew Kuchling : > Is anything much lost if the summaries cease? I think not, but others may differ. -- Eric S. Raymond Conservatism is the blind and fear-filled worship of dead radicals. From fredrik at effbot.org Mon Feb 5 19:10:15 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Mon, 5 Feb 2001 19:10:15 +0100 Subject: [Python-Dev] Case sensitive import. 
References: <14972.10746.34425.26722@anthem.wooz.org> <20010205122721.J812@dothill.com> Message-ID: <028701c08f9e$e65886e0$e46940d5@hagrid> Jason wrote: > BTW, importing of builtin modules is case-sensitive even on platforms > such as Windows. Wouldn't it be more consistent if all imports > regardless of type were case-sensitive? umm. what kind of imports are not case-sensitive today? >>> import strOP # builtin Traceback (innermost last): File " ", line 1, in ? ImportError: No module named strOP >>> import stringIO # python Traceback (innermost last): File " ", line 1, in ? NameError: Case mismatch for module name stringIO (filename C:\py152\lib\StringIO.py) >>> import _Tkinter # binary extension Traceback (innermost last): File " ", line 1, in ? NameError: Case mismatch for module name _Tkinter (filename C:\py152\_tkinter.pyd) Cheers /F From pedroni at inf.ethz.ch Mon Feb 5 19:20:33 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Mon, 5 Feb 2001 19:20:33 +0100 (MET) Subject: [Python-Dev] PEP announcements, and summaries Message-ID: <200102051820.TAA20238@core.inf.ethz.ch> Hi. > One thing about the reaction to the 2.1 alphas is that many people > seem *surprised* by some of the changes, even though PEPs have been > written, discussed, and mentioned in python-dev summaries. Maybe the > PEPs and their status need to be given higher visibility; I'd suggest > sending a brief note of status changes (new draft PEPs, acceptance, > rejection) to comp.lang.python.announce. > > Also, I'm wondering if it's worth continuing the python-dev summaries, > because, while they get a bunch of hits on news sites such as Linux > Today and may be good PR, I'm not sure that they actually help Python > development. They're supposed to let people offer timely comments on > python-dev discussions while it's still early enough to do some good, > but that doesn't seem to happen; I don't see python-dev postings that > began with something like "The last summary mentioned you were talking > about X. I use X a lot, and here's what I think: ...". Is anything > much lost if the summaries cease? > Before joining python-dev, I always read the summaries very carefully and I found them useful and informing, on the other hand my situation of being a jython devel was a bit special. Some opinions from a somehow external viewpoint: - more emphasis on the PEPs and their status changes could help. - people should be able to trust PEP contents, they should really describe what is going happen. Two examples: - what was described in weak-ref PEP was changed just before realesing the alpha that contained weak-ref support, because it was discovered that the proposal could not be implemented in jython. - nested scope PEP: the PEP indicated as most likely impl. way flat closures, and that'a what is in a2. from _ import * was not indicated as a big issue. Now that seems such an issue, and maybe chained closures are needed or some other gymnic with a performance impact. Now decisions and changes have to be made under time constraints and it seems not clear what the outcome will be, and wheter it will have the required long-term quality. regards, Samuele Pedroni. From mal at lemburg.com Mon Feb 5 19:32:00 2001 From: mal at lemburg.com (M.-A. 
Lemburg) Date: Mon, 05 Feb 2001 19:32:00 +0100 Subject: [Python-Dev] PEP announcements, and summaries References: Message-ID: <3A7EF1A0.EDA4AD24@lemburg.com> Andrew Kuchling wrote: > > One thing about the reaction to the 2.1 alphas is that many people > seem *surprised* by some of the changes, even though PEPs have been > written, discussed, and mentioned in python-dev summaries. Maybe the > PEPs and their status need to be given higher visibility; I'd suggest > sending a brief note of status changes (new draft PEPs, acceptance, > rejection) to comp.lang.python.announce. > > Also, I'm wondering if it's worth continuing the python-dev summaries, > because, while they get a bunch of hits on news sites such as Linux > Today and may be good PR, I'm not sure that they actually help Python > development. They're supposed to let people offer timely comments on > python-dev discussions while it's still early enough to do some good, > but that doesn't seem to happen; I don't see python-dev postings that > began with something like "The last summary mentioned you were talking > about X. I use X a lot, and here's what I think: ...". Is anything > much lost if the summaries cease? I think that the Python community would lose some touch with the Python development process and there are currently no other clearly visible resources which a Python user can link to unless he or she happens to know of the existence of python-dev. Some things which could be done to improve this: * add a link to the python-dev archive directly from www.python.org * summarize the development process somewhere on python.org and add a link "development" to the page titles * fix the "community" link to point to a page which provides links to all the community tools available for Python on the web, e.g. Starship, Parnassus, SF-tools, FAQTS, etc. * add a section "devtools" which points programmers to existing Python programming tools such as IDLE, PythonWare, Wing IDE, BlackAdder, etc. And while I'm at it :) * add a section "applications" to produce some more awareness that Python is being used in real life applications * some kind of self-maintained projects page would also be a nice thing to have, e.g. a Wiki-style reference to projects seeking volunteers to help; this could also be referenced in the community section -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From esr at thyrsus.com Mon Feb 5 19:42:30 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Mon, 5 Feb 2001 13:42:30 -0500 Subject: [Python-Dev] Heads up on library reorganization Message-ID: <20010205134230.A25426@thyrsus.com> At LWE, Guido and I brainstormed a thorough reorganization of the Python library together. There will be a PEP coming out of this; actually two PEPs. One will reorganize the library namespace and set up procedures for forward migration and future changes. Another (not yet begun) will describe policy criteria for what goes into the library in the future. The draft on reorganization is still kind of raw, but I'll share it with anyone that has a particular interest in this area. We have a new library-hierarchy map already, but I'm deliberately not posting that publicly yet in order to avoid starting a huge debate about the details before Guido and I actually have a well-worked-out proposal to present. Guido, of course, is still up to his ears in post-LWE mail and work cleanup. 
Barry, this is why I have not submitted the ternary-select PEP yet. The library reorg is more important and should get done first. -- Eric S. Raymond Everything you know is wrong. But some of it is a useful first approximation. From guido at digicool.com Mon Feb 5 19:37:39 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 05 Feb 2001 13:37:39 -0500 Subject: [Python-Dev] Type/class differences (Re: Sets: elt in dict, lst.include) In-Reply-To: Your message of "Mon, 05 Feb 2001 07:02:22 PST." <20010205070222.A5287@glacier.fnational.com> References: <200102012245.LAA03402@s454.cosc.canterbury.ac.nz> <200102050447.XAA28915@cj20424-a.reston1.va.home.com> <20010205070222.A5287@glacier.fnational.com> Message-ID: <200102051837.NAA00833@cj20424-a.reston1.va.home.com> > On Sun, Feb 04, 2001 at 11:47:26PM -0500, Guido van Rossum wrote: > > Yes, I've often thought that we should be able to heal the split for > > 95% by using a few well-aimed tricks like this. Later... > > I was playing around this weekend with the class/type problem. > Without too much effort I had an interpreter that could to things > like this: > > >>> class MyInt(type(1)): > ... pass > ... > >>> i = MyInt(10) > >>> i > 10 > >>> i + 1 > 11 Now, can you do things like this: >>> from types import * >>> class MyInt(IntType): # add a method def add1(self): return self+1 >>> i = MyInt(10) >>> i.add1() 11 >>> and like this: >>> class MyInt(IntType): # override division def __div__(self, other): return float(self) / other def __rdiv__(self, other): return other / float(self) >>> i = MyInt(10) >>> i/3 0.33333333333333331 >>> I'm not asking for adding new instance variables (slots), but that of course would be the next step of difficulty up. > The major changes were allowing PyClassObject to subclass types > (ie. changing PyClass_Check(op) to (PyClass_Check(op) || > PyType_Check(op))), writing a _PyType_Lookup function, and making > class_lookup use it. Yeah, but that's still nasty. We should strive for unifying PyClass and PyType instead of having both around. > The experiment has convinced me that we can allow subclasses of > types quite easily without major changes. It has also given me > some ideas on "the right way" to solve this problem. The rough > scheme I can up yesterday goes like this: > p> PyObject { > int ob_refcnt; > PyClass ob_class; (plus type-specific fields I suppose) > } > > PyClass { > PyObject_HEAD > char *cl_name; > getattrfunc cl_getattr; > PyMethodTable *cl_methods; > } > > PyMethodTable { > binaryfunc nb_add; > binaryfunc nb_sub; > ... > } > > When calling a method on a object the interpreter would first > check for a direct method and if that does not exist then call > cl_getattr. Obviously there are still a few details to be worked > out. :-) Yeah... Like you should be able to ask for ListType.append and get an unbound built-in method back, which can be applied to a list: ListType.append([], 1) === [].append(1) And ditto for operators: IntType.__add__(1, 2) === 1+2 And a C API like PyNumber_Add(x, y) should be equivalent to using x.__add__(y), too. --Guido van Rossum (home page: http://www.python.org/~guido/) From mal at lemburg.com Mon Feb 5 19:45:10 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Mon, 05 Feb 2001 19:45:10 +0100 Subject: [Python-Dev] re: BOFs / sets / iteration References: <000001c08f8a$4c715b10$770a0a0a@nevex.com> Message-ID: <3A7EF4B6.9BBD45EC@lemburg.com> Greg Wilson wrote: > > Hi, folks. 
Given feedback so far, I'd like to hold the > BOF on sets at lunch on Wednesday; I'll ask Barbara Fuller > to arrange a room, and send out notice. Great. > I'd also like to know if there's enough interest in iterators > to arrange a BOF for Tuesday lunch (the only other slot that's > available right now). Please let me know; if I get more than > half a dozen responses, I'll ask Barbara to set that up as well. That's one from me :) -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From nas at arctrix.com Mon Feb 5 20:04:22 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Mon, 5 Feb 2001 11:04:22 -0800 Subject: [Python-Dev] Type/class differences (Re: Sets: elt in dict, lst.include) In-Reply-To: <200102051837.NAA00833@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Feb 05, 2001 at 01:37:39PM -0500 References: <200102012245.LAA03402@s454.cosc.canterbury.ac.nz> <200102050447.XAA28915@cj20424-a.reston1.va.home.com> <20010205070222.A5287@glacier.fnational.com> <200102051837.NAA00833@cj20424-a.reston1.va.home.com> Message-ID: <20010205110422.A5893@glacier.fnational.com> On Mon, Feb 05, 2001 at 01:37:39PM -0500, Guido van Rossum wrote: > Now, can you do things like this: [example cut] No, it would have to be written like this: >>> from types import * >>> class MyInt(IntType): # add a method def add1(self): return self.value+1 >>> i = MyInt(10) >>> i.add1() 11 >>> Note the value attribute. The IntType.__init__ method is basicly: def __init__(self, value): self.value = value > > PyObject { > > int ob_refcnt; > > PyClass ob_class; > > (plus type-specific fields I suppose) Yes, the instance attributes. In this scheme all objects are instances of some class. > Yeah... Like you should be able to ask for ListType.append and get an > unbound built-in method back, which can be applied to a list: > > ListType.append([], 1) === [].append(1) Right. My changes on the weekend where quite close to making this work. Neil From ping at lfw.org Mon Feb 5 20:04:16 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Mon, 5 Feb 2001 11:04:16 -0800 (PST) Subject: [Python-Dev] re: Sets BOF / for in dict In-Reply-To: <000301c08eb5$876baf20$770a0a0a@nevex.com> Message-ID: On Sun, 4 Feb 2001, Greg Wilson wrote: > Question: would the current proposal allow NumPy arrays (just as an > example) to support both: > > for index:value in numPyArray: > > where 'index' would get tuples like '(0, 3, 2)' for a 3D array, *and* > > for (i, j, k):value in numPyArray: Naturally. Anything that could normally be bound on the left side of an assignment (or current for loop) could go in the spot on either side of the colon. -- ?!ng From akuchlin at cnri.reston.va.us Mon Feb 5 20:11:39 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Mon, 5 Feb 2001 14:11:39 -0500 Subject: [Python-Dev] PEP announcements, and summaries In-Reply-To: <3A7EF1A0.EDA4AD24@lemburg.com>; from mal@lemburg.com on Mon, Feb 05, 2001 at 07:32:00PM +0100 References: <3A7EF1A0.EDA4AD24@lemburg.com> Message-ID: <20010205141139.K733@thrak.cnri.reston.va.us> On Mon, Feb 05, 2001 at 07:32:00PM +0100, M.-A. 
Lemburg wrote: >Some things which could be done to improve this: >* add a link to the python-dev archive directly from www.python.org >* summarize the development process somewhere on python.org and > add a link "development" to the page titles We do need a set of "Hacker's Guide to Python Development" Web pages to collect that sort of thing; I have some small pieces of such a thing, written long ago and never released, but they'd need to be updated and finished off. And while I'm at it, too, I'd like to suggest that, since python-dev seems to be getting out of touch with the larger Python community, after 2.1final, rather than immediately leaping back into language hacking, we should work on bringing the public face of the community up to date: * Pry python.org out of CNRI's cold dead hands, and begin maintaining it again. * Start moving on the Catalog-SIG again (yes, I know this is my task) * Work on the Batteries Included proposals & required infrastructure * Try doing some PR for 2.1. --amk From ping at lfw.org Mon Feb 5 20:15:18 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Mon, 5 Feb 2001 11:15:18 -0800 (PST) Subject: [Python-Dev] re: Sets BOF / for in dict In-Reply-To: <3A7E89B9.B90D36DF@lemburg.com> Message-ID: On Mon, 5 Feb 2001, M.-A. Lemburg wrote: > Two things: > > 1. the proposed syntax key:value does away with the > easy to parse Python block statement syntax Oh, come on. Slices and dictionary literals use colons too, and there's nothing wrong with that. Blocks are introduced by a colon at the *end* of a line. > 2. why can't we use the old 'for x,y,z in something:' syntax > and instead add iterators to the objects in question ? > > for key, value in object.iterator(): > ... Because there's no good answer for "what does iterator() return?" in this design. (Trust me; i did think this through carefully.) Try it. How would you implement the iterator() method? The PEP *is* suggesting that we add iterators to the objects -- just not that we explicitly call them. In the 'for' loop you've written, iterator() returns a sequence, not an iterator. -- ?!ng From gvwilson at ca.baltimore.com Mon Feb 5 20:22:50 2001 From: gvwilson at ca.baltimore.com (Greg Wilson) Date: Mon, 5 Feb 2001 14:22:50 -0500 Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: Message-ID: <002201c08fa9$079a1f80$770a0a0a@nevex.com> > > Question: would the current proposal allow NumPy arrays (just as an > > example) to support both: > > for index:value in numPyArray: > > where 'index' would get tuples like '(0, 3, 2)' for a 3D > > array, *and* > > > > for (i, j, k):value in numPyArray: > Ka-Ping Yee: > Naturally. Anything that could normally be bound on the left > side of an assignment (or current for loop) could go in the > spot on either side of the colon. OK, now here's the hard one. Clearly, (a) for i in someList: has to continue to mean "iterate over the values". We've agreed that: (b) for k:v in someDict: means "iterate through the items". (a) looks like a special case of (b). I therefore asked my colleagues to guess what: (c) for x in someDict: did. They all said, "Iterates through the _values_ in the dict", by analogy with (a). I then asked, "How do you iterate through the keys in a dict, or the indices in a list?" They guessed: (d) for x: in someContainer: (note the colon trailing the iterator variable name). I think that the combination of (a) and (b) implies (c), which leads in turn to (d). Is this what we want? 
I gotta say, when I start thinking about how many problems my students are going to bring me when accidentally adding or removing a colon in the middle of a 'for' statement changes the iteration space from keys to values, and I start feeling queasy... Thanks, Greg From ping at lfw.org Mon Feb 5 20:26:53 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Mon, 5 Feb 2001 11:26:53 -0800 (PST) Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: <002201c08fa9$079a1f80$770a0a0a@nevex.com> Message-ID: On Mon, 5 Feb 2001, Greg Wilson wrote: > OK, now here's the hard one. Clearly, > > (a) for i in someList: > > has to continue to mean "iterate over the values". We've agreed > that: > > (b) for k:v in someDict: > > means "iterate through the items". (a) looks like a special case > of (b). I therefore asked my colleagues to guess what: > > (c) for x in someDict: > > did. They all said, "Iterates through the _values_ in the dict", > by analogy with (a). > > I then asked, "How do you iterate through the keys in a dict, or > the indices in a list?" They guessed: > > (d) for x: in someContainer: > > (note the colon trailing the iterator variable name). I think that > the combination of (a) and (b) implies (c), which leads in turn to > (d). Is this what we want? I gotta say, when I start thinking about > how many problems my students are going to bring me when accidentally > adding or removing a colon in the middle of a 'for' statement changes > the iteration space from keys to values, and I start feeling queasy... The PEP explicitly proposes that (c) be an error, because i anticipated and specifically wanted to avoid this ambiguity. Have you had a good look at it? I think your survey shows that the PEP made the right choices. That is, it supports the position that if 'for key:value' is supported, then 'for key:' and 'for :value' should be supported, but 'for x in dict:' should not. It also shows that 'for index:' should be supported on sequences, which the PEP suggests. -- ?!ng From tim.one at home.com Mon Feb 5 20:37:43 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 5 Feb 2001 14:37:43 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: <3A7E8B37.E855DF81@lemburg.com> Message-ID: [M.-A. Lemburg] > Side note: the magic can also change due to command line options > being used, e.g. -U will bump the magic number by 1. Note that this (-U) is the only such case. Unless people are using private Python variants and adding their own cmdline switches that fiddle the magic number . From tim.one at home.com Mon Feb 5 20:37:46 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 5 Feb 2001 14:37:46 -0500 Subject: [Python-Dev] Identifying magic prefix on Python files? In-Reply-To: <200102051546.KAA32113@cj20424-a.reston1.va.home.com> Message-ID: > > byte 0: \217 will never change > > byte 1: 'P' will never change > > byte 2: high-order byte of version number > > byte 3: low-order byte of version number [Guido] > +1 from me. I'm +0 on adding more magic to the marshalled code. Note that the suggested scheme cannot tolerate -U magically adding 1 to the magic number, without getting strained ("umm, OK, we'll bump it by 2 when we do it by hand, and then -U gets all the odd numbers"; "umm, OK, we'll use 'P' for regular Python and 'U' for Unicode Python"; etc). So I say the marshalled code at least needs to grow a flag field to handle -U and any future extensions. 
The "extended header" in the marshalled blob should also begin with a 4-byte field giving the length of the extended header. plan-for-change-ly y'rs - tim From guido at digicool.com Mon Feb 5 20:37:28 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 05 Feb 2001 14:37:28 -0500 Subject: [Python-Dev] PEP announcements, and summaries In-Reply-To: Your message of "Mon, 05 Feb 2001 14:11:39 EST." <20010205141139.K733@thrak.cnri.reston.va.us> References: <3A7EF1A0.EDA4AD24@lemburg.com> <20010205141139.K733@thrak.cnri.reston.va.us> Message-ID: <200102051937.OAA01402@cj20424-a.reston1.va.home.com> > On Mon, Feb 05, 2001 at 07:32:00PM +0100, M.-A. Lemburg wrote: > >Some things which could be done to improve this: > >* add a link to the python-dev archive directly from www.python.org > >* summarize the development process somewhere on python.org and > > add a link "development" to the page titles Andrew: > We do need a set of "Hacker's Guide to Python Development" Web pages > to collect that sort of thing; I have some small pieces of such a > thing, written long ago and never released, but they'd need to be > updated and finished off. > > And while I'm at it, too, I'd like to suggest that, since python-dev > seems to be getting out of touch with the larger Python community, > after 2.1final, rather than immediately leaping back into language > hacking, we should work on bringing the public face of the community > up to date: > > * Pry python.org out of CNRI's cold dead hands, and begin maintaining > it again. Agreed. I am getting together with some folks at Digital Creations this week to get started with a Zope-based python.org website (to be run at new.python.org for now). This will be run somewhat like zope.org, i.e. members can post their own contents in their home directory, and after review such items can be linked directly from the home page, or something like that. The software to be used is DC's brand new Content Management Framework (announced in a press conference last Thursday; I can't find anything on the web yet). (Hmm, I wonder if we could run this on starship.python.net instead? That machine probably has more spare cycles.) > * Start moving on the Catalog-SIG again (yes, I know this is my task) > > * Work on the Batteries Included proposals & required infrastructure > > * Try doing some PR for 2.1. Joya Subudhi of Foretec has been doing a lot of Python PR work -- she arranged about a dozen press interviews for me last week at LinuxWorld Expo. She can undoubtedly do a good job of pushing the 2.1 announcement into the world, once we've released it. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Mon Feb 5 20:43:45 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 05 Feb 2001 14:43:45 -0500 Subject: [Python-Dev] import Tkinter fails In-Reply-To: Your message of "Mon, 05 Feb 2001 14:35:51 EST." <20010205143551.M733@thrak.cnri.reston.va.us> References: <200102050012.TAA27410@cj20424-a.reston1.va.home.com> <20010205143551.M733@thrak.cnri.reston.va.us> Message-ID: <200102051943.OAA04941@cj20424-a.reston1.va.home.com> > On Sun, Feb 04, 2001 at 07:12:44PM -0500, Guido van Rossum wrote: > >On Unix, either when running from the build directory, or when running > >the installed binary, "import Tkinter" fails. It seems that > >Lib/lib-tk is (once again) dropped from the default path. Andrew replied (in private mail): > Is this the case with the current CVS tree (as of Feb. 5)? 
I can't > reproduce the problem and don't see why this would happen. Oops... I got rid of my old Modules/Setup, and tried again -- then it worked. I should have heeded the warnings about Setup.dist being newer than Setup! Sorry for the false alarm! --Guido van Rossum (home page: http://www.python.org/~guido/) From mal at lemburg.com Mon Feb 5 20:45:51 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Mon, 05 Feb 2001 20:45:51 +0100 Subject: [Python-Dev] re: Sets BOF / for in dict References: Message-ID: <3A7F02EF.9119F46C@lemburg.com> Ka-Ping Yee wrote: > > On Mon, 5 Feb 2001, M.-A. Lemburg wrote: > > Two things: > > > > 1. the proposed syntax key:value does away with the > > easy to parse Python block statement syntax > > Oh, come on. Slices and dictionary literals use colons too, > and there's nothing wrong with that. Blocks are introduced > by a colon at the *end* of a line. Slices and dictionary enclose the two parts in brackets -- this places the colon into a visible context. for ... in ... : does not provide much of a context. > > 2. why can't we use the old 'for x,y,z in something:' syntax > > and instead add iterators to the objects in question ? > > > > for key, value in object.iterator(): > > ... > > Because there's no good answer for "what does iterator() return?" > in this design. (Trust me; i did think this through carefully.) > Try it. How would you implement the iterator() method? The .iterator() method would have to return an object which provides an iterator API (at C level to get the best performance). For dictionaries, this object could carry the needed state (current position in the dictionary table) and use the PyDict_Next() for the internals. Matrices would have to carry along more state (one integer per dimension) and could access the internal matrix representation directly using C functions. This would give us: speed, flexibility and extensibility which the syntax hacks cannot provide; e.g. how would you specify to iterate backwards over a sequence using that notation or diagonal for a matrix ? > The PEP *is* suggesting that we add iterators to the objects -- > just not that we explicitly call them. In the 'for' loop you've > written, iterator() returns a sequence, not an iterator. No, it should return a forward iterator. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From tim.one at home.com Mon Feb 5 20:49:39 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 5 Feb 2001 14:49:39 -0500 Subject: [Python-Dev] Adding opt-in pymalloc + alpha3 In-Reply-To: <3A7E881E.64D64F08@lemburg.com> Message-ID: [MAL] > ... > Even though I don't think that adding opt-in code matters > much w/r to stability of the rest of the code, I still think > that we ought to insert a third alpha release to hammer a bit > more on weak refs and nested scopes. > > These two additions are major new features in Python 2.1 which > were added very late in the release cycle and haven't had much > testing in the field. > > Thoughts ? IMO, everyone who is *likely* to pick up an alpha release has already done so. It won't get significantly broader or deeper hammering until there's a beta. So I'm opposed to a third alpha unless a significant number of bugs are unearthed by the current alpha (which still has a couple weeks to go before the scheduled beta). 
if-you-won't-eat-two-hot-dogs-it-won't-help-if-i-offer-you- three -ly y'rs - tim From mal at lemburg.com Mon Feb 5 20:50:26 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Mon, 05 Feb 2001 20:50:26 +0100 Subject: [Python-Dev] Identifying magic prefix on Python files? References: Message-ID: <3A7F0402.7134C6DF@lemburg.com> Tim Peters wrote: > > [M.-A. Lemburg] > > Side note: the magic can also change due to command line options > > being used, e.g. -U will bump the magic number by 1. > > Note that this (-U) is the only such case. Unless people are using private > Python variants and adding their own cmdline switches that fiddle the magic > number . I think that future optimizers or special combinations of the yet-to-be-designed Python compiler/VM toolkit will make some use of this feature too. It is currently the only way to prevent the interpreter from loading code which it potentially cannot execute. When redesigning the import magic, we should be careful to allow future combinations of compiler/VM to introduce new opcodes etc. so there will have to be some field for them to use too. The -U trick is really only a hack in that direction (since it modifies the compiler and thus the generated byte code). -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From ping at lfw.org Mon Feb 5 20:52:50 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Mon, 5 Feb 2001 11:52:50 -0800 (PST) Subject: [Python-Dev] re: Sets BOF / for in dict In-Reply-To: <3A7F02EF.9119F46C@lemburg.com> Message-ID: On Mon, 5 Feb 2001, M.-A. Lemburg wrote: > Slices and dictionary enclose the two parts in brackets -- this > places the colon into a visible context. for ... in ... : does > not provide much of a context. For crying out loud! '\':' requires that you tokenize the string before you know that the colon is part of the string. Triple-quotes force you to tokenize carefully too. There is *nothing* that this stay-away-from-colons argument buys you. For a human skimming over source code -- i repeat, the visual hint is "colon at the END of a line". > > Because there's no good answer for "what does iterator() return?" > > in this design. (Trust me; i did think this through carefully.) > > Try it. How would you implement the iterator() method? > > The .iterator() method would have to return an object which > provides an iterator API (at C level to get the best performance). Okay, provide an example. Write this iterator() method in Python. Now answer: how does 'for' know whether the thing to the right of 'in' is an iterator or a sequence? > For dictionaries, this object could carry the needed state > (current position in the dictionary table) and use the PyDict_Next() > for the internals. Matrices would have to carry along more state > (one integer per dimension) and could access the internal > matrix representation directly using C functions. This is already exactly what the PEP proposes for the implementation of sq_iter. > This would give us: speed, flexibility and extensibility > which the syntax hacks cannot provide; The PEP is not just about syntax hacks. It's an iterator protocol. It's clear that you haven't read it. *PLEASE* read the PEP before continuing to discuss it. I quote: | Rationale | | If all the parts of the proposal are included, this addresses many | concerns in a consistent and flexible fashion. 
Among its chief | virtues are the following three -- no, four -- no, five -- points: | | 1. It provides an extensible iterator interface. | | 2. It resolves the endless "i indexing sequence" debate. | | 3. It allows performance enhancements to dictionary iteration. | | 4. It allows one to provide an interface for just iteration | without pretending to provide random access to elements. | | 5. It is backward-compatible with all existing user-defined | classes and extension objects that emulate sequences and | mappings, even mappings that only implement a subset of | {__getitem__, keys, values, items}. I can take out the Monty Python jokes if you want. I can add more jokes if that will make you read it. Just read it, i beg you. > e.g. how would you > specify to iterate backwards over a sequence using that notation > or diagonal for a matrix ? No differently from what you are suggesting, at the surface: for item in sequence.backwards(): for item in matrix.diagonal(): The difference is that the thing on the right of 'in' is always considered a sequence-like object. There is no ambiguity and no magic rule for deciding when it's a sequence and when it's an iterator. -- ?!ng "There's no point in being grown up if you can't be childish sometimes." -- Dr. Who From barry at digicool.com Mon Feb 5 21:07:12 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Mon, 5 Feb 2001 15:07:12 -0500 Subject: [Python-Dev] Heads up on library reorganization References: <20010205134230.A25426@thyrsus.com> Message-ID: <14975.2032.104397.905163@anthem.wooz.org> >>>>> "ESR" == Eric S Raymond writes: ESR> Barry, this is why I have not submitted the ternary-select ESR> PEP yet. The library reorg is more important and should get ESR> done first. No problem, and agreed. Whenever you're ready with a PEP, just send me a draft and I'll give you a number. -Barry From guido at digicool.com Mon Feb 5 21:22:27 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 05 Feb 2001 15:22:27 -0500 Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: Your message of "Mon, 05 Feb 2001 11:26:53 PST." References: Message-ID: <200102052022.PAA05449@cj20424-a.reston1.va.home.com> [GVW] > > (c) for x in someDict: > > > > did. They all said, "Iterates through the _values_ in the dict", > > by analogy with (a). [Ping] > The PEP explicitly proposes that (c) be an error, because i > anticipated and specifically wanted to avoid this ambiguity. > Have you had a good look at it? > > I think your survey shows that the PEP made the right choices. > That is, it supports the position that if 'for key:value' is > supported, then 'for key:' and 'for :value' should be supported, > but 'for x in dict:' should not. It also shows that 'for index:' > should be supported on sequences, which the PEP suggests. But then we should review the wisdom of using "if x in dict" as a shortcut for "if dict.has_key(x)" again. Everything is tied together! --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Mon Feb 5 21:24:19 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 05 Feb 2001 15:24:19 -0500 Subject: [Python-Dev] Type/class differences (Re: Sets: elt in dict, lst.include) In-Reply-To: Your message of "Mon, 05 Feb 2001 11:04:22 PST." 
<20010205110422.A5893@glacier.fnational.com> References: <200102012245.LAA03402@s454.cosc.canterbury.ac.nz> <200102050447.XAA28915@cj20424-a.reston1.va.home.com> <20010205070222.A5287@glacier.fnational.com> <200102051837.NAA00833@cj20424-a.reston1.va.home.com> <20010205110422.A5893@glacier.fnational.com> Message-ID: <200102052024.PAA05474@cj20424-a.reston1.va.home.com> > On Mon, Feb 05, 2001 at 01:37:39PM -0500, Guido van Rossum wrote: > > Now, can you do things like this: > [example cut] > > No, it would have to be written like this: > > >>> from types import * > >>> class MyInt(IntType): # add a method > def add1(self): return self.value+1 > > >>> i = MyInt(10) > >>> i.add1() > 11 > >>> > > Note the value attribute. The IntType.__init__ method is > basicly: > > def __init__(self, value): > self.value = value So, "class MyInt(IntType)" acts as a sort-of automagical "UserInt" class creation? (Analogous to UserList etc.) I'm not sure I like that. Why do we have to have this? --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Mon Feb 5 21:29:43 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 5 Feb 2001 15:29:43 -0500 Subject: [Python-Dev] Heads up on library reorganization In-Reply-To: <20010205134230.A25426@thyrsus.com> Message-ID: [Eric S. Raymond] > ... > Guido, of course, is still up to his ears in post-LWE mail > and work cleanup. Bad news, but temporary news: The PythonLabs group (incl. Guido) is going to be severely out of touch for the rest of this week, starting at varying times today. So we'll have another giant pile of email to deal with over the weekend, on top of the giant pile left unanswered during the release crunch. (Ping, I'm not ignoring your PEP, I simply haven't gotten to it yet! looks like I won't this week either) So if anyone has been waiting for a chance to pull off a hostile takeover of Python, this is the week! carpe-diem-ly y'rs - tim From nas at arctrix.com Mon Feb 5 21:48:10 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Mon, 5 Feb 2001 12:48:10 -0800 Subject: [Python-Dev] Type/class differences (Re: Sets: elt in dict, lst.include) In-Reply-To: <200102052024.PAA05474@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Feb 05, 2001 at 03:24:19PM -0500 References: <200102012245.LAA03402@s454.cosc.canterbury.ac.nz> <200102050447.XAA28915@cj20424-a.reston1.va.home.com> <20010205070222.A5287@glacier.fnational.com> <200102051837.NAA00833@cj20424-a.reston1.va.home.com> <20010205110422.A5893@glacier.fnational.com> <200102052024.PAA05474@cj20424-a.reston1.va.home.com> Message-ID: <20010205124810.A6285@glacier.fnational.com> On Mon, Feb 05, 2001 at 03:24:19PM -0500, Guido van Rossum wrote: > So, "class MyInt(IntType)" acts as a sort-of automagical "UserInt" > class creation? (Analogous to UserList etc.) I'm not sure I like > that. Why do we have to have this? The problem is where to store the information in the PyIntObject structure. I don't think my solution is great either. 
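For comparison, the same surface behaviour is available today with a UserList-style wrapper; a sketch, assuming the self.value convention above (the name UserInt is hypothetical, analogous to UserList/UserString):

    class UserInt:
        # plain wrapper class, not real IntType subclassing
        def __init__(self, value):
            self.value = value
        def add1(self):
            return self.value + 1

    i = UserInt(10)
    assert i.add1() == 11
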
Neil From skip at mojam.com Mon Feb 5 21:51:48 2001 From: skip at mojam.com (Skip Montanaro) Date: Mon, 5 Feb 2001 14:51:48 -0600 (CST) Subject: [Python-Dev] creating __all__ in extension modules In-Reply-To: <14973.33483.956785.985303@cj42289-a.reston1.va.home.com> References: <14970.60750.570192.452062@beluga.mojam.com> <14972.33928.540016.339352@cj42289-a.reston1.va.home.com> <14972.36408.800070.656541@beluga.mojam.com> <14973.33483.956785.985303@cj42289-a.reston1.va.home.com> Message-ID: <14975.4708.165467.565852@beluga.mojam.com> I retract my suggested C code for building __all__ lists. I'm using Fred's code instead. Skip From skip at mojam.com Mon Feb 5 21:55:41 2001 From: skip at mojam.com (Skip Montanaro) Date: Mon, 5 Feb 2001 14:55:41 -0600 (CST) Subject: [Python-Dev] PEP announcements, and summaries In-Reply-To: References: Message-ID: <14975.4941.974720.155034@beluga.mojam.com> Andrew> Is anything much lost if the summaries cease? Like Eric said, probably not. Still, before tossing them you might post a note to c.l.py.a that is essentially what you wrote and warn that if people don't chime in with some valid feedback, they will stop. Skip From gvwilson at ca.baltimore.com Mon Feb 5 21:57:05 2001 From: gvwilson at ca.baltimore.com (Greg Wilson) Date: Mon, 5 Feb 2001 15:57:05 -0500 Subject: [Python-Dev] re: for/iter poll In-Reply-To: <20010205192428.5872BE75D@mail.python.org> Message-ID: <002801c08fb6$321d3a50$770a0a0a@nevex.com> I am teaching Python at the Space Telescope Science Institute on Thurs/Fri this week (Feb 8-9). There will be 20+ students in attendance, most of whom will never have seen Python before (although all have previous programming experience). This is a great opportunity to field-test new syntax for iteration, membership tests, and the like, if interested parties can help me put together questions. I have set up a mailing list at: http://groups.yahoo.com/group/python-iter to handle this discussion (since putting together a questionnaire doesn't belong on python-dev). Please join up and send suggestions; we've got the rest of today, Tuesday, and Wednesday morning... Thanks, Greg From fredrik at pythonware.com Mon Feb 5 22:02:42 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Mon, 5 Feb 2001 22:02:42 +0100 Subject: [Python-Dev] re: for in dict (user expectation poll) References: <200102052022.PAA05449@cj20424-a.reston1.va.home.com> Message-ID: <042701c08fb6$fd382970$e46940d5@hagrid> > But then we should review the wisdom of using "if x in dict" as a > shortcut for "if dict.has_key(x)" again. Everything is tied together! yeah, don't forget unpacking assignments: assert len(dict) == 3 { k1:v1, k2:v2, k3:v3 } = dict Cheers /F From tim.one at home.com Mon Feb 5 22:01:49 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 5 Feb 2001 16:01:49 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: <20010205122721.J812@dothill.com> Message-ID: [Jason Tishler] > Sorry, but I don't agree with your assessment that Cygwin's treatment > of case is "seemingly random." IMO, Cygwin behaves appropriately > regarding case for a case-insensitive, but case-preserving file system. Sorry, you can't disagree with that : i.e., you can disagree that Cygwin *is* inconsistent, but you can't tell me it didn't *appear* inconsistent to me the first time I played with it. The latter is just a fact. It doesn't mean it *is* inconsistent. First impressions are what they are. The heart of the question for Python is related, though: you say Cygwin behaves appropriately. 
Fine. If I "cat FiLe", it will cat a file named "file" or "FILE" or "filE" etc. But at the same time, you want Python to *ignore* "filE.py" when Python does "import FiLe". The behavior you want from Python is then inconsistent with what Cygwin does elsewhere. So if Cygwin's behavior is "appropriate" for the filesystem, then what you want Python to do must be "inappropriate" for the filesystem. That's what I want too, but it *is* inappropriate for the filesystem, and I want to be clear about that. Basic sanity requires that Python do the same thing on *all* case-insensitive case-preserving filesystems, to the fullest extent possible. Python's DOS/Windows behavior has priority by a decade. I'm deadly opposed to making a special wart for Cygwin (or the Mac), but am in favor of changing it on Windows too. >> We can't do the reverse. That would lead to explainable rules >> and maximal portability. > Sorry but I don't grok the above. Tim, can you try again? "That" referred to the sentence before the first one you quoted, although it takes psychic powers to intuit that. That is, in the interest of maximal portability, explainability and predictability, import can make case-insensitive filesystems act as if they were case-sensitive, but it's much harder ("we can't") to make C-S systems act C-I. From tim.one at home.com Mon Feb 5 22:07:15 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 5 Feb 2001 16:07:15 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: <028701c08f9e$e65886e0$e46940d5@hagrid> Message-ID: [Fredrik Lundh] > umm. what kind of imports are not case-sensitive today? fredrik.py and Fredrik.py, both on the path. On Windows it does or doesn't blow up, depending on which one you import and which one is found first on the path. On Unix it always works. Imports on Windows aren't so much case-sensitive as casenannying . From tim.one at home.com Mon Feb 5 22:11:32 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 5 Feb 2001 16:11:32 -0500 Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: <042701c08fb6$fd382970$e46940d5@hagrid> Message-ID: [/F] > yeah, don't forget unpacking assignments: > > assert len(dict) == 3 > { k1:v1, k2:v2, k3:v3 } = dict Yuck. I'm going to suppress that. but-thanks-for-pointing-it-out-ly y'rs - tim From skip at mojam.com Mon Feb 5 22:22:21 2001 From: skip at mojam.com (Skip Montanaro) Date: Mon, 5 Feb 2001 15:22:21 -0600 (CST) Subject: [Python-Dev] PEPS, version control, release intervals Message-ID: <14975.6541.43230.433954@beluga.mojam.com> One thing that I think probably perturbs people is that there is no dot release of Python that is explicitly just a bug fix release. I rather like the odd-even versioning that the Linux kernel community uses where odd minor version numbers are development versions and even minor versions are stable versions. That way, if you're using the 2.2.15 kernel and 2.2.16 comes out you know it only contains bug fixes. On the other hand, when 2.3.1 is released, you know it's a development release. I'm not up on Linux kernel release timeframes, but the development kernels are publically available for quite awhile and receive a good deal of knocking around before being "pronounced" by the Linux BDFL and turned into a stable release. I don't see that currently happening in the Python community. I realize this would complicate maintenance of the Python CVS tree, but I think it may be necessary to give people a longer term sense of stability. 
Python 1.5.2 was released 4/13/99 and Python 2.0 on 10/16/00 (about 18 months between releases?). 2.1a1 came out 1/18/01 followed by 2.1a2 on 2/1/01 (all dates are from a cvs log of the toplevel README file). The 2.0 release did make some significant changes which have caused people some heartburn. To release 2.1 on 4/1/01 as PEP 226 suggests it will be with more language changes that could cause problems for existing code (weak refs and nested scopes get mentioned all the time) seems a bit fast, especially since the status of two relevant PEPs are "incomplete" and "draft", respectively. The relatively fast cycle time between creation of a PEP and incorporation of the feature into the language, plus the fact that the PEP concept is still relatively new to the Python community (are significant PEP changes announced to the newsgroups?), may be a strong contributing factor to the relatively small amount of feedback they receive and the relatively vocal response the corresponding language changes receive. Skip From sdm7g at virginia.edu Mon Feb 5 22:29:58 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Mon, 5 Feb 2001 16:29:58 -0500 (EST) Subject: [Python-Dev] Case sensitive import. In-Reply-To: Message-ID: On Mon, 5 Feb 2001, Tim Peters wrote: > [Fredrik Lundh] > > umm. what kind of imports are not case-sensitive today? > > fredrik.py and Fredrik.py, both on the path. On Windows it does or doesn't > blow up, depending on which one you import and which one is found first on > the path. On Unix it always works. On Unix it always works ... depending on the filesystem. ;-) > Imports on Windows aren't so much > case-sensitive as casenannying . > From akuchlin at cnri.reston.va.us Mon Feb 5 22:45:57 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Mon, 5 Feb 2001 16:45:57 -0500 Subject: [Python-Dev] PEPS, version control, release intervals In-Reply-To: <14975.6541.43230.433954@beluga.mojam.com>; from skip@mojam.com on Mon, Feb 05, 2001 at 03:22:21PM -0600 References: <14975.6541.43230.433954@beluga.mojam.com> Message-ID: <20010205164557.B990@thrak.cnri.reston.va.us> On Mon, Feb 05, 2001 at 03:22:21PM -0600, Skip Montanaro wrote: >heartburn. To release 2.1 on 4/1/01 as PEP 226 suggests it will be with >more language changes that could cause problems for existing code (weak refs >and nested scopes get mentioned all the time) seems a bit fast, especially >since the status of two relevant PEPs are "incomplete" and "draft", >respectively. Note that making new releases come out more quickly was one of GvR's goals. With frequent releases, much of the motivation for a Linux-style development/production split goes away; new Linux kernels take about 2 years to appear, and in that time people still need to get driver fixes, security updates, and so forth. There seem far fewer things worth fixing in a Python 2.0.1; the wiki contains one critical patch and 5 misc. ones. A more critical issue might be why people haven't adopted 2.0 yet; there seems little reason is there to continue using 1.5.2, yet I still see questions on the XML-SIG, for example, from people who haven't upgraded. Is it that Zope doesn't support it? Or that Red Hat and Debian don't include it? This needs fixing, or else we'll wind up with a community scattered among lots of different versions. (I hope someone is going to include all these issues in the agenda for "Collaborative Devel. Issues" on Developers' Day! They're certainly worth discussing...) 
--amk From jeremy at alum.mit.edu Mon Feb 5 22:53:00 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Mon, 5 Feb 2001 16:53:00 -0500 (EST) Subject: [Python-Dev] PEPS, version control, release intervals In-Reply-To: <20010205164557.B990@thrak.cnri.reston.va.us> References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> Message-ID: <14975.8380.909630.483471@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "AMK" == Andrew Kuchling writes: AMK> On Mon, Feb 05, 2001 at 03:22:21PM -0600, Skip Montanaro wrote: >> heartburn. To release 2.1 on 4/1/01 as PEP 226 suggests it will >> be with more language changes that could cause problems for >> existing code (weak refs and nested scopes get mentioned all the >> time) seems a bit fast, especially since the status of two >> relevant PEPs are "incomplete" and "draft", respectively. AMK> Note that making new releases come out more quickly was one of AMK> GvR's goals. With frequent releases, much of the motivation AMK> for a Linux-style development/production split goes away; new AMK> Linux kernels take about 2 years to appear, and in that time AMK> people still need to get driver fixes, security updates, and so AMK> forth. There seem far fewer things worth fixing in a Python AMK> 2.0.1; the wiki contains one critical patch and 5 misc. ones. AMK> A more critical issue might be why people haven't adopted 2.0 AMK> yet; there seems little reason is there to continue using AMK> 1.5.2, yet I still see questions on the XML-SIG, for example, AMK> from people who haven't upgraded. Is it that Zope doesn't AMK> support it? Or that Red Hat and Debian don't include it? This AMK> needs fixing, or else we'll wind up with a community scattered AMK> among lots of different versions. AMK> (I hope someone is going to include all these issues in the AMK> agenda for "Collaborative Devel. Issues" on Developers' Day! AMK> They're certainly worth discussing...) What is the agenda for this session on Developers' Day? Since we're the developers, it would be cool to know in advance. Same question for the Py3K session. It seems to be the right time for figuring out what we need to discuss at DD. Jeremy From akuchlin at cnri.reston.va.us Mon Feb 5 23:01:06 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Mon, 5 Feb 2001 17:01:06 -0500 Subject: [Python-Dev] PEPS, version control, release intervals In-Reply-To: <14975.8380.909630.483471@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Mon, Feb 05, 2001 at 04:53:00PM -0500 References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <14975.8380.909630.483471@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010205170106.D990@thrak.cnri.reston.va.us> On Mon, Feb 05, 2001 at 04:53:00PM -0500, Jeremy Hylton wrote: >What is the agenda for this session on Developers' Day? Since we're >the developers, it would be cool to know in advance. Does the session still exist? The brochure lists it as session D2-1, but that's now listed as "Reworking Python's Numeric Model". (I think the Catalog session is pretty useless, would happily change it to be the collab. devel. one, and would be willing to run the new session.) >Same question >for the Py3K session. It seems to be the right time for figuring out >what we need to discuss at DD. And I'm also thinking of putting together a "Python 3000 Considered Harmful" anti-presentation for the Py3K session... which is at the same time as the session I'm responsible for. 
--amk From esr at thyrsus.com Mon Feb 5 23:03:40 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Mon, 5 Feb 2001 17:03:40 -0500 Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta In-Reply-To: <20010205164557.B990@thrak.cnri.reston.va.us>; from akuchlin@cnri.reston.va.us on Mon, Feb 05, 2001 at 04:45:57PM -0500 References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> Message-ID: <20010205170340.A3101@thyrsus.com> (Copying Michael Tiemann on this, as he can actually get Red Hat to move...) Andrew Kuchling : > A more critical issue might be why people haven't adopted 2.0 yet; > there seems little reason is there to continue using 1.5.2, yet I > still see questions on the XML-SIG, for example, from people who > haven't upgraded. Is it that Zope doesn't support it? Or that Red > Hat and Debian don't include it? This needs fixing, or else we'll > wind up with a community scattered among lots of different versions. I've investigated this. The state of the Red Hat 7.1 beta seem to be that it will include both 2.0 and 1.5.2; there are separate python and python2 RPMs. This would be OK, but I don't know which version will be called by "/usr/bin/env python". Now hear this, Michael: *it should be 2.0*. It's OK to have 1.5.2 available as a compatibility hedge, but I haven't actually heard of any compatibility problems in code not specifically designed to probe for them. And there are several excellent reasons to push forward, beginning with garbage collection. Please make sure the default Python in 7.1 is Python 2. Among other things, this will significantly help CML2 adoption, which I know you are interested in. -- Eric S. Raymond No kingdom can be secured otherwise than by arming the people. The possession of arms is the distinction between a freeman and a slave. -- "Political Disquisitions", a British republican tract of 1774-1775 From mal at lemburg.com Mon Feb 5 23:07:44 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Mon, 05 Feb 2001 23:07:44 +0100 Subject: [Python-Dev] PEPS, version control, release intervals References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> Message-ID: <3A7F2430.302FF430@lemburg.com> Andrew Kuchling wrote: > > A more critical issue might be why people haven't adopted 2.0 yet; > there seems little reason is there to continue using 1.5.2, yet I > still see questions on the XML-SIG, for example, from people who > haven't upgraded. Is it that Zope doesn't support it? Or that Red > Hat and Debian don't include it? This needs fixing, or else we'll > wind up with a community scattered among lots of different versions. From sdm7g at virginia.edu Mon Feb 5 23:19:02 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Mon, 5 Feb 2001 17:19:02 -0500 (EST) Subject: [Python-Dev] Case sensitive import. In-Reply-To: Message-ID: On Sun, 4 Feb 2001, Tim Peters wrote: > Well, MacOSX-on-non-HFS+ *is* Unix, right? So that should take care of > itself (ya, right). 
I don't understand what Cygwin does; here from a Cygwin > bash shell session: > > tim at fluffy ~ > $ touch abc > > tim at fluffy ~ > $ touch ABC > > tim at fluffy ~ > $ ls > abc > > tim at fluffy ~ > $ wc AbC > 0 0 0 AbC > > tim at fluffy ~ > $ ls A* > ls: A*: No such file or directory > > tim at fluffy ~ > > So best I can tell, they're like Steven: working with a case-insensitive > filesystem but trying to make Python insist that it's not, and what basic > tools there do about case is seemingly random (wc doesn't care, shell > expansion does, touch doesn't, rm doesn't (not shown) -- maybe it's just > shell expansion that's trying to pretend this is Unix? oh ya, shell > expansion and Python import -- *that's* a natural pair ). > Just for the record, I get exactly the same results on macosx as you did on Cygwin. The logic behind the seemingly random results is, I'm sure, the same logic behind my patch: accessing the file itself is case insensitive; but the directory entry (accessed by shell globbing) is case preserving. -- Steve Majewski From mal at lemburg.com Mon Feb 5 23:36:55 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Mon, 05 Feb 2001 23:36:55 +0100 Subject: [Python-Dev] Iterators (PEP 234) (re: Sets BOF / for in dict) References: Message-ID: <3A7F2B07.2D0D1460@lemburg.com> Ka-Ping Yee wrote: > > On Mon, 5 Feb 2001, M.-A. Lemburg wrote: > > Slices and dictionary enclose the two parts in brackets -- this > > places the colon into a visible context. for ... in ... : does > > not provide much of a context. > > For crying out loud! '\':' requires that you tokenize the string > before you know that the colon is part of the string. Triple-quotes > force you to tokenize carefully too. There is *nothing* that this > stay-away-from-colons argument buys you. > > For a human skimming over source code -- i repeat, the visual hint > is "colon at the END of a line". Oh well, perhaps you are right and we should call things like key:value association and be done with it. > > > Because there's no good answer for "what does iterator() return?" > > > in this design. (Trust me; i did think this through carefully.) > > > Try it. How would you implement the iterator() method? > > > > The .iterator() method would have to return an object which > > provides an iterator API (at C level to get the best performance). > > Okay, provide an example. Write this iterator() method in Python. > Now answer: how does 'for' know whether the thing to the right of > 'in' is an iterator or a sequence? Simple: have the for-loop test for a type slot and have it fallback to __getitem__ in case it doesn't find the slot API. > > For dictionaries, this object could carry the needed state > > (current position in the dictionary table) and use the PyDict_Next() > > for the internals. Matrices would have to carry along more state > > (one integer per dimension) and could access the internal > > matrix representation directly using C functions. > > This is already exactly what the PEP proposes for the implementation > of sq_iter. Sorry, Ping, I didn't know you have a PEP for iterators already. ...reading it... > > This would give us: speed, flexibility and extensibility > > which the syntax hacks cannot provide; > > The PEP is not just about syntax hacks. It's an iterator protocol. > It's clear that you haven't read it. > > *PLEASE* read the PEP before continuing to discuss it. I quote: > > | Rationale > | > | If all the parts of the proposal are included, this addresses many > | concerns in a consistent and flexible fashion. 
Among its chief > | virtues are the following three -- no, four -- no, five -- points: > | > | 1. It provides an extensible iterator interface. > | > | 2. It resolves the endless "i indexing sequence" debate. > | > | 3. It allows performance enhancements to dictionary iteration. > | > | 4. It allows one to provide an interface for just iteration > | without pretending to provide random access to elements. > | > | 5. It is backward-compatible with all existing user-defined > | classes and extension objects that emulate sequences and > | mappings, even mappings that only implement a subset of > | {__getitem__, keys, values, items}. > > I can take out the Monty Python jokes if you want. I can add more > jokes if that will make you read it. Just read it, i beg you. Done. Didn't know it exists, though (why isn't the PEP# in the subject line ?). Even after reading it, I still don't get the idea behind adding "Mapping Iterators" and "Sequence Iterators" when both of these are only special implementations of the single "Iterator" interface. Since the object can have multiple methods to construct iterators, all you need is *one* iterator API. You don't need a slot which returns an iterator object -- leave that decision to the programmer, e.g. you can have: for key in dict.xkeys(): for value in dict.xvalues(): for items in dict.xitems(): for entry in matrix.xrow(1): for entry in matrix.xcolumn(2): for entry in matrix.xdiag(): for i,element in sequence.xrange(): All of these method calls return special iterators for one specific task and all of them provide a slot which is callable without argument and yields the next element of the iteration. Iteration is terminated by raising an IndexError just like with __getitem__. Since for-loops can check for the type slot, they can use an optimized implementation which avoids the creation of temporary integer objects and leave the state-keeping to the iterator which can usually provide a C based storage for it with much better performance. Note that with this kind of interface, there is no need to add "Mapping Iterators" or "Sequence Iterators" as special cases, since these are easily implemented using the above iterators. > > e.g. how would you > > specify to iterate backwards over a sequence using that notation > > or diagonal for a matrix ? > > No differently from what you are suggesting, at the surface: > > for item in sequence.backwards(): > for item in matrix.diagonal(): > > The difference is that the thing on the right of 'in' is always > considered a sequence-like object. There is no ambiguity and > no magic rule for deciding when it's a sequence and when it's > an iterator. 
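Concretely, a minimal pure-Python sketch of the xitems()-style scheme sketched above: the object it returns is simply called to fetch the next element and raises IndexError when exhausted. The names are illustrative only, and a real version would be a C type walking the dictionary table directly:

    class _DictItemIterator:
        def __init__(self, d):
            self.items = list(d.items())   # a C version would use PyDict_Next
            self.pos = 0
        def __call__(self):
            if self.pos >= len(self.items):
                raise IndexError
            item = self.items[self.pos]
            self.pos = self.pos + 1
            return item

    def xitems(d):
        return _DictItemIterator(d)

    # roughly what a slot-aware for-loop would do under the hood:
    it = xitems({"a": 1, "b": 2})
    while 1:
        try:
            key, value = it()
        except IndexError:
            break
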
-- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From skip at mojam.com Mon Feb 5 23:42:04 2001 From: skip at mojam.com (Skip Montanaro) Date: Mon, 5 Feb 2001 16:42:04 -0600 (CST) Subject: [Python-Dev] PEPS, version control, release intervals In-Reply-To: <20010205164557.B990@thrak.cnri.reston.va.us> References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> Message-ID: <14975.11324.787920.766932@beluga.mojam.com> amk> A more critical issue might be why people haven't adopted 2.0 yet; amk> there seems little reason is there to continue using 1.5.2/// For all the messing around I do on the CVS version, I still use 1.5.2 on my web servers precisely because I don't have the time or gumption to "fix" the code that needs to run. That's not just my code, but also the ZServer and DocumentTemplate code from Zope. Skip From skip at mojam.com Mon Feb 5 23:44:19 2001 From: skip at mojam.com (Skip Montanaro) Date: Mon, 5 Feb 2001 16:44:19 -0600 (CST) Subject: [Python-Dev] PEPS, version control, release intervals In-Reply-To: <20010205164557.B990@thrak.cnri.reston.va.us> References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> Message-ID: <14975.11459.976381.345964@beluga.mojam.com> amk> Note that making new releases come out more quickly was one of amk> GvR's goals. With frequent releases, much of the motivation for a amk> Linux-style development/production split goes away; I don't think that's necessarily true. If a new release comes out every six months and always requires you to check for breakage of previously working code, what's the chance you're going to be anxious to upgrade? Pretty low I would think. Skip From tim.one at home.com Tue Feb 6 01:22:20 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 5 Feb 2001 19:22:20 -0500 Subject: [Python-Dev] Funny! Message-ID: Go to http://www.askjesus.org/ and enter www.python.org in the box. Grail is -- listen to Jesus when he's talking to you -- an extensible Tower of Babel browser writteneth entirely in the interpreted object-oriented programming babel Python. It runs upon Unix, and, to some extent, upon Windows and Macintosh. Grail is with GOD's help extended to support immaculately conceived protocols or file formats. oddly-enough-the-tabnanny-docs-weren't-altered-at-all-ly y'rs - tim From skip at mojam.com Tue Feb 6 01:57:27 2001 From: skip at mojam.com (Skip Montanaro) Date: Mon, 5 Feb 2001 18:57:27 -0600 (CST) Subject: [Python-Dev] test_minidom failing on linux Message-ID: <14975.19447.698806.586210@beluga.mojam.com> test_minidom failed on my linux system just now. I tried another cvs update but no files were updated. Did someone forget to check in a new expected output file? Skip From moshez at zadka.site.co.il Tue Feb 6 02:53:26 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Tue, 6 Feb 2001 03:53:26 +0200 (IST) Subject: [Python-Dev] Alternative to os.system that takes a list of strings? In-Reply-To: <01d001c08f81$ec4d83b0$0900a8c0@SPIFF> References: <01d001c08f81$ec4d83b0$0900a8c0@SPIFF>, <200102051430.JAA17890@w20-575-36.mit.edu> <200102051434.JAA31491@cj20424-a.reston1.va.home.com> Message-ID: <20010206015326.46228A841@darjeeling.zadka.site.co.il> On Mon, 5 Feb 2001, "Fredrik Lundh" wrote: > > BTW, what do you mean by "upstream"? 
> > looks like freebsd lingo: the original maintainer of a > piece of software (outside the bsd universe). Also Debian lingo for same. -- Moshe Zadka This is a signature anti-virus. Please stop the spread of signature viruses! Fingerprint: 4BD1 7705 EEC0 260A 7F21 4817 C7FC A636 46D0 1BD6 From moshez at zadka.site.co.il Tue Feb 6 03:04:05 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Tue, 6 Feb 2001 04:04:05 +0200 (IST) Subject: [Python-Dev] PEP announcements, and summaries In-Reply-To: References: Message-ID: <20010206020405.58D03A840@darjeeling.zadka.site.co.il> On Mon, 05 Feb 2001, Andrew Kuchling wrote: > One thing about the reaction to the 2.1 alphas is that many people > seem *surprised* by some of the changes, even though PEPs have been > written, discussed, and mentioned in python-dev summaries. Maybe the > PEPs and their status need to be given higher visibility; I'd suggest > sending a brief note of status changes (new draft PEPs, acceptance, > rejection) to comp.lang.python.announce. I'm +1 on that. c.l.p.a isn't really a high-traffic group, and this would add negligible traffic in any case. Probably more important then stuff I approve daily. > Also, I'm wondering if it's worth continuing the python-dev summaries, > because, while they get a bunch of hits on news sites such as Linux > Today and may be good PR, I'm not sure that they actually help Python > development. They're supposed to let people offer timely comments on > python-dev discussions while it's still early enough to do some good, > but that doesn't seem to happen; I don't see python-dev postings that > began with something like "The last summary mentioned you were talking > about X. I use X a lot, and here's what I think: ...". Is anything > much lost if the summaries cease? One note: if you're asking for lack of time, I can help: I'm doing the Python-URL! summaries for a few weeks now, and I've gotten some practice. FWIW, I think they are excellent. Maybe crosspost to c.l.py too, so it can get discussed on the group more easily? -- Moshe Zadka This is a signature anti-virus. Please stop the spread of signature viruses! Fingerprint: 4BD1 7705 EEC0 260A 7F21 4817 C7FC A636 46D0 1BD6 From moshez at zadka.site.co.il Tue Feb 6 03:11:20 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Tue, 6 Feb 2001 04:11:20 +0200 (IST) Subject: [Python-Dev] PEP announcements, and summaries In-Reply-To: <20010205141139.K733@thrak.cnri.reston.va.us> References: <20010205141139.K733@thrak.cnri.reston.va.us>, <3A7EF1A0.EDA4AD24@lemburg.com> Message-ID: <20010206021120.66A16A840@darjeeling.zadka.site.co.il> On Mon, 5 Feb 2001, Andrew Kuchling wrote: > * Try doing some PR for 2.1. OK, no one is going to enjoy hearing this, and I know this has been hashed to death, but the major stumbling block for PR for 2.0 was GPL-compat. I know everyone is doing their best to resolve this problem, and my heart felt thanks to them for doing this thankless job. Mostly, PR for 2.1 consists of writing our code using the 2.1 wonderful constructs (os.spawnv, for example, which is now x-p). I know I'd do that more easily if I knew 'apt-get install python' would let people use my code. -- Moshe Zadka This is a signature anti-virus. Please stop the spread of signature viruses! 
Fingerprint: 4BD1 7705 EEC0 260A 7F21 4817 C7FC A636 46D0 1BD6 From tim.one at home.com Tue Feb 6 03:26:26 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 5 Feb 2001 21:26:26 -0500 Subject: [Python-Dev] PEPS, version control, release intervals In-Reply-To: <20010205170106.D990@thrak.cnri.reston.va.us> Message-ID: [resending because it never showed up in the Python-Dev archives, & this is my last decent chance to do email this week ] [Jeremy Hylton] > What is the agenda for this session on Developers' Day? Since we're > the developers, it would be cool to know in advance. [Andrew Kuchling] > Does the session still exist? The brochure lists it as session D2-1, > but that's now listed as "Reworking Python's Numeric Model". I think that's right. I "volunteered" to endure numeric complaints, as there are at least a dozen contentious proposals in that area (from rigid 754 support to extensible literal notation for, e.g., users who hate stuffing rationals or gmp numbers or fixed-point decimals in strings; we could fill a whole day without even mentioning what 1/2 does!). Then, since collaborative development ceased being a topic on Python-Dev (been a long time since somebody brought that up here, other than to gripe about the SourceForge bug-du-jour or that Guido *still* doesn't accept every proposal ), the prospects for having an interesting session on that appeared dim. Maybe that was wrong; otoh, Jeremy just now failed to think of a relevant issue on his own . > And I'm also thinking of putting together a "Python 3000 Considered > Harmful" anti-presentation for the Py3K session... which is at the > same time as the session I'm responsible for. Don't tell anyone, but 2.1 *is* Python 3000 -- or as much of it as will be folded in for 2.1 <0.3 wink>. About people not moving to 2.0, the single specific reason I hear most often hinges on presumed lack of GPL compatibility. But then people worried about that *have* a specific reason stopping them. For everyone else, I know sysadmins who still refuse to move up from Perl 4. BTW, we recorded thousands of downloads of 2.0 betas at BeOpen.com, and indeed more than 10,000 of the Windows installer alone. Then their download stats broke. SF's have been broken for a long time. So while we have no idea how many people are downloading now, the idea that people stayed away from 2.0 in droves is wrong. And 2.0-specific examples are common on c.l.py now from lots of people too. only-developers-are-in-a-rush-ly y'rs - tim From fredrik at effbot.org Tue Feb 6 04:58:48 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Tue, 6 Feb 2001 04:58:48 +0100 Subject: [Python-Dev] PEP announcements, and summaries References: <20010206020405.58D03A840@darjeeling.zadka.site.co.il> Message-ID: <00ce01c08ff1$1f03b1c0$e46940d5@hagrid> moshe wrote: > FWIW, I think they are excellent. agreed. > Maybe crosspost to c.l.py too, so it can get discussed > on the group more easily? 
+1 Cheers /F From nas at arctrix.com Tue Feb 6 05:56:12 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Mon, 5 Feb 2001 20:56:12 -0800 Subject: [Python-Dev] Setup.local is getting zapped In-Reply-To: <200102032110.QAA13074@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Sat, Feb 03, 2001 at 04:10:56PM -0500 References: <14971.26729.54529.333522@beluga.mojam.com> <14972.7656.829356.566021@beluga.mojam.com> <20010203092124.A30977@glacier.fnational.com> <200102032040.PAA04977@mercur.uphs.upenn.edu> <00c401c08e23$96b44510$e46940d5@hagrid> <200102032110.QAA13074@cj20424-a.reston1.va.home.com> Message-ID: <20010205205612.A7074@glacier.fnational.com> On Sat, Feb 03, 2001 at 04:10:56PM -0500, Guido van Rossum wrote: > Effbot wrote: > > why not just keep the old behaviour? > Agreed. Unless there's a GNU guideline somewhere. A few points: If typing make does not correctly rebuild the target then I consider it a bug with the makefile. Of course, this excludes things like upgrading the system between compiles. In that case, you should remove the config.cache file and re-run configure. Also, I'm uneasy about the makefile removing things it didn't create. I would be annoyed if I backed up a file using a .bak extension only to realize that "make clean" blew it away. Why does "clean" have to remove this stuff? Perhaps it would be useful if you explain the logic behind the old targets. Here is my rational: clean: Remove object files. They take up a bit of space. It will also force all .c files to be recompiled next time make is run. Remove compiled Python code as well. Maybe the interpreter has changed but the magic has not. clobber: Remove libraries as well. Maybe Setup or setup.py has been changed and I don't want some of the old shared libraries. distclean: Remove everything that might pollute a source distribution. Looking at this again I think the cleaning of configure stuff should be moved to clobber. OTOH, I have no problems with making the clean targets behave similarily to the ones in 2.0 if that's what people want. Neil From paulp at ActiveState.com Tue Feb 6 06:49:56 2001 From: paulp at ActiveState.com (Paul Prescod) Date: Mon, 05 Feb 2001 21:49:56 -0800 Subject: [Python-Dev] Pre-PEP: Python Character Model Message-ID: <3A7F9084.509510B8@ActiveState.com> I went to a very interesting talk about internationalization by Tim Bray, one of the editors of the XML spec and a real expert on i18n. It inspired me to wrestle one more time with the architectural issues in Python that are preventing us from saying that it is a really internationalized language. Those geek cruises aren't just about sun, surf and sand. There's a pretty high level of intellectual give and take also! Email me for more info... Anyhow, we deferred many of these issues (probably out of exhaustion) the last time we talked about it but we cannot and should not do so forever. In particular, I do not think that we should add more features for working with Unicode (e.g. unichr) before thinking through the issues. ----- Abstract Many of the world's written languages have more than 255 characters. Therefore Python is out of date in its insistence that "basic strings" are lists of characters with ordinals between 0 and 255. Python's basic character type must allow at least enough digits for Eastern languages. Problem Description Python's western bias stems from a variety of issues. The first problem is that Python's native character type is an 8-bit character. 
You can see that it is an 8-bit character by trying to insert a value with an ordinal higher than 255. Python should allow for ordinal numbers up to at least the size of a single Eastern language such as Chinese or Japanese. Whenever a Python file object is "read", it returns one of these lists of 8-byte characters. The standard file object "read" method can never return a list of Chinese or Japanese characters. This is an unacceptable state of affairs in the 21st century. Goals 1. Python should have a single string type. It should support Eastern characters as well as it does European characters. Operationally speaking: type("") == type(chr(150)) == type(chr(1500)) == type(file.read()) 2. It should be easier and more efficient to encode and decode information being sent to and retrieved from devices. 3. It should remain possible to work with the byte-level representation. This is sometimes useful for for performance reasons. Definitions Character Set A character set is a mapping from integers to characters. Note that both integers and characters are abstractions. In other words, a decision to use a particular character set does not in any way mandate a particular implementation or representation for characters. In Python terms, a character set can be thought of as no more or less than a pair of functions: ord() and chr(). ASCII, for instance, is a pair of functions defined only for 0 through 127 and ISO Latin 1 is defined only for 0 through 255. Character sets typically also define a mapping from characters to names of those characters in some natural language (often English) and to a simple graphical representation that native language speakers would recognize. It is not possible to have a concept of "character" without having a character set. After all, characters must be chosen from some repertoire and there must be a mapping from characters to integers (defined by ord). Character Encoding A character encoding is a mechanism for representing characters in terms of bits. Character encodings are only relevant when information is passed from Python to some system that works with the characters in terms of representation rather than abstraction. Just as a Python programmer would not care about the representation of a long integer, they should not care about the representation of a string. Understanding the distinction between an abstract character and its bit level representation is essential to understanding this Python character model. A Python programmer does not need to know or care whether a long integer is represented as twos complement, ones complement or in terms of ASCII digits. Similarly a Python programmer does not need to know or care how characters are represented in memory. We might even change the representation over time to achieve higher performance. Universal Character Set There is only one standardized international character set that allows for mixed-language information. It is called the Universal Character Set and it is logically defined for characters 0 through 2^32 but practically is deployed for characters 0 through 2^16. The Universal Character Set is an international standard in the sense that it is standardized by ISO and has the force of law in international agreements. A popular subset of the Universal Character Set is called Unicode. The most popular subset of Unicode is called the "Unicode Basic Multilingual Plane (Unicode BMP)". The Unicode BMP has space for all of the world's major languages including Chinese, Korean, Japanese and Vietnamese. 
There are 2^16 characters in the Unicode BMP. The Unicode BMP subset of UCS is becoming a defacto standard on the Web. In any modern browser you can create an HTML or XML document with Ä­ and get back a rendered version of Unicode character 301. In other words, Unicode is becoming the defato character set for the Internet in addition to being the officially mandated character set for international commerce. In addition to defining ord() and chr(), Unicode provides a database of information about characters. Each character has an english language name, a classification (letter, number, etc.) a "demonstration" glyph and so forth. The Unicode Contraversy Unicode is not entirely uncontroversial. In particular there are Japanese speakers who dislike the way Unicode merges characters from various languages that were considered "the same" by the experts that defined the specification. Nevertheless Unicode is in used as the character set for important Japanese software such as the two most popular word processors, Ichitaro and Microsoft Word. Other programming languages have also moved to use Unicode as the basic character set instead of ASCII or ISO Latin 1. From memory, I believe that this is the case for: Java Perl JavaScript Visual Basic TCL XML is also Unicode based. Note that the difference between all of these languages and Python is that Unicode is the *basic* character type. Even when you type ASCII literals, they are immediately converted to Unicode. It is the author's belief this "running code" is evidence of Unicode's practical applicability. Arguments against it seem more rooted in theory than in practical problems. On the other hand, this belief is informed by those who have done heavy work with Asian characters and not based on my own direct experience. Python Character Set As discussed before, Python's native character set happens to consist of exactly 255 characters. If we increase the size of Python's character set, no existing code would break and there would be no cost in functionality. Given that Unicode is a standard character set and it is richer than that of Python's, Python should move to that character set. Once Python moves to that character set it will no longer be necessary to have a distinction between "Unicode string" and "regular string." This means that Unicode literals and escape codes can also be merged with ordinary literals and escape codes. unichr can be merged with chr. Character Strings and Byte Arrays Two of the most common constructs in computer science are strings of characters and strings of bytes. A string of bytes can be represented as a string of characters between 0 and 255. Therefore the only reason to have a distinction between Unicode strings and byte strings is for implementation simplicity and performance purposes. This distinction should only be made visible to the average Python programmer in rare circumstances. Advanced Python programmers will sometimes care about true "byte strings". They will sometimes want to build and parse information according to its representation instead of its abstract form. This should be done with byte arrays. It should be possible to read bytes from and write bytes to arrays. It should also be possible to use regular expressions on byte arrays. Character Encodings for I/O Information is typically read from devices such as file systems and network cards one byte at a time. Unicode BMP characters can have values up to 2^16 (or even higher, if you include all of UCS). There is a fundamental disconnect there. 
Each character cannot be represented as a single byte anymore. To solve this problem, there are several "encodings" for large characters that describe how to represent them as series of bytes. Unfortunately, there is not one, single, dominant encoding. There are at least a dozen popular ones including ASCII (which supports only 0-127), ISO Latin 1 (which supports only 0-255), others in the ISO "extended ASCII" family (which support different European scripts), UTF-8 (used heavily in C programs and on Unix), UTF-16 (preferred by Java and Windows), Shift-JIS (preferred in Japan) and so forth. This means that the only safe way to read data from a file into Python strings is to specify the encoding explicitly. Python's current assumption is that each byte translates into a character of the same ordinal. This is only true for "ISO Latin 1". Python should require the user to specify this explicitly instead. Any code that does I/O should be changed to require the user to specify the encoding that the I/O should use. It is the opinion of the author that there should be no default encoding at all. If you want to read ASCII text, you should specify ASCII explicitly. If you want to read ISO Latin 1, you should specify it explicitly. Once data is read into Python objects the original encoding is irrelevant. This is similar to reading an integer from a binary file, an ASCII file or a packed decimal file. The original bits and bytes representation of the integer is disconnected from the abstract representation of the integer object. Proposed I/O API This encoding could be chosen at various levels. In some applications it may make sense to specify the encoding on every read or write as an extra argument to the read and write methods. In most applications it makes more sense to attach that information to the file object as an attribute and have the read and write methods default the encoding to the property value. This attribute value could be initially set as an extra argument to the "open" function. Here is some Python code demonstrating a proposed API: fileobj = fopen("foo", "r", "ASCII") # only accepts values < 128 fileobj2 = fopen("bar", "r", "ISO Latin 1") # byte-values "as is" fileobj3 = fopen("baz", "r", "UTF-8") fileobj2.encoding = "UTF-16" # changed my mind! data = fileobj2.read(1024, "UTF-8" ) # changed my mind again For efficiency, it should also be possible to read raw bytes into a memory buffer without doing any interpretation: moredata = fileobj2.readbytes(1024) This will generate a byte array, not a character string. This is logically equivalent to reading the file as "ISO Latin 1" (which happens to map bytes to characters with the same ordinals) and generating a byte array by copying characters to bytes but it is much more efficient. Python File Encoding It should be possible to create Python files in any of the common encodings that are backwards compatible with ASCII. This includes ASCII itself, all language-specific "extended ASCII" variants (e.g. ISO Latin 1), Shift-JIS and UTF-8 which can actually encode any UCS character value. The precise variant of "super-ASCII" must be declared with a specialized comment that precedes any other lines other than the shebang line if present. It has a syntax like this: #?encoding="UTF-8" #?encoding="ISO-8859-1" ... #?encoding="ISO-8859-9" #?encoding="Shift_JIS" For now, this is the complete list of legal encodings. Others may be added in the future. 
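A tool that wanted to honour this declaration could detect it with a few lines of Python. The helper below is only a sketch of the idea; the function name, and the exact rule for skipping a shebang line, are assumptions rather than part of the proposal:

    import re

    _ENCODING_DECL = re.compile(r'^#\?encoding="([^"]+)"')

    def source_encoding(lines):
        """Return the encoding declared in a Python source file, or None."""
        if lines and lines[0].startswith("#!"):   # skip a shebang line if present
            lines = lines[1:]
        if lines:
            match = _ENCODING_DECL.match(lines[0])
            if match:
                return match.group(1)
        return None

Whether an unrecognized encoding name should be an error is left open here.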
Python files which use non-ASCII characters without defining an encoding should be immediately deprecated and made illegal in some future version of Python. C APIs The only time representation matters is when data is being moved from Python's internal model to something outside of Python's control or vice versa. Reading and writing from a device is a special case discussed above. Sending information from Python to C code is also an issue. Python already has a rule that allows the automatic conversion of characters up to 255 into their C equivalents. Once the Python character type is expanded, characters outside of that range should trigger an exception (just as converting a large long integer to a C int triggers an exception). Some might claim it is inappropriate to presume that the character-for- byte mapping is the correct "encoding" for information passing from Python to C. It is best not to think of it as an encoding. It is merely the most straightforward mapping from a Python type to a C type. In addition to being straightforward, I claim it is the best thing for several reasons: * It is what Python already does with string objects (but not Unicode objects). * Once I/O is handled "properly", (see above) it should be extremely rare to have characters in strings above 128 that mean anything OTHER than character values. Binary data should go into byte arrays. * It preserves the length of the string so that the length C sees is the same as the length Python sees. * It does not require us to make an arbitrary choice of UTF-8 versus UTF-16. * It means that C extensions can be internationalized by switching from C's char type to a wchar_t and switching from the string format code to the Unicode format code. Python's built-in modules should migrate from char to wchar_t (aka Py_UNICODE) over time. That is, more and more functions should support characters greater than 255 over time. Rough Implementation Requirements Combine String and Unicode Types: The StringType and UnicodeType objects should be aliases for the same object. All PyString_* and PyUnicode_* functions should work with objects of this type. Remove Unicode String Literals Ordinary string literals should allow large character escape codes and generate Unicode string objects. Unicode objects should "repr" themselves as Python string objects. Unicode string literals should be deprecated. Generalize C-level Unicode conversion The format string "S" and the PyString_AsString functions should accept Unicode values and convert them to character arrays by converting each value to its equivalent byte-value. Values greater than 255 should generate an exception. New function: fopen fopen should be like Python's current open function except that it should allow and require an encoding parameter. The file objects returned by it should be encoding aware. fopen should be considered a replacement for open. open should eventually be deprecated. Add byte arrays The regular expression library should be generalized to handle byte arrays without converting them to Python strings. This will allow those who need to work with bytes to do so more efficiently. In general, it should be possible to use byte arrays where-ever it is possible to use strings. Byte arrays could be thought of as a special kind of "limited but efficient" string. Arguably we could go so far as to call them "byte strings" and reuse Python's current string implementation. The primary differences would be in their "repr", "type" and literal syntax. 
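The fopen function proposed above can already be approximated on top of the codecs module that ships with Python 2.0. The sketch below is only an illustration of the intended behaviour: readbytes() and the writable encoding attribute are hypothetical, and error handling is omitted.

    import codecs

    class EncodingAwareFile:
        """Rough sketch of the proposed encoding-aware file object."""
        def __init__(self, path, mode, encoding):
            self.encoding = encoding
            self._stream = codecs.open(path, mode, encoding)
        def read(self, size=-1):
            return self._stream.read(size)      # decoded characters, not bytes
        def write(self, text):
            self._stream.write(text)            # encoded on the way out
        def readbytes(self, size=-1):
            # hypothetical: hand back raw bytes with no decoding at all
            return self._stream.stream.read(size)
        def close(self):
            self._stream.close()

    def fopen(path, mode="r", encoding=None):
        if encoding is None:
            raise ValueError("an explicit encoding is required")
        return EncodingAwareFile(path, mode, encoding)

With such a wrapper, fopen("foo", "r", "ASCII") would behave roughly as in the examples given earlier.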
In a sense we would have kept the existing distinction between Unicode strings and 8-bit strings but made Unicode the "default" and provided 8-bit strings as an efficient alternative.

Appendix: Using Non-Unicode character sets

Let's presume that a linguistics researcher objected to the unification of Han characters in Unicode and wanted to invent a character set that included separate characters for all Chinese, Japanese and Korean character sets. Perhaps they also wanted to support some non-standard character set like Klingon. Klingon is actually scheduled to become part of Unicode eventually but let's presume it wasn't. This section will demonstrate that this researcher is no worse off under the new system than they were under historical Python. Adopting Unicode as a standard has no downside for someone in this situation. They have several options under the new system:

1. Ignore Unicode

Read in the bytes using the encoding "RAW" which would mean that each byte would be translated into a character between 0 and 255. It would be a synonym for ISO Latin 1. Now you can process the data using exactly the same Python code that you would have used in Python 1.5 through Python 2.0. The only difference is that the in-memory representation of the data MIGHT be less space efficient because Unicode characters MIGHT be implemented internally as 16 or 32 bit integers. This solution is the simplest and easiest to code.

2. Use Byte Arrays

As discussed earlier, a byte array is like a string where the characters are restricted to characters between 0 and 255. The only virtues of byte arrays are that they enforce this rule and they can be implemented in a more memory-efficient manner. According to the proposal, it should be possible to load data into a byte array (or "byte string") using the "readbytes" method. This solution is the most efficient.

3. Use Unicode's Private Use Area (PUA)

Unicode is an extensible standard. There are certain character codes reserved for private use between consenting parties. You could map characters like Klingon or certain Korean ideographs into the private use area (a small sketch of this option follows the appendix). Obviously the Unicode character database would not have meaningful information about these characters and rendering systems would not know how to render them. But this situation is no worse than in today's Python. There is no character database for arbitrary character sets and there is no automatic way to render them. One limitation of this option is that the Private Use Area can only handle so many characters. The BMP PUA can hold thousands and if we step up to "full" Unicode support we have room for hundreds of thousands. This solution gets the maximum benefit from Unicode for the characters that are defined by Unicode without losing the ability to refer to characters outside of Unicode.

4. Use A Higher Level Encoding

You could wrap Korean characters in ... tags. You could describe a character as \KLINGON-KAHK (i.e. 13 Unicode characters). You could use a special Unicode character as an "escape flag" to say that the next character should be interpreted specially. This solution is the most self-descriptive and extensible.

In summary, expanding Python's character type to support Unicode characters does not restrict even the most esoteric, Unicode-hostile types of text processing. Therefore there is no basis for objecting to Unicode as some form of restriction. Those who need to use another logical character set have as much ability to do so as they always have.
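As promised under option 3, here is a small sketch of how private code points could be carved out of the BMP Private Use Area (U+E000 through U+F8FF). The helper names and the particular repertoire are invented for illustration:

    PUA_BASE = 0xE000                  # first code point of the BMP Private Use Area
    PUA_SIZE = 0xF8FF - PUA_BASE + 1   # 6400 private code points in the BMP

    # A made-up private repertoire, e.g. a few Klingon-ish letters.
    PRIVATE_NAMES = ["KAHK", "TLH", "GH"]

    def private_chr(index):
        """Map a repertoire index onto a private-use character."""
        if not 0 <= index < PUA_SIZE:
            raise ValueError("outside the BMP Private Use Area")
        return unichr(PUA_BASE + index)

    def private_ord(character):
        """Inverse mapping: private-use character back to a repertoire index."""
        return ord(character) - PUA_BASE

    klingon_kahk = private_chr(PRIVATE_NAMES.index("KAHK"))   # u'\ue000'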
Conclusion Python needs to support international characters. The "ASCII" of internationalized characters is Unicode. Most other languages have moved or are moving their basic character and string types to support Unicode. Python should also. From moshez at zadka.site.co.il Tue Feb 6 09:48:15 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Tue, 6 Feb 2001 10:48:15 +0200 (IST) Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta In-Reply-To: <20010205170340.A3101@thyrsus.com> References: <20010205170340.A3101@thyrsus.com>, <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> Message-ID: <20010206084815.E63E5A840@darjeeling.zadka.site.co.il> On Mon, 5 Feb 2001, "Eric S. Raymond" wrote: > (Copying Michael Tiemann on this, as he can actually get Red Hat to move...) Copying to debian-python, since it's an important issue there too... > I've investigated this. The state of the Red Hat 7.1 beta seem to be > that it will include both 2.0 and 1.5.2; there are separate python and > python2 RPMs. This would be OK, but I don't know which version will be > called by "/usr/bin/env python". That's how woody works now, and the binaries are called python and python2. Note that they are not managed by the alternatives mechanism -- Joey Hess explained the bad experience perl had with that. I think it's thought of as a temporary issue, and the long-term solution would be to move to Python 2.1. Not sure what all the packages who install in /usr/lib/python1.5 are going to do about it. I'm prepared to adopt htmlgen and python-imaging to convert them if it's needed. -- Moshe Zadka This is a signature anti-virus. Please stop the spread of signature viruses! Fingerprint: 4BD1 7705 EEC0 260A 7F21 4817 C7FC A636 46D0 1BD6 From ping at lfw.org Tue Feb 6 10:11:31 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Tue, 6 Feb 2001 01:11:31 -0800 (PST) Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: <042701c08fb6$fd382970$e46940d5@hagrid> Message-ID: On Mon, 5 Feb 2001, Fredrik Lundh wrote: > yeah, don't forget unpacking assignments: > > assert len(dict) == 3 > { k1:v1, k2:v2, k3:v3 } = dict I think this is a total non-issue for the following reasons: 1. Recall the original philosophy behind the list/tuple split. Lists and dicts are usually variable-length homogeneous structures, and therefore it makes sense for them to be mutable. Tuples are usually fixed-length heterogeneous structures, and so it makes sense for them to be immutable and unpackable. 2. In all the Python programs i've ever seen or written, i've never known or expected a dictionary to have a particular fixed length. 3. Since the items come back in random order, there's no point in binding individual ones to individual variables. It's only ever useful to iterate over the key/value pairs. In short, i can't see how anyone would ever want to do this. (Sorry for being the straight man, if you were in fact joking...) -- ?!ng "There's no point in being grown up if you can't be childish sometimes." -- Dr. Who From mal at lemburg.com Tue Feb 6 11:49:00 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Tue, 06 Feb 2001 11:49:00 +0100 Subject: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> Message-ID: <3A7FD69C.1708339C@lemburg.com> [pre-PEP] You have a lot of good points in there (also some inaccuracies) and I agree that Python should move to using Unicode for text data and arrays for binary data. 
Some things you may be missing though is that Python already has support for a few features you mention, e.g. codecs.open() provide more or less what you have in mind with fopen() and the compiler can already unify Unicode and string literals using the -U command line option. What you don't talk about in the PEP is that Python's stdlib isn't even Unicode aware yet, and whatever unification steps we take, this project will have to preceed it. The problem with making the stdlib Unicode aware is that of deciding which parts deal with text data or binary data -- the code sometimes makes assumptions about the nature of the data and at other times it simply doesn't care. In this light I think you ought to focus Python 3k with your PEP. This will also enable better merging techniques due to the lifting of the type/class difference. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From ping at lfw.org Tue Feb 6 12:04:34 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Tue, 6 Feb 2001 03:04:34 -0800 (PST) Subject: [Python-Dev] Iterators (PEP 234) In-Reply-To: <3A7F2B07.2D0D1460@lemburg.com> Message-ID: On 5 Feb 2001, M.-A. Lemburg wrote: > > > The .iterator() method would have to return an object which > > > provides an iterator API (at C level to get the best performance). > > > > Okay, provide an example. Write this iterator() method in Python. > > Now answer: how does 'for' know whether the thing to the right of > > 'in' is an iterator or a sequence? > > Simple: have the for-loop test for a type slot and have > it fallback to __getitem__ in case it doesn't find the slot API. For the third time: write an example, please. It will help a lot. > Sorry, Ping, I didn't know you have a PEP for iterators already. I posted it on this very boutique (i mean, mailing list) a week ago and messages have been going back and forth on its thread since then. On 31 Jan 2001, Ka-Ping Yee wrote: | Okay, i have written a draft PEP that tries to combine the | "elt in dict", custom iterator, and "for k:v" issues into a | coherent proposal. Have a look: | | http://www.lfw.org/python/pep-iterators.txt | http://www.lfw.org/python/pep-iterators.html Okay. I apologize for my impatient tone, as it comes from the expectation that anyone would have read the document before trying to discuss it. I am very happy to get *new* information, the discovery of new errors in my thinking, better and interesting arguments; it's just that it's exasperating to see arguments repeated that were already made, or objections raised that were already carefully thought out and addressed. From now on, i'll stop resisting the urge to paste the text of proposals inline (instead of politely posting just URLs) so you won't miss them. > Done. Didn't know it exists, though (why isn't the PEP# > in the subject line ?). It didn't have a number at the time i posted it. Thank you for updating the subject line. > Since the object can have multiple methods to construct > iterators, all you need is *one* iterator API. You don't > need a slot which returns an iterator object -- leave > that decision to the programmer, e.g. you can have: > > for key in dict.xkeys(): > for value in dict.xvalues(): > for items in dict.xitems(): Three points: 1. We have syntactic support for mapping creation and lookup, and syntactic support for mapping iteration should mirror it. 2. 
IMHO for key:value in dict: is much easier to read and explain than for (key, value) in dict.xitems(): (Greg? Could you test this claim with a survey question?) To the newcomer, the former is easy to understand at a surface level. The latter exposes the implementation (an implementation that is still there in PEP 234, but that the programmer only has to worry about if they are going deeper and writing custom iteration behaviour). This separates the work of learning into two small, digestible pieces. 3. Furthermore, this still doesn't solve the backward-compatibility problem that PEP 234 takes great care to address! If you write your for-loops for (key, value) in dict.xitems(): then you are screwed if you try to replace dict with any kind of user-implemented dictionary-like replacement (since you'd have to go back and implement the xitems() method on everything). If, in order to maintain compatibility with the existing de-facto dictionary interface, you write your for-loops for (key, value) in dict.items(): then now you are screwed if dict is a built-in dictionary, since items() is supposed to construct a list, not an iterator. > for entry in matrix.xrow(1): > for entry in matrix.xcolumn(2): > for entry in matrix.xdiag(): These are fine, since matrices are not core data types with syntactic support or a de-facto emulation protocol. > for i,element in sequence.xrange(): This is just as bad as the xitems() issue above -- probably worse -- since nobody implements xrange() on sequence-like objects, so now you've broken compatibility with all of those. We want this feature to smoothly extend and work with existing objects with a minimum of rewriting, ideally none. PEP 234 achieves this ideal. > Since for-loops can check for the type slot, they can use an > optimized implementation which avoids the creation of > temporary integer objects and leave the state-keeping to the > iterator which can usually provide a C based storage for it with > much better performance. This statement, i believe, is orthogonal to both proposals. > Note that with this kind of interface, there is no need to > add "Mapping Iterators" or "Sequence Iterators" as special > cases, since these are easily implemented using the above > iterators. I think this really just comes down to one key difference between our points of view here. Correct me if you disagree: You seem to be suggesting that we should only consider a protocol for sequences, whereas PEP 234 talks about both sequences and mappings. I argue that consideration for mappings is worthwhile because: 1. Dictionaries are a built-in type with syntactic and core implementation support. 2. Iteration over dictionaries is very common and should be spelled in an easily understood fashion. 3. Both sequence and mapping protocols are formalized in the core (with PySequenceMethods and PyMappingMethods). 4. Both sequence and mapping protocols are documented and used in Python (__getitem__, keys, values, etc.). 5. There are many, many sequence-like and mapping-like objects out there, implemented both in Python and in C, which adhere to these protocols. (There is also the not-insignificant side benefit of finally having a decent way to get the indices while you're iterating over a sequence, which i've wanted fairly often.) 
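To make point 5 concrete, the kind of object at stake is any class that follows the existing de-facto mapping protocol, roughly like the sketch below (the class itself is invented for illustration). It already works with "for (key, value) in obj.items():" today, but would additionally need an xitems() method under the alternative proposal:

    class Config:
        """Minimal mapping-like object following the de-facto protocol."""
        def __init__(self, data):
            self._data = data
        def __getitem__(self, key):
            return self._data[key]
        def keys(self):
            return self._data.keys()
        def values(self):
            return self._data.values()
        def items(self):
            return self._data.items()
        def has_key(self, key):
            return self._data.has_key(key)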
-- ?!ng From ping at lfw.org Tue Feb 6 12:32:27 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Tue, 6 Feb 2001 03:32:27 -0800 (PST) Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: <200102052022.PAA05449@cj20424-a.reston1.va.home.com> Message-ID: On Mon, 5 Feb 2001, Guido van Rossum wrote: > > [Ping] > > I think your survey shows that the PEP made the right choices. > > That is, it supports the position that if 'for key:value' is > > supported, then 'for key:' and 'for :value' should be supported, > > but 'for x in dict:' should not. It also shows that 'for index:' > > should be supported on sequences, which the PEP suggests. > > But then we should review the wisdom of using "if x in dict" as a > shortcut for "if dict.has_key(x)" again. Everything is tied together! Okay. Here's the philosophy; i'll describe my thinking more explicitly. Presumably we can all agree that if you ask to iterate over things "in" a sequence, you clearly want the items in the sequence, not their integer indices. You care about the data *you* put in the container. In the case of a list, you care about the items more than these additional integers that got supplied as a result of using an ordered data structure. So the meaning of for thing in sequence: is pretty clear. The meaning of for thing in mapping: is less clear, since both the keys and the values are interesting data to you. If i ask you to "get me all the things in the dictionary", it's not so obvious whether you should get me a list of just the words, just the definitions, or both (probably both, i suppose). But, if i ask you to "find 'aardvark' in the dictionary" or i ask you "is 'aardvark' in the dictionary?" it's completely obvious what i mean. "if key in dict:" makes sense both by this analogy to common use, and by an argument from efficiency (even the most rudimentary understanding of how a dictionary works is enough to see why we look up keys rather than values). In fact, we *call* it a dictionary because it works like a real dictionary: it's designed for data lookup in one direction, from key to value. "if thing in container" is about *finding* something specific. "for thing in container" is about getting everything out. Now, i know this isn't the strongest argument in the world, and i can see the potential objection that the two aren't consistent, but i think it's a very small thing that only has to be explained once, and then is easy to remember and understand. I consider this little difference less of an issue than the hasattr/has_key inconsistency that it will largely replace. We make expectations clear: for item in sequence: continues to mean, "i expect a sequence", exactly as it does now. When not given a sequence, the 'for' loop complains. Nothing could break, as the interpretation of this loop is unchanged. These three forms: for k:v in anycontainer: for k: in anycontainer: for :v in anycontainer: mean: "i am expecting any indexable thing, where ctr[k] = v". As far as the syntax goes, that's all there is to it: for item in sequence: # only on sequences for k:v in anycontainer: # get keys and values on anything for k: in anycontainer: # just keys for :v in anycontainer: # just values -- ?!ng "There's no point in being grown up if you can't be childish sometimes." -- Dr. Who From mal at lemburg.com Tue Feb 6 12:54:50 2001 From: mal at lemburg.com (M.-A. 
Lemburg) Date: Tue, 06 Feb 2001 12:54:50 +0100 Subject: [Python-Dev] Iterators (PEP 234) References: Message-ID: <3A7FE60A.261CEE6A@lemburg.com> Ka-Ping Yee wrote: > > On 5 Feb 2001, M.-A. Lemburg wrote: > > > > The .iterator() method would have to return an object which > > > > provides an iterator API (at C level to get the best performance). > > > > > > Okay, provide an example. Write this iterator() method in Python. > > > Now answer: how does 'for' know whether the thing to the right of > > > 'in' is an iterator or a sequence? > > > > Simple: have the for-loop test for a type slot and have > > it fallback to __getitem__ in case it doesn't find the slot API. > > For the third time: write an example, please. It will help a lot. Ping, what do you need an example for ? The above sentence says it all: for x in obj: ... This will work as follows: 1. if obj exposes the iteration slot, say tp_nextitem, the for loop will call this slot without argument and assign the returned object to x 2. if obj does not expose tp_nextitem, then the for loop will construct an integer starting at 0 and pass this to the sq_item slot or __getitem__ method and assign the returned value to x; the integer is then replaced with an incremented integer 3. both techniques work until the slot or method in question returns an IndexError exception The current implementation doesn't have 1. This is the only addition it takes to get iterators to work together well with the for-loop -- there are no backward compatibility issues here, because the tp_nextitem slot will be a new one. Since the for-loop can avoid creating temporary integers, iterations will generally run a lot faster than before. Also, iterators have access to the object's internal representation, so data access is also faster. > > Sorry, Ping, I didn't know you have a PEP for iterators already. > > I posted it on this very boutique (i mean, mailing list) a week ago > and messages have been going back and forth on its thread since then. > > On 31 Jan 2001, Ka-Ping Yee wrote: > | Okay, i have written a draft PEP that tries to combine the > | "elt in dict", custom iterator, and "for k:v" issues into a > | coherent proposal. Have a look: > | > | http://www.lfw.org/python/pep-iterators.txt > | http://www.lfw.org/python/pep-iterators.html > > Okay. I apologize for my impatient tone, as it comes from the > expectation that anyone would have read the document before trying > to discuss it. I am very happy to get *new* information, the > discovery of new errors in my thinking, better and interesting > arguments; it's just that it's exasperating to see arguments > repeated that were already made, or objections raised that were > already carefully thought out and addressed. From now on, i'll > stop resisting the urge to paste the text of proposals inline > (instead of politely posting just URLs) so you won't miss them. I must have missed those postings... don't have time to read all of python-dev anymore :-( > > Done. Didn't know it exists, though (why isn't the PEP# > > in the subject line ?). > > It didn't have a number at the time i posted it. Thank you > for updating the subject line. > > > Since the object can have multiple methods to construct > > iterators, all you need is *one* iterator API. You don't > > need a slot which returns an iterator object -- leave > > that decision to the programmer, e.g. you can have: > > > > for key in dict.xkeys(): > > for value in dict.xvalues(): > > for items in dict.xitems(): > > Three points: > > 1. 
We have syntactic support for mapping creation and lookup, > and syntactic support for mapping iteration should mirror it. > > 2. IMHO > > for key:value in dict: > > is much easier to read and explain than > > for (key, value) in dict.xitems(): > > (Greg? Could you test this claim with a survey question?) > > To the newcomer, the former is easy to understand at a surface > level. The latter exposes the implementation (an implementation > that is still there in PEP 234, but that the programmer only has > to worry about if they are going deeper and writing custom > iteration behaviour). This separates the work of learning into > two small, digestible pieces. Tuples are well-known basic Python types. Why should (key,value) be any harder to understand than key:value. What would you tell a newbie that writes: for key:value in sequence: .... where sequence is a list of tuples and finds that this doesn't work ? Besides, the items() method has been around for ages, so switching from .items() to .xitems() in programs will be just as easy as switching from range() to xrange(). I am -0 on the key:value thingie. If you want it as a way to construct or split associations, fine. But it is really not necessary to be able to iterate over dictionaries. > 3. Furthermore, this still doesn't solve the backward-compatibility > problem that PEP 234 takes great care to address! If you write > your for-loops > > for (key, value) in dict.xitems(): > > then you are screwed if you try to replace dict with any kind of > user-implemented dictionary-like replacement (since you'd have to > go back and implement the xitems() method on everything). Why is that ? You'd just have to add .xitems() to UserDict and be done with it. This is how we have added new dictionary methods all along. I don't see your point here. Sure, if you want to use a new feature you will have to think about whether it can be used with your data-types. What you are trying to do here is maintain forward compatibility at the cost of making iteration much more complicated than it really is. > If, in order to maintain compatibility with the existing de-facto > dictionary interface, you write your for-loops > > for (key, value) in dict.items(): > > then now you are screwed if dict is a built-in dictionary, since > items() is supposed to construct a list, not an iterator. I'm not breaking backward compatibility -- the above will still work like it has before since lists don't have the tp_nextitem slot. > > for entry in matrix.xrow(1): > > for entry in matrix.xcolumn(2): > > for entry in matrix.xdiag(): > > These are fine, since matrices are not core data types with > syntactic support or a de-facto emulation protocol. > > > for i,element in sequence.xrange(): > > This is just as bad as the xitems() issue above -- probably worse -- > since nobody implements xrange() on sequence-like objects, so now > you've broken compatibility with all of those. > > We want this feature to smoothly extend and work with existing objects > with a minimum of rewriting, ideally none. PEP 234 achieves this ideal. Again, you are trying to achieve forward compatibility. If people want better performance, than they will have to add new functionality to their types -- one way or another. > > Since for-loops can check for the type slot, they can use an > > optimized implementation which avoids the creation of > > temporary integer objects and leave the state-keeping to the > > iterator which can usually provide a C based storage for it with > > much better performance. 
> > This statement, i believe, is orthogonal to both proposals. > > > Note that with this kind of interface, there is no need to > > add "Mapping Iterators" or "Sequence Iterators" as special > > cases, since these are easily implemented using the above > > iterators. > > I think this really just comes down to one key difference > between our points of view here. Correct me if you disagree: > > You seem to be suggesting that we should only consider a > protocol for sequences, whereas PEP 234 talks about both > sequences and mappings. No. I'm suggesting to add a low-level "give me the next item in the bag" and move the "how to get the next item" logic into an iterator object. This will still allow you to iterate over sequences and mappings, so I don't understand why you keep argueing for adding new syntax and slots to be able to iterate over dictionaries. > I argue that consideration for mappings is worthwhile because: > > 1. Dictionaries are a built-in type with syntactic and > core implementation support. > > 2. Iteration over dictionaries is very common and should > be spelled in an easily understood fashion. > > 3. Both sequence and mapping protocols are formalized in > the core (with PySequenceMethods and PyMappingMethods). > > 4. Both sequence and mapping protocols are documented and > used in Python (__getitem__, keys, values, etc.). > > 5. There are many, many sequence-like and mapping-like > objects out there, implemented both in Python and in C, > which adhere to these protocols. > > (There is also the not-insignificant side benefit of finally > having a decent way to get the indices while you're iterating > over a sequence, which i've wanted fairly often.) Agreed. I'd suggest to implement generic iterators which implements your suggestions and put them into the builins or a special iterator module... from iterators import xitems, xkeys, xvalues for key, value in xitems(dict): for key in xkeys(dict): for value in xvalues(dict): Other objects can then still have their own iterators by exposing special methods which construct special iterators. The for-loop will continue to work as always and happily accept __getitem__ compatible or tp_nextitem compatible objects as right-hand argument. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From thomas at xs4all.net Tue Feb 6 13:11:42 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Tue, 6 Feb 2001 13:11:42 +0100 Subject: [Python-Dev] PEP announcements, and summaries In-Reply-To: <200102051937.OAA01402@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Feb 05, 2001 at 02:37:28PM -0500 References: <3A7EF1A0.EDA4AD24@lemburg.com> <20010205141139.K733@thrak.cnri.reston.va.us> <200102051937.OAA01402@cj20424-a.reston1.va.home.com> Message-ID: <20010206131142.B9551@xs4all.nl> On Mon, Feb 05, 2001 at 02:37:28PM -0500, Guido van Rossum wrote: > (Hmm, I wonder if we could run this on starship.python.net instead? > That machine probably has more spare cycles.) Hmm.... eggs... basket... spam... ham... Given starships's track record I'd hesitate before running it on that :-) But then, 5 years of system administration has made me a highly superstitious person. I-still-boot-old-SCSI-tape-libraries-with-dead-chickens-in-reach-ly y'rs -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! 
From thomas at xs4all.net Tue Feb 6 13:17:31 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Tue, 6 Feb 2001 13:17:31 +0100 Subject: [Python-Dev] PEP announcements, and summaries In-Reply-To: ; from akuchlin@mems-exchange.org on Mon, Feb 05, 2001 at 12:32:31PM -0500 References: Message-ID: <20010206131731.C9551@xs4all.nl> On Mon, Feb 05, 2001 at 12:32:31PM -0500, Andrew Kuchling wrote: > One thing about the reaction to the 2.1 alphas is that many people > seem *surprised* by some of the changes, even though PEPs have been > written, discussed, and mentioned in python-dev summaries. Maybe the > PEPs and their status need to be given higher visibility; I'd suggest > sending a brief note of status changes (new draft PEPs, acceptance, > rejection) to comp.lang.python.announce. Or, (wait, wait) maybe, (don't shoot me) we should change the python-dev construct (nono, wait, wait!) - that is, instead of it being a write-only list with readable archives, have it be a list completely open for subscription, but with post access to a limited number of people (the current subscribers.) I know of at least two people who want to read python-dev, but not by starting up netscape every day. (One of them already tried subscribing to python-dev once ;) Or perhaps just digests, though I don't really see the benifit of that (or of the current approach, really.) It's just much easier to keep up and comment on features if it arrives in your mailbox every day. (Besides, it would prompt Barry to write easy ways to manage such list of posters, which is slightly lacking in Mailman right now ) Ok-*now*-you-can-shoot-me-ly y'rs -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From ping at lfw.org Tue Feb 6 13:25:58 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Tue, 6 Feb 2001 04:25:58 -0800 (PST) Subject: [Python-Dev] Iterators (PEP 234) In-Reply-To: <3A7FE60A.261CEE6A@lemburg.com> Message-ID: On Tue, 6 Feb 2001, M.-A. Lemburg wrote: > > For the third time: write an example, please. It will help a lot. > > Ping, what do you need an example for ? The above sentence says > it all: *sigh* I give up. I'm not going to ask again. Real examples are a good idea when considering any proposal. (a) When you do a real example, you usually discover mistakes or things you didn't think of in your design. (b) We can compare it directly to other examples to see how easy or hard it is to write and understand code that uses the new protocol. (c) We can come up with interesting cases in practice to see if there are limitations in any proposal. Now that you have a proposal in slightly more detail, a few missing pieces are evident. How would you implement a *Python* class that supports iteration? For instance, write something that has the effect of the FileLines class in PEP 234. How would you implement an object that can be iterated over more than once, at the same time or at different times? It's not clear to me how the single tp_nextitem slot can handle that. > Since the for-loop can avoid creating temporary integers, > iterations will generally run a lot faster than before. Also, > iterators have access to the object's internal representation, > so data access is also faster. Again, completely orthogonal to both proposals. Regardless of the protocol, if you're implementing the iterator in C, you can use raw integers and internal access to make it fast. > > 2. 
IMHO > > > > for key:value in dict: > > > > is much easier to read and explain than > > > > for (key, value) in dict.xitems(): [...] > Tuples are well-known basic Python types. Why should > (key,value) be any harder to understand than key:value. It's mainly the business of calling the method and rearranging the data that i'm concerned about. Example 1: dict = {1: 2, 3: 4} for (key, value) in dict.items(): Explanation: The "items" method on the dict converts {1: 2, 3: 4} into a list of 2-tuples, [(1, 2), (3, 4)]. Then (key, value) is matched against each item of this list, and the two parts of each tuple are unpacked. Example 2: dict = {1: 2, 3: 4} for key:value in dict: Explanation: The "for" loop iterates over the key:value pairs in the dictionary, which you can see are 1:2 and 3:4. > What would you tell a newbie that writes: > > for key:value in sequence: > .... > > where sequence is a list of tuples and finds that this doesn't > work ? "key:value doesn't look like a tuple, does it?" > Besides, the items() method has been around for ages, so switching > from .items() to .xitems() in programs will be just as easy as > switching from range() to xrange(). It's not the same. xrange() is a built-in function that you call; xitems() is a method that you have to *implement*. > > for (key, value) in dict.xitems(): > > > > then you are screwed if you try to replace dict with any kind of > > user-implemented dictionary-like replacement (since you'd have to > > go back and implement the xitems() method on everything). > > Why is that ? You'd just have to add .xitems() to UserDict ...and cgi.FieldStorage, and dumbdbm._Database, and rfc822.Message, and shelve.Shelf, and bsddbmodule, and dbmmodule, and gdbmmodule, to name a few. Even if you expect (or force) people to derive all their dictionary-like Python classes from UserDict (which they don't, in practice), you can't derive C objects from UserDict. > > for (key, value) in dict.items(): > > > > then now you are screwed if dict is a built-in dictionary, since > > items() is supposed to construct a list, not an iterator. > > I'm not breaking backward compatibility -- the above will still > work like it has before since lists don't have the tp_nextitem > slot. What i mean is that Python programmers would no longer know how to write their 'for' loops. Should they use 'xitems', thus dooming their loop never to work with the majority of user-implemented mapping-like objects? Or should they use 'items', thus dooming their loop to run inefficiently on built-in dictionaries? > > We want this feature to smoothly extend and work with existing objects > > with a minimum of rewriting, ideally none. PEP 234 achieves this ideal. > > Again, you are trying to achieve forward compatibility. If people > want better performance, than they will have to add new functionality > to their types -- one way or another. Okay, i agree, it's forward compatibility. But it's something worth going for when you're trying to come up with a protocol. -- ?!ng "There's no point in being grown up if you can't be childish sometimes." -- Dr. 
Who From thomas at xs4all.net Tue Feb 6 13:44:47 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Tue, 6 Feb 2001 13:44:47 +0100 Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: <002201c08fa9$079a1f80$770a0a0a@nevex.com>; from gvwilson@ca.baltimore.com on Mon, Feb 05, 2001 at 02:22:50PM -0500 References: <002201c08fa9$079a1f80$770a0a0a@nevex.com> Message-ID: <20010206134447.D9551@xs4all.nl> On Mon, Feb 05, 2001 at 02:22:50PM -0500, Greg Wilson wrote: > OK, now here's the hard one. Clearly, Noshit. I ran into all of this while trying to figure out how to quick-hack implement it. My brain exploded while trying to grasp all implications, which is why I've been quiet on this issue -- I'm healing ;-P > (a) for i in someList: > has to continue to mean "iterate over the values". We've agreed that: > (b) for k:v in someDict: means "iterate through the items". (a) looks > like a special case of (b). I'm still not sure if I like the special syntax to iterate over dictionaries. Are we talking about iterators, or about special syntax to use said iterators in the niche application of dicts and mapping interfaces ? :) > I therefore asked my colleagues to guess what: > (c) for x in someDict: > did. They all said, "Iterates through the _values_ in the dict", > by analogy with (a). But how baffled were they when it didn't do what they expected it to do ? Did they go, 'oh shit, now what' ? > I then asked, "How do you iterate through the keys in a dict, or > the indices in a list?" They guessed: > (d) for x: in someContainer: Again, how baffled were they when you said it wasn't going to work ? Because (c) and (d) are just very light syntactic powdered sugar substitutes for 'k:v' where you just don't use one of the two. The extra name binding operation isn't going to cost you enough to really worry about, IMHO. -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From tismer at tismer.com Tue Feb 6 13:51:37 2001 From: tismer at tismer.com (Christian Tismer) Date: Tue, 06 Feb 2001 13:51:37 +0100 Subject: [Python-Dev] Iterators (PEP 234) References: <3A7FE60A.261CEE6A@lemburg.com> Message-ID: <3A7FF359.665C184B@tismer.com> "M.-A. Lemburg" wrote: > > Ka-Ping Yee wrote: > > Three points: > > > > 1. We have syntactic support for mapping creation and lookup, > > and syntactic support for mapping iteration should mirror it. > > > > 2. IMHO > > > > for key:value in dict: > > > > is much easier to read and explain than > > > > for (key, value) in dict.xitems(): > > > > (Greg? Could you test this claim with a survey question?) > > > > To the newcomer, the former is easy to understand at a surface > > level. The latter exposes the implementation (an implementation > > that is still there in PEP 234, but that the programmer only has > > to worry about if they are going deeper and writing custom > > iteration behaviour). This separates the work of learning into > > two small, digestible pieces. > > Tuples are well-known basic Python types. Why should > (key,value) be any harder to understand than key:value. > What would you tell a newbie that writes: > > for key:value in sequence: > .... > > where sequence is a list of tuples and finds that this doesn't > work ? Sorry about sneaking in. I do in fact think that the syntax addition of key:value is easier to understand. Beginners know the { key:value } syntax, so this is just natural. 
Givin him an error in your above example is a step to clarity, avoiding hard to find errors if somebody has a list of tuples and the above happens to work somehow, although he forgot to use .xitems(). > Besides, the items() method has been around for ages, so switching > from .items() to .xitems() in programs will be just as easy as > switching from range() to xrange(). It has been around for years, but key:value might be better. A little faster for sure since we don't build extra tuples. > I am -0 on the key:value thingie. If you want it as a way to > construct or split associations, fine. But it is really not > necessary to be able to iterate over dictionaries. > > > 3. Furthermore, this still doesn't solve the backward-compatibility > > problem that PEP 234 takes great care to address! If you write > > your for-loops > > > > for (key, value) in dict.xitems(): > > > > then you are screwed if you try to replace dict with any kind of > > user-implemented dictionary-like replacement (since you'd have to > > go back and implement the xitems() method on everything). > > Why is that ? You'd just have to add .xitems() to UserDict and > be done with it. This is how we have added new dictionary methods > all along. I don't see your point here. You really wouldn't stick with UserDict, but implement this on every object for speed. The key:value proposal is not only stronger through its extra syntactical strength, it is also smaller in code-size to implement. Having to force every "iterable" object to support a modified view of it via xitems() even doesn't look elegant to me. It forces key/value pairs to go through tupleization only for syntactical reasons. A weakness, not a strength. Object orientation gets at its limits here. If access to keys and values can be provided by a single implementation for all affected objects without adding new methods, this suggests to me that it is right to do so. +1 on key:value - ciao - chris -- Christian Tismer :^) Mission Impossible 5oftware : Have a break! Take a ride on Python's Kaunstr. 26 : *Starship* http://starship.python.net 14163 Berlin : PGP key -> http://wwwkeys.pgp.net PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF where do you want to jump today? http://www.stackless.com From gvwilson at ca.baltimore.com Tue Feb 6 14:00:26 2001 From: gvwilson at ca.baltimore.com (Greg Wilson) Date: Tue, 6 Feb 2001 08:00:26 -0500 (EST) Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: Message-ID: > > > > On Mon, 5 Feb 2001, Greg Wilson wrote: > > > > Based on my very-informal survey, if: > > > > for i in someList: > > > > works, then many people will assume that: > > > > for i in someDict: > > > > will also work, and yield values. > > > Ka-Ping Yee: > > > ...the latter is ambiguous (keys or values?)... > > Greg Wilson > > The latter is exactly as ambiguous as the former... I think this > > is a case where your (intimate) familiarity with the way Python > > works now is preventing you from getting into newbie headspace... > Ka-Ping Yee: > No, i don't think so. It seems quite possible to argue from first > principles that if you ask to iterate over things "in" a sequence, > you clearly want the items in the sequence, not their integer indices. Greg Wilson: Well, arguing from first principles, Aristotle was able to demonstrate that heavy objects fall faster than light ones :-). I'm basing my claim on the kind of errors students in my course make. 
Even after being shown half-a-dozen examples of Python for loops, many of them write: for i in someSequence: print someSequence[i] in their first exercise. Thanks, Greg From mal at lemburg.com Tue Feb 6 14:16:22 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Tue, 06 Feb 2001 14:16:22 +0100 Subject: [Python-Dev] Iterators (PEP 234) References: Message-ID: <3A7FF926.BBFB3E99@lemburg.com> Ka-Ping Yee wrote: > > On Tue, 6 Feb 2001, M.-A. Lemburg wrote: > > > For the third time: write an example, please. It will help a lot. > > > > Ping, what do you need an example for ? The above sentence says > > it all: > > *sigh* I give up. I'm not going to ask again. > > Real examples are a good idea when considering any proposal. > > (a) When you do a real example, you usually discover > mistakes or things you didn't think of in your design. > > (b) We can compare it directly to other examples to see > how easy or hard it is to write and understand code > that uses the new protocol. > > (c) We can come up with interesting cases in practice to > see if there are limitations in any proposal. > > Now that you have a proposal in slightly more detail, a few > missing pieces are evident. > > How would you implement a *Python* class that supports iteration? > For instance, write something that has the effect of the FileLines > class in PEP 234. I was just throwing in ideas, not a complete proposal. If that's what you want I can write up a complete proposal too and maybe even a patch to go with it. Exposing the tp_nextitem slot in Python classes via a __nextitem__ slot wouldn't be much of a problem. What I wanted to get across is the general idea behind my view of an iteration API and I believe that this idea has been made clear: I want a low-level API and move all the complicated object specific details into separate iterator objects. I don't see a point in trying to add complicated machinery to Python just to be able to iterate fast over some of the builtin types by special casing each object type. Let's please not add more special cases to the core. > How would you implement an object that can be iterated over more > than once, at the same time or at different times? It's not clear > to me how the single tp_nextitem slot can handle that. Put all that logic into the iterator objects. These can be as complicated as needed, either trying to work in generic ways, special cased for some builtin types or be specific to a single type. > > Since the for-loop can avoid creating temporary integers, > > iterations will generally run a lot faster than before. Also, > > iterators have access to the object's internal representation, > > so data access is also faster. > > Again, completely orthogonal to both proposals. Regardless of > the protocol, if you're implementing the iterator in C, you can > use raw integers and internal access to make it fast. > > > > 2. IMHO > > > > > > for key:value in dict: > > > > > > is much easier to read and explain than > > > > > > for (key, value) in dict.xitems(): > [...] > > Tuples are well-known basic Python types. Why should > > (key,value) be any harder to understand than key:value. > > It's mainly the business of calling the method and rearranging > the data that i'm concerned about. > > Example 1: > > dict = {1: 2, 3: 4} > for (key, value) in dict.items(): > > Explanation: > > The "items" method on the dict converts {1: 2, 3: 4} into > a list of 2-tuples, [(1, 2), (3, 4)]. 
Then (key, value) is > matched against each item of this list, and the two parts > of each tuple are unpacked. > > Example 2: > > dict = {1: 2, 3: 4} > for key:value in dict: > > Explanation: > > The "for" loop iterates over the key:value pairs in the > dictionary, which you can see are 1:2 and 3:4. Again, if you prefer the key:value notation, fine. This is orthogonal to the iteration API though and really only touches the case of mappings. > > Besides, the items() method has been around for ages, so switching > > from .items() to .xitems() in programs will be just as easy as > > switching from range() to xrange(). > > It's not the same. xrange() is a built-in function that you call; > xitems() is a method that you have to *implement*. You can put all that special logic into special iterators, e.g. a xitems iterator (see the end of my post). > > > for (key, value) in dict.xitems(): > > > > > > then you are screwed if you try to replace dict with any kind of > > > user-implemented dictionary-like replacement (since you'd have to > > > go back and implement the xitems() method on everything). > > > > Why is that ? You'd just have to add .xitems() to UserDict > > ...and cgi.FieldStorage, and dumbdbm._Database, and rfc822.Message, > and shelve.Shelf, and bsddbmodule, and dbmmodule, and gdbmmodule, > to name a few. Even if you expect (or force) people to derive all > their dictionary-like Python classes from UserDict (which they don't, > in practice), you can't derive C objects from UserDict. The same applies to your proposed interface: people will have to write new code in order to be able to use the new technology. I don't see that as a problem, though. > > > for (key, value) in dict.items(): > > > > > > then now you are screwed if dict is a built-in dictionary, since > > > items() is supposed to construct a list, not an iterator. > > > > I'm not breaking backward compatibility -- the above will still > > work like it has before since lists don't have the tp_nextitem > > slot. > > What i mean is that Python programmers would no longer know how to > write their 'for' loops. Should they use 'xitems', thus dooming > their loop never to work with the majority of user-implemented > mapping-like objects? Or should they use 'items', thus dooming > their loop to run inefficiently on built-in dictionaries? Hey, people who care will be aware of this difference. It is very easy to test for interfaces in Python, so detecting the best method (in case it matters) is simple. > > > We want this feature to smoothly extend and work with existing objects > > > with a minimum of rewriting, ideally none. PEP 234 achieves this ideal. > > > > Again, you are trying to achieve forward compatibility. If people > > want better performance, than they will have to add new functionality > > to their types -- one way or another. > > Okay, i agree, it's forward compatibility. But it's something > worth going for when you're trying to come up with a protocol. Sure, but is adding special cases everywhere really worth it ? From mal at lemburg.com Tue Feb 6 14:26:26 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Tue, 06 Feb 2001 14:26:26 +0100 Subject: [Python-Dev] Iterators (PEP 234) References: <3A7FE60A.261CEE6A@lemburg.com> <3A7FF359.665C184B@tismer.com> Message-ID: <3A7FFB82.30BE0703@lemburg.com> Christian Tismer wrote: > > "M.-A. Lemburg" wrote: > > > > Tuples are well-known basic Python types. Why should > > (key,value) be any harder to understand than key:value. 
> > What would you tell a newbie that writes: > > > > for key:value in sequence: > > .... > > > > where sequence is a list of tuples and finds that this doesn't > > work ? > > Sorry about sneaking in. I do in fact think that the syntax > addition of key:value is easier to understand. Beginners > know the { key:value } syntax, so this is just natural. > Givin him an error in your above example is a step to clarity, > avoiding hard to find errors if somebody has a list of > tuples and the above happens to work somehow, although he > forgot to use .xitems(). The problem is that key:value in sequence has a meaning under PEP234: key is the current index, value the tuple. > > Besides, the items() method has been around for ages, so switching > > from .items() to .xitems() in programs will be just as easy as > > switching from range() to xrange(). > > It has been around for years, but key:value might be better. > A little faster for sure since we don't build extra tuples. Small tuples are cheap and kept on the free list. I don't even think that key:value can do without them. Anyway, I've already said that I'm -0 on these thingies -- I would be +1 if Ping were to make key:value full flavoured associations (Jim Fulton has written a lot about these some years ago; I think they originated from SmallTalk). > > I am -0 on the key:value thingie. If you want it as a way to > > construct or split associations, fine. But it is really not > > necessary to be able to iterate over dictionaries. > > > > > 3. Furthermore, this still doesn't solve the backward-compatibility > > > problem that PEP 234 takes great care to address! If you write > > > your for-loops > > > > > > for (key, value) in dict.xitems(): > > > > > > then you are screwed if you try to replace dict with any kind of > > > user-implemented dictionary-like replacement (since you'd have to > > > go back and implement the xitems() method on everything). > > > > Why is that ? You'd just have to add .xitems() to UserDict and > > be done with it. This is how we have added new dictionary methods > > all along. I don't see your point here. > > You really wouldn't stick with UserDict, but implement this > on every object for speed. > The key:value proposal is not only stronger through its extra > syntactical strength, it is also smaller in code-size to implement. ...but it's a special case which we don't really need and it *only* works for mappings and then only if the mapping supports the new slots and methods required by PEP234. I don't buy the argument that PEP234 buys us fast iteration for free. Programmers will still have to write the code to implement the new slots and methods. > Having to force every "iterable" object to support a modified > view of it via xitems() even doesn't look elegant to me. > It forces key/value pairs to go through tupleization only > for syntactical reasons. A weakness, not a strength. > Object orientation gets at its limits here. If access to keys > and values can be provided by a single implementation for > all affected objects without adding new methods, this suggests > to me that it is right to do so. 
Hey, tuples are created for *every* function call, even C calls -- you can't be serious about getting much of a gain here ;-) -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From tismer at tismer.com Tue Feb 6 14:43:31 2001 From: tismer at tismer.com (Christian Tismer) Date: Tue, 06 Feb 2001 14:43:31 +0100 Subject: [Python-Dev] Iterators (PEP 234) References: <3A7FE60A.261CEE6A@lemburg.com> <3A7FF359.665C184B@tismer.com> <3A7FFB82.30BE0703@lemburg.com> Message-ID: <3A7FFF83.28FAB74F@tismer.com> "M.-A. Lemburg" wrote: > > Christian Tismer wrote: > > > > "M.-A. Lemburg" wrote: > > > > > > Tuples are well-known basic Python types. Why should > > > (key,value) be any harder to understand than key:value. > > > What would you tell a newbie that writes: > > > > > > for key:value in sequence: > > > .... > > > > > > where sequence is a list of tuples and finds that this doesn't > > > work ? > > > > Sorry about sneaking in. I do in fact think that the syntax > > addition of key:value is easier to understand. Beginners > > know the { key:value } syntax, so this is just natural. > > Givin him an error in your above example is a step to clarity, > > avoiding hard to find errors if somebody has a list of > > tuples and the above happens to work somehow, although he > > forgot to use .xitems(). > > The problem is that key:value in sequence has a meaning under PEP234: > key is the current index, value the tuple. Why is this a problem? It is just fine. > > > Besides, the items() method has been around for ages, so switching > > > from .items() to .xitems() in programs will be just as easy as > > > switching from range() to xrange(). > > > > It has been around for years, but key:value might be better. > > A little faster for sure since we don't build extra tuples. > > Small tuples are cheap and kept on the free list. I don't even > think that key:value can do without them. a) I don't see a point to tell me about Python's implementation but for hair-splitting. Speed is not the point, it will just be faster. b) I think it can. But the point is the cleaner syntax which unambigously gets you keys and values, whenether the thing on the right can be indexed. > Anyway, I've already said that I'm -0 on these thingies -- I would > be +1 if Ping were to make key:value full flavoured associations > (Jim Fulton has written a lot about these some years ago; I think > they originated from SmallTalk). I didn't read that yet. Would it contradict Ping's version or could it be extended laer? ... > > Having to force every "iterable" object to support a modified > > view of it via xitems() even doesn't look elegant to me. > > It forces key/value pairs to go through tupleization only > > for syntactical reasons. A weakness, not a strength. > > Object orientation gets at its limits here. If access to keys > > and values can be provided by a single implementation for > > all affected objects without adding new methods, this suggests > > to me that it is right to do so. > > Hey, tuples are created for *every* function call, even C calls > -- you can't be serious about getting much of a gain here ;-) You are reducing my arguments to speed always, not me. I don't care about a tuple. But I think we can save code. Smaller *and* not slower is what I like. no offence - ly y'rs - chris -- Christian Tismer :^) Mission Impossible 5oftware : Have a break! 
Take a ride on Python's Kaunstr. 26 : *Starship* http://starship.python.net 14163 Berlin : PGP key -> http://wwwkeys.pgp.net PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF where do you want to jump today? http://www.stackless.com From mal at lemburg.com Tue Feb 6 14:57:14 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Tue, 06 Feb 2001 14:57:14 +0100 Subject: [Python-Dev] Iterators (PEP 234) References: <3A7FE60A.261CEE6A@lemburg.com> <3A7FF359.665C184B@tismer.com> <3A7FFB82.30BE0703@lemburg.com> <3A7FFF83.28FAB74F@tismer.com> Message-ID: <3A8002BA.5A0EEDE9@lemburg.com> Christian Tismer wrote: > > "M.-A. Lemburg" wrote: > > > > Besides, the items() method has been around for ages, so switching > > > > from .items() to .xitems() in programs will be just as easy as > > > > switching from range() to xrange(). > > > > > > It has been around for years, but key:value might be better. > > > A little faster for sure since we don't build extra tuples. > > > > Small tuples are cheap and kept on the free list. I don't even > > think that key:value can do without them. > > a) I don't see a point to tell me about Python's implementation > but for hair-splitting. I'm not telling you (I know you know ;), but others on this list which may not be aware of this fact. > Speed is not the point, it will just be > faster. b) I think it can. > But the point is the cleaner syntax which unambigously gets > you keys and values, whenether the thing on the right can be indexed. > > > Anyway, I've already said that I'm -0 on these thingies -- I would > > be +1 if Ping were to make key:value full flavoured associations > > (Jim Fulton has written a lot about these some years ago; I think > > they originated from SmallTalk). > > I didn't read that yet. Would it contradict Ping's version or > could it be extended laer? Ping's version would hide this detail under the cover: dictionaries would sort of implement the sequence protocol and then return associations. I don't think this is much of a problem though. > ... > > > Having to force every "iterable" object to support a modified > > > view of it via xitems() even doesn't look elegant to me. > > > It forces key/value pairs to go through tupleization only > > > for syntactical reasons. A weakness, not a strength. > > > Object orientation gets at its limits here. If access to keys > > > and values can be provided by a single implementation for > > > all affected objects without adding new methods, this suggests > > > to me that it is right to do so. > > > > Hey, tuples are created for *every* function call, even C calls > > -- you can't be serious about getting much of a gain here ;-) > > You are reducing my arguments to speed always, not me. > I don't care about a tuple. But I think we can save > code. Smaller *and* not slower is what I like. At the cost of: * special casing the for-loop implementation for sequences, mappings * adding half a dozen new slots and methods * moving all the complicated details into the for-loop implementation instead of keeping them in separate modules or object specific implementations Perhaps we are just discussing the wrong things: I believe that Ping's PEP could easily be implemented on top of my idea (or vice-versa depending on how you look at it) of how the iteration interface should look like. 
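A concrete point of reference, independent of either proposal: a mapping can already feed (key, value) pairs to a for-loop through a small helper object, using nothing but the existing __getitem__/IndexError protocol. The class below is only an illustration (the name ItemIterator is made up), not the interface either side is arguing for:

    class ItemIterator:
        # Wraps a mapping and hands out its (key, value) pairs one at a
        # time via the existing for-loop protocol: __getitem__ is called
        # with 0, 1, 2, ... until it raises IndexError.
        def __init__(self, mapping):
            self.mapping = mapping
            self.keylist = mapping.keys()
        def __getitem__(self, i):
            key = self.keylist[i]          # raises IndexError at the end
            return key, self.mapping[key]

    d = {1: 2, 3: 4}
    for key, value in ItemIterator(d):
        print key, value

The disagreement above is largely about how much of this should move into the interpreter (new slots, new syntax) and how it should be spelt.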
-- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From paulp at ActiveState.com Tue Feb 6 15:44:12 2001 From: paulp at ActiveState.com (Paul Prescod) Date: Tue, 06 Feb 2001 06:44:12 -0800 Subject: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> <3A7FD69C.1708339C@lemburg.com> Message-ID: <3A800DBC.2BE8ECEF@ActiveState.com> "M.-A. Lemburg" wrote: > > [pre-PEP] > > You have a lot of good points in there (also some inaccuracies) and > I agree that Python should move to using Unicode for text data > and arrays for binary data. That's my primary goal. If we can all agree that is the goal then we can start to design new features with that mind. I'm overjoyed to have you on board. I'm pretty sure Fredrick agrees with the goals (probably not every implementation detail). I'll send to i18n sig and see if I can get buy-in from Andy Robinson et. al. Then it's just Guido. > Some things you may be missing though is that Python already > has support for a few features you mention, e.g. codecs.open() > provide more or less what you have in mind with fopen() and > the compiler can already unify Unicode and string literals using > the -U command line option. The problem with unifying string literals without unifying string *types* is that many functions probably check for and type("") not type(u""). > What you don't talk about in the PEP is that Python's stdlib isn't > even Unicode aware yet, and whatever unification steps we take, > this project will have to preceed it. I'm not convinced that is true. We should be able to figure it out quickly though. > The problem with making the > stdlib Unicode aware is that of deciding which parts deal with > text data or binary data -- the code sometimes makes assumptions > about the nature of the data and at other times it simply doesn't > care. Can you give an example? If the new string type is 100% backwards compatible in every way with the old string type then the only code that should break is silly code that did stuff like: try: something = chr( somethingelse ) except ValueError: print "Unicode is evil!" Note that I expect types.StringType == types(chr(10000)) etc. > In this light I think you ought to focus Python 3k with your > PEP. This will also enable better merging techniques due to the > lifting of the type/class difference. Python3K is a beautiful dream but we have problems we need to solve today. We could start moving to a Unicode future in baby steps right now. Your "open" function could be moved into builtins as "fopen". Python's "binary" open function could be deprecated under its current name and perhaps renamed. The sooner we start the sooner we finish. You and /F laid some beautiful groundwork. Now we just need to keep up the momentum. I think we can do this without a big backwards compatibility earthquake. VB and TCL figured out how to do it... 
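For readers who have not used it: codecs.open() already exists in the standard library and returns a file-like object whose read() and write() work on Unicode strings. A small usage sketch (the file name and encoding are arbitrary examples):

    import codecs

    # write Unicode out as UTF-8 bytes
    f = codecs.open('example.txt', 'w', 'utf-8')
    f.write(u'Hallo Welt \u20ac')
    f.close()

    # read it back; text is a Unicode object, not a byte string
    f = codecs.open('example.txt', 'r', 'utf-8')
    text = f.read()
    f.close()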
Paul Prescod From thomas at xs4all.net Tue Feb 6 15:57:12 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Tue, 6 Feb 2001 15:57:12 +0100 Subject: [Python-Dev] Type/class differences (Re: Sets: elt in dict, lst.include) In-Reply-To: <20010205110422.A5893@glacier.fnational.com>; from nas@arctrix.com on Mon, Feb 05, 2001 at 11:04:22AM -0800 References: <200102012245.LAA03402@s454.cosc.canterbury.ac.nz> <200102050447.XAA28915@cj20424-a.reston1.va.home.com> <20010205070222.A5287@glacier.fnational.com> <200102051837.NAA00833@cj20424-a.reston1.va.home.com> <20010205110422.A5893@glacier.fnational.com> Message-ID: <20010206155712.E9551@xs4all.nl> On Mon, Feb 05, 2001 at 11:04:22AM -0800, Neil Schemenauer wrote: > On Mon, Feb 05, 2001 at 01:37:39PM -0500, Guido van Rossum wrote: > > Now, can you do things like this: > [example cut] > No, it would have to be written like this: > >>> from types import * > >>> class MyInt(IntType): # add a method > def add1(self): return self.value+1 Why ? Couldn't IntType do with an __add__[*] method that does this ugly magic for you ? Same for __sub__, __int__ and so on. *] Yes, yes, I know, it's a type, not a class, but you know what I mean :) -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From mal at lemburg.com Tue Feb 6 16:09:46 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Tue, 06 Feb 2001 16:09:46 +0100 Subject: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> <3A7FD69C.1708339C@lemburg.com> <3A800DBC.2BE8ECEF@ActiveState.com> Message-ID: <3A8013BA.2FF93E8B@lemburg.com> Paul Prescod wrote: > > "M.-A. Lemburg" wrote: > > > > [pre-PEP] > > > > You have a lot of good points in there (also some inaccuracies) and > > I agree that Python should move to using Unicode for text data > > and arrays for binary data. > > That's my primary goal. If we can all agree that is the goal then we can > start to design new features with that mind. I'm overjoyed to have you > on board. I'm pretty sure Fredrick agrees with the goals (probably not > every implementation detail). I'll send to i18n sig and see if I can get > buy-in from Andy Robinson et. al. Then it's just Guido. Oh, I think that everybody agrees on moving to Unicode as basic text storage container. The question is how to get there ;-) Today we are facing a problem in that strings are also used as containers for binary data and no distinction is made between the two. We also have to watch out for external interfaces which still use 8-bit character data, so there's a lot ahead. > > Some things you may be missing though is that Python already > > has support for a few features you mention, e.g. codecs.open() > > provide more or less what you have in mind with fopen() and > > the compiler can already unify Unicode and string literals using > > the -U command line option. > > The problem with unifying string literals without unifying string > *types* is that many functions probably check for and type("") not > type(u""). Well, with -U on, Python will compile "" into u"", so you can already test Unicode compatibility today... last I tried, Python didn't even start up :-( > > What you don't talk about in the PEP is that Python's stdlib isn't > > even Unicode aware yet, and whatever unification steps we take, > > this project will have to preceed it. > > I'm not convinced that is true. We should be able to figure it out > quickly though. We can use that knowledge to base future design upon. 
The problem with many stdlib modules is that they don't make a difference between text and binary data (and often can't, e.g. take sockets), so we'll have to figure out a way to differentiate between the two. We'll also need an easy-to-use binary data type -- as you mention in the PEP, we could take the old string implementation as basis and then perhaps turn u"" into "" and use b"" to mean what "" does now (string object). > > The problem with making the > > stdlib Unicode aware is that of deciding which parts deal with > > text data or binary data -- the code sometimes makes assumptions > > about the nature of the data and at other times it simply doesn't > > care. > > Can you give an example? If the new string type is 100% backwards > compatible in every way with the old string type then the only code that > should break is silly code that did stuff like: > > try: > something = chr( somethingelse ) > except ValueError: > print "Unicode is evil!" > > Note that I expect types.StringType == types(chr(10000)) etc. Sure, but there are interfaces which don't differentiate between text and binary data, e.g. many IO-operations don't care about what exactly they are writing or reading. We'd probably define a new set of text data APIs (meaning methods) to make this difference clear and visible, e.g. .writetext() and .readtext(). > > In this light I think you ought to focus Python 3k with your > > PEP. This will also enable better merging techniques due to the > > lifting of the type/class difference. > > Python3K is a beautiful dream but we have problems we need to solve > today. We could start moving to a Unicode future in baby steps right > now. Your "open" function could be moved into builtins as "fopen". > Python's "binary" open function could be deprecated under its current > name and perhaps renamed. Hmm, I'd prefer to keep things separate for a while and then switch over to new APIs once we get used to them. > The sooner we start the sooner we finish. You and /F laid some beautiful > groundwork. Now we just need to keep up the momentum. I think we can do > this without a big backwards compatibility earthquake. VB and TCL > figured out how to do it... ... and we should probably try to learn from them. They have put a considerable amount of work into getting the low-level interfacing issues straight. It would be nice if we could avoid adding more conversion magic... -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From Barrett at stsci.edu Tue Feb 6 16:33:34 2001 From: Barrett at stsci.edu (Paul Barrett) Date: Tue, 6 Feb 2001 10:33:34 -0500 (EST) Subject: [Python-Dev] PEPS, version control, release intervals In-Reply-To: References: <20010205170106.D990@thrak.cnri.reston.va.us> Message-ID: <14976.5900.472169.467422@nem-srvr.stsci.edu> Tim Peters writes: > > About people not moving to 2.0, the single specific reason I hear most often > hinges on presumed lack of GPL compatibility. But then people worried about > that *have* a specific reason stopping them. For everyone else, I know > sysadmins who still refuse to move up from Perl 4. > > BTW, we recorded thousands of downloads of 2.0 betas at BeOpen.com, and > indeed more than 10,000 of the Windows installer alone. Then their download > stats broke. SF's have been broken for a long time. 
So while we have no > idea how many people are downloading now, the idea that people stayed away > from 2.0 in droves is wrong. And 2.0-specific examples are common on c.l.py > now from lots of people too. I agree. I think people are moving to 2.0, but not at the rate of keeping-up with the current release cycle. By the time 2/3 of them have installed 2.0, 2.1 will be released. So what's the point of installing 2.0, when a few weeks later, you have to install 2.1? The situation at our institution is a good indicator of this: 2.0 becomes the default this week. -- Dr. Paul Barrett Space Telescope Science Institute Phone: 410-338-4475 ESS/Science Software Group FAX: 410-338-4767 Baltimore, MD 21218 From paulp at ActiveState.com Tue Feb 6 16:54:49 2001 From: paulp at ActiveState.com (Paul Prescod) Date: Tue, 06 Feb 2001 07:54:49 -0800 Subject: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> <3A7FD69C.1708339C@lemburg.com> <3A800DBC.2BE8ECEF@ActiveState.com> <3A8013BA.2FF93E8B@lemburg.com> Message-ID: <3A801E49.F8DF70E2@ActiveState.com> "M.-A. Lemburg" wrote: > > ... > > Oh, I think that everybody agrees on moving to Unicode as > basic text storage container. The last time we went around there was an anti-Unicode faction who argued that adding Unicode support was fine but making it the default would inconvenience Japanese users. > ... > Well, with -U on, Python will compile "" into u"", so you can > already test Unicode compatibility today... last I tried, Python > didn't even start up :-( I'm going to say again that I don't see that as a test of Unicode-compatibility. It is a test of compatibility with our existing Unicode object. If we simply allowed string objects to support higher character numbers I *cannot see* how that could break existing code. > ... > We can use that knowledge to base future design upon. The problem > with many stdlib modules is that they don't make a difference > between text and binary data (and often can't, e.g. take sockets), > so we'll have to figure out a way to differentiate between the > two. We'll also need an easy-to-use binary data type -- as you > mention in the PEP, we could take the old string implementation > as basis and then perhaps turn u"" into "" and use b"" to mean > what "" does now (string object). I agree that we need all of this but I strongly disagree that there is any dependency relationship between improving the Unicode-awareness of I/O routines (sockets and files) and allowing string objects to support higher character numbers. I claim that allowing higher character numbers in strings will not break socket objects. It might simply be the case that for a while socket objects never create these higher charcters. Similarly, we could improve socket objects so that they have different readtext/readbinary and writetext/writebinary without unifying the string objects. There are lots of small changes we can make without breaking anything. One I would like to see right now is a unification of chr() and unichr(). We are just making life harder for ourselves by walking further and further down one path when "everyone agrees" that we are eventually going to end up on another path. > ... It would be nice if we could avoid > adding more conversion magic... We already have more "magic" in our conversions than we need. I don't think I'm proposing any new conversions. 
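A rough sketch of what readtext()/writetext()-style methods could look like as a wrapper around an ordinary binary stream; the class and method names here are hypothetical, nothing like this exists in the library:

    class TextAwareStream:
        # Hypothetical wrapper: read()/write() keep passing raw byte
        # strings straight through, while readtext()/writetext()
        # translate between Unicode and bytes using a per-stream encoding.
        def __init__(self, stream, encoding):
            self.stream = stream
            self.encoding = encoding
        def write(self, data):
            self.stream.write(data)
        def writetext(self, text):
            self.stream.write(text.encode(self.encoding))
        def read(self, size=-1):
            return self.stream.read(size)
        def readtext(self, size=-1):
            # a real version would have to buffer incomplete multi-byte
            # sequences here, as the codecs StreamReader machinery does
            return unicode(self.stream.read(size), self.encoding)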
Paul Prescod From ping at lfw.org Tue Feb 6 17:59:04 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Tue, 6 Feb 2001 08:59:04 -0800 (PST) Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: Message-ID: On Tue, 6 Feb 2001, Greg Wilson wrote: > I'm basing my claim on the kind > of errors students in my course make. Even after being shown half-a-dozen > examples of Python for loops, many of them write: > > for i in someSequence: > print someSequence[i] > > in their first exercise. Amazing (to me). Thank you for this data point; it's news to me. I don't know what that means we should do, though. We can't break the way existing loops work. What would make for-loops easier to present, given this experience? -- ?!ng "There's no point in being grown up if you can't be childish sometimes." -- Dr. Who From gvwilson at ca.baltimore.com Tue Feb 6 18:28:59 2001 From: gvwilson at ca.baltimore.com (Greg Wilson) Date: Tue, 6 Feb 2001 12:28:59 -0500 Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: Message-ID: <001101c09062$4af68ac0$770a0a0a@nevex.com> > On Tue, 6 Feb 2001, Greg Wilson wrote: > > Even after being shown half-a-dozen > > examples of Python for loops, many of them write: > > > > for i in someSequence: > > print someSequence[i] > > > > in their first exercise. > Ka-Ping Yee: > Amazing (to me). Thank you for this data point; it's news to me. Greg Wilson: To be fair, these are all people with some previous programming experience --- I suspect (no proof) that Fortran/C/Java have trained them to think that iteration is over index space, rather than value space. It'd be interesting to check the intuitions of students who'd been raised on the C++ STL's iterators, but I don't think that'll ever be possible --- C++ seems to be dropping out of the undergrad curriculum in favor of Java. By the way, I do *not* think this is a knock-down argument against your proposal --- it's no more of a wart than needing the trailing comma in singleton tuples like "(3,)". However: 1. Special cases make teaching harder (he said, repeating the obvious yet again). 2. I expect that if it was added, the "traditional" for-loop syntax would eventually fall into disfavor, since people who want to write really general functions over collections would have to use the new syntax. Thanks, Greg p.s. in case no-one has said it, or I've missed it, thanks very much for putting the PEP together so quickly, and for bringing so many of the issues into focus. From fredrik at effbot.org Tue Feb 6 18:41:55 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Tue, 6 Feb 2001 18:41:55 +0100 Subject: [Python-Dev] Fw: list.index(..) -> TypeError bug or feature? Message-ID: <01c601c09065$260bad50$e46940d5@hagrid> (from comp.lang.python) can this be fixed? should this be fixed? (please?) ----- Original Message ----- From: "Pearu Peterson" Newsgroups: comp.lang.python Sent: Tuesday, February 06, 2001 2:42 PM Subject: list.index(..) -> TypeError bug or feature? > > In Python 2.1a2 I get TypeError exception from list index() method even if > the list contains given object: > > >>> from gmpy import mpz > >>> a = [mpz(1),[]] > >>> a.index([]) > Traceback (most recent call last): > File " ", line 1, in ? > TypeError: coercion to gmpy.mpz type failed > > while in Python 2.0b2 it works: > > >>> a = [mpz(1),[]] > >>> a.index([]) > 1 > > Is this Python 2.1a2 bug or gmpy bug? Or my bug and Python 2.1 feature? > > Thanks, > Pearu From mal at lemburg.com Tue Feb 6 19:01:58 2001 From: mal at lemburg.com (M.-A. 
Lemburg) Date: Tue, 06 Feb 2001 19:01:58 +0100 Subject: [Python-Dev] Fw: list.index(..) -> TypeError bug or feature? References: <01c601c09065$260bad50$e46940d5@hagrid> Message-ID: <3A803C16.7121C9B8@lemburg.com> Fredrik Lundh wrote: > > (from comp.lang.python) > > can this be fixed? should this be fixed? (please?) Depends on whether gmpy (what is this, BTW) uses the old coercion mechanism correctly or not which is hard to say from here ;) Also, was gmpy recompiled for 2.1a2 and which part raised the exception (Python or gmpy) ? In any case, I'd say that .index() should not raise TypeErrors in case coercion fails. > > > ----- Original Message ----- > From: "Pearu Peterson" > Newsgroups: comp.lang.python > Sent: Tuesday, February 06, 2001 2:42 PM > Subject: list.index(..) -> TypeError bug or feature? > > > > > In Python 2.1a2 I get TypeError exception from list index() method even if > > the list contains given object: > > > > >>> from gmpy import mpz > > >>> a = [mpz(1),[]] > > >>> a.index([]) > > Traceback (most recent call last): > > File " ", line 1, in ? > > TypeError: coercion to gmpy.mpz type failed > > > > while in Python 2.0b2 it works: > > > > >>> a = [mpz(1),[]] > > >>> a.index([]) > > 1 > > > > Is this Python 2.1a2 bug or gmpy bug? Or my bug and Python 2.1 feature? > > > > Thanks, > > Pearu > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From nas at arctrix.com Tue Feb 6 19:06:09 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Tue, 6 Feb 2001 10:06:09 -0800 Subject: [Python-Dev] Type/class differences (Re: Sets: elt in dict, lst.include) In-Reply-To: <20010206155712.E9551@xs4all.nl>; from thomas@xs4all.net on Tue, Feb 06, 2001 at 03:57:12PM +0100 References: <200102012245.LAA03402@s454.cosc.canterbury.ac.nz> <200102050447.XAA28915@cj20424-a.reston1.va.home.com> <20010205070222.A5287@glacier.fnational.com> <200102051837.NAA00833@cj20424-a.reston1.va.home.com> <20010205110422.A5893@glacier.fnational.com> <20010206155712.E9551@xs4all.nl> Message-ID: <20010206100609.B7790@glacier.fnational.com> On Tue, Feb 06, 2001 at 03:57:12PM +0100, Thomas Wouters wrote: > Why ? Couldn't IntType do with an __add__[*] method that does this ugly magic > for you ? Same for __sub__, __int__ and so on. You're right. I'm pretty sure my modified interpreter would handle "return self+1" just fine (I can't test it right now). If you wanted to override the __add__ method you would have to write "return IntType.__add__(self, 1)". Neil From pearu at cens.ioc.ee Tue Feb 6 19:52:38 2001 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 6 Feb 2001 20:52:38 +0200 (EET) Subject: [Python-Dev] Fw: list.index(..) -> TypeError bug or feature? In-Reply-To: <3A803C16.7121C9B8@lemburg.com> Message-ID: On Tue, 6 Feb 2001, M.-A. Lemburg wrote: > Fredrik Lundh wrote: > > > > (from comp.lang.python) > > > > can this be fixed? should this be fixed? (please?) > > Depends on whether gmpy (what is this, BTW) uses the old coercion > mechanism correctly or not which is hard to say from here ;) About gmpy, see http://gmpy.sourceforge.net/ > Also, was gmpy recompiled for 2.1a2 and which part raised the > exception (Python or gmpy) ? gmpy was recompiled for 2.1a2, though the same gmpy worked fine with 2.0b2. 
The exception was raised in gmpy part. > In any case, I'd say that .index() should not raise TypeErrors > in case coercion fails. I fixed this in gmpy source --- there the Pymp*_coerce functions raised an exception instead of returning `1' when coerce failed. So, this was gmpy bug, Python 2.1a2 just revealed it. Regards, Pearu From esr at snark.thyrsus.com Tue Feb 6 20:06:00 2001 From: esr at snark.thyrsus.com (Eric S. Raymond) Date: Tue, 6 Feb 2001 14:06:00 -0500 Subject: [Python-Dev] fp vs. fd Message-ID: <200102061906.f16J60x11156@snark.thyrsus.com> There are a number of places in the Python library that require a numeric file descriptor, rather than a file object. This complicates code slightly and (IMO) breaches the wrapper around the file-object abstraction (which Guido says is only supposed to depend on stdio-level stuff). Are there design reasons for this, or is it historical accident? If the latter, I'll go through and fix these to accept either an fd or an fp. And fix the docs, too. -- Eric S. Raymond Non-cooperation with evil is as much a duty as cooperation with good. -- Mohandas Gandhi From ping at lfw.org Tue Feb 6 20:01:03 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Tue, 6 Feb 2001 11:01:03 -0800 (PST) Subject: [Python-Dev] fp vs. fd In-Reply-To: <200102061906.f16J60x11156@snark.thyrsus.com> Message-ID: On Tue, 6 Feb 2001, Eric S. Raymond wrote: > There are a number of places in the Python library that require a > numeric file descriptor, rather than a file object. I'm curious... where? -- ?!ng From ping at lfw.org Tue Feb 6 20:00:02 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Tue, 6 Feb 2001 11:00:02 -0800 (PST) Subject: [Python-Dev] Coercion and comparisons In-Reply-To: <01c601c09065$260bad50$e46940d5@hagrid> Message-ID: On Tue, 6 Feb 2001, Fredrik Lundh wrote: > > can this be fixed? should this be fixed? (please?) I'm not sure. The gmpy example: > > >>> a = [mpz(1),[]] > > >>> a.index([]) > > Traceback (most recent call last): > > File " ", line 1, in ? > > TypeError: coercion to gmpy.mpz type failed seems to be just one case of coercion failure. I no longer have Python 2.0 in a state on my machine where i can compile gmpy to test with it, but you can perform the same exercise with the mpz module in 2.1a2: >>> import mpz >>> [mpz.mpz(1), []].index([]) Traceback (most recent call last): File " ", line 1, in ? TypeError: number coercion (to mpzobject) failed The following test shows that the issue is present for Python classes too: >>> class Foo: ... def __coerce__(self, other): ... raise TypeError, 'coercion failed' ... >>> f = Foo() >>> s = [3, f, 5] >>> s.index(3) 0 >>> s.index(5) Traceback (most recent call last): File " ", line 1, in ? File " ", line 3, in __coerce__ TypeError: coercion failed I get the above behaviour in 1.5.2, 2.0, and 2.1a2. So now we have to ask whether index() should hide these errors. It seems to me that conventional Python philosophy would argue to let the errors flaunt themselves as early as possible, but i agree with you that the failure to find [] in [mpz(1), []] is pretty jarring. ?? Hmm, i think perhaps the right answer is to not coerce before ==, even if we automatically coerce before the other comparison operators. But, this is only good as a future possibility. 
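For illustration only, here is roughly what "don't coerce before ==" looks like from the Python side, using the rich comparison hooks going into 2.1. The class is a toy, not a proposal:

    class Robust:
        def __init__(self, value):
            self.value = value
        def __eq__(self, other):
            # instead of coercing the other operand (and possibly
            # raising TypeError), simply report "not equal"
            if not isinstance(other, Robust):
                return 0
            return self.value == other.value
        def __ne__(self, other):
            return not self.__eq__(other)

    print Robust(1) == []        # prints 0 instead of raising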
It can't resolve the issue for existing extension modules because their old-style comparison functions appear to expect two arguments of the same type: (in mpzmodule.c) static int mpz_compare(mpzobject *a, mpzobject *b) { int cmpres; /* guido sez it's better to return -1, 0 or 1 */ return (cmpres = mpz_cmp( &a->mpz, &b->mpz )) == 0 ? 0 : cmpres > 0 ? 1 : -1; } /* mpz_compare() */ ...so the error occurs before tp_compare has a chance to say "okay, it's not equal". We have to ask the authors of extension modules to implement == separately from the other comparisons. Note, by the way, that this re-raises the matter of the three kinds of equality that i remember mentioning back when we were discussing rich comparisons. I'll restate them here for you to think about. The three kinds of equality (in order by strength) are: 1. Identity. Python: 'x is y' E: 'x == y' Python: 'x is not y' E: 'x != y' Meaning: "x and y are the same object. Substituting x for y in any computation makes no difference to the result." 2. Value. Python: 'x == y' E: 'x.equals(y)' Python: 'x != y' E: '!x.equals(y)' Meaning: "x and y represent the same value. Substituting x for y in any operation that doesn't mutate x or y yields results that are ==." 3. Magnitude. Python: missing E: 'x <=> y' Python: missing E: 'x <> y' Meaning: "x and y have the same size. Another way to say this is that both x <= y and x >= y are true." Same identity implies same value; same value implies same magnitude. Category Python operators E operators identity is, is not ==, != value ==, !=, <> x.equals(y), !x.equals(y) magnitude <, <=, >, >= <, <=, >, >=, <>, <=> Each type of equality has a specific and useful meaning. Most languages, including Python, acknowledge the first two. But you can see how the coercion problem raised above is a consequence of the fact that the third category is incomplete. I like Python's spelling better than E's, though it's a small wart that there is no easy way to say or implement 'same magnitude'. (You can get around it by saying 'x <= y <= x', i suppose, but there's no real interface on the C side.) -- ?!ng "There's no point in being grown up if you can't be childish sometimes." -- Dr. Who From esr at thyrsus.com Tue Feb 6 20:14:46 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Tue, 6 Feb 2001 14:14:46 -0500 Subject: [Python-Dev] fp vs. fd In-Reply-To: ; from ping@lfw.org on Tue, Feb 06, 2001 at 11:01:03AM -0800 References: <200102061906.f16J60x11156@snark.thyrsus.com> Message-ID: <20010206141446.A11212@thyrsus.com> Ka-Ping Yee : > On Tue, 6 Feb 2001, Eric S. Raymond wrote: > > There are a number of places in the Python library that require a > > numeric file descriptor, rather than a file object. > > I'm curious... where? See the fctl() module. I thought this was also true of select() and poll(), but I see the docs on this are different than the last time I looked and conclude that either docs or code or both have changed. -- Eric S. Raymond No one is bound to obey an unconstitutional law and no courts are bound to enforce it. -- 16 Am. Jur. Sec. 
177 late 2d, Sec 256 From fredrik at effbot.org Tue Feb 6 20:24:46 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Tue, 6 Feb 2001 20:24:46 +0100 Subject: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> <3A7FD69C.1708339C@lemburg.com> <3A800DBC.2BE8ECEF@ActiveState.com> Message-ID: <023001c09072$77da2370$e46940d5@hagrid> Paul Prescod wrote: > I'm pretty sure Fredrick agrees with the goals (probably not every > implementation detail). haven't had time to read the pep-PEP yet, but I'm pretty sure I do. more later (when I've read it). Cheers /F From ping at lfw.org Tue Feb 6 20:24:25 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Tue, 6 Feb 2001 11:24:25 -0800 (PST) Subject: [Python-Dev] Coercion and comparisons In-Reply-To: Message-ID: On Tue, 6 Feb 2001, Ka-Ping Yee wrote: > Category Python operators E operators > > identity is, is not ==, != > value ==, !=, <> x.equals(y), !x.equals(y) > magnitude <, <=, >, >= <, <=, >, >=, <>, <=> > > Each type of equality has a specific and useful meaning. Most > languages, including Python, acknowledge the first two. But you > can see how the coercion problem raised above is a consequence > of the fact that the third category is incomplete. I didn't state that last sentence very well, and the table's a bit inaccurate. Rather, it would be better to say that '==' and '!=' end up having to do double duty (sometimes for value equality, sometimes for magnitude equality) -- when really '==' doesn't belong with ordering operators like '<'. It's quite a separate concept. -- ?!ng "There's no point in being grown up if you can't be childish sometimes." -- Dr. Who From thomas at xs4all.net Tue Feb 6 20:52:53 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Tue, 6 Feb 2001 20:52:53 +0100 Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: ; from ping@lfw.org on Tue, Feb 06, 2001 at 08:59:04AM -0800 References: Message-ID: <20010206205253.F9551@xs4all.nl> On Tue, Feb 06, 2001 at 08:59:04AM -0800, Ka-Ping Yee wrote: > What would make for-loops easier to present, given this experience? A simpler version of for x in range(len(sequence)): obviously :) (a.k.a. 'indexing for') One that gets taught *before* 'if x in sequence', preferably. Syntax that stands out against 'x in sequence', but makes 'x in sequence' seem very logical if encountered after the first syntax. Something like for x over sequence: or for x in 0 .. sequence: (as in) for x in 1 .. 10: or for each number x in sequence: or something or other. My gut feeling says there is a sensible and clear syntax out there, but I haven't figured it out yet :) -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From gvwilson at ca.baltimore.com Tue Feb 6 21:18:34 2001 From: gvwilson at ca.baltimore.com (Greg Wilson) Date: Tue, 6 Feb 2001 15:18:34 -0500 Subject: [Python-Dev] re: for in dict / range literals In-Reply-To: <20010206205253.F9551@xs4all.nl> Message-ID: <000001c09079$fb86c550$770a0a0a@nevex.com> > > Ka-Ping Yee asked: > > What would make for-loops easier to present, given this experience? > Thomas Wouters replied: > A simpler version of > > for x in range(len(sequence)): > > obviously :) (a.k.a. 'indexing for') One that gets taught *before* 'if x in > sequence', preferably. Syntax that stands out against 'x in sequence', but > makes 'x in sequence' seem very logical if encountered after the first > syntax. Something like > > for x over sequence: > for x in 0 .. 
sequence: > for each number x in sequence: Greg Wilson observes: Maybe we're lucky that range literals didn't make it into the language after all (and I say this as someone who asked for them). If we were using range literals to iterate over sequences by index: for x in [0:len(seq)]: it'd be very hard to unify index-based iteration over all collection types ('cuz there's no way to write a "range literal" for the keys in a dict). I don't like "for x over sequence" --- trying to teach students that "in" means "the elements of the sequence", but "over" means "the indices of the sequence" will be hard. Something like "for x indexing sequence" would work (very hard to mistake its meaning), but what would you do for (index,value) iteration? But hey, at least we're better off than Ruby, where ".." and "..." (double or triple ellipsis) mean "up to but not including", and "up to and including" respectively. Or maybe it's the other way around :-). Greg From akuchlin at cnri.reston.va.us Tue Feb 6 21:31:29 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Tue, 6 Feb 2001 15:31:29 -0500 Subject: [Python-Dev] fp vs. fd In-Reply-To: <20010206141446.A11212@thyrsus.com>; from esr@thyrsus.com on Tue, Feb 06, 2001 at 02:14:46PM -0500 References: <200102061906.f16J60x11156@snark.thyrsus.com> <20010206141446.A11212@thyrsus.com> Message-ID: <20010206153129.B1154@thrak.cnri.reston.va.us> On Tue, Feb 06, 2001 at 02:14:46PM -0500, Eric S. Raymond wrote: >See the fctl() module. I thought this was also true of select() and >poll(), but I see the docs on this are different than the last time I >looked and conclude that either docs or code or both have changed. I think poll() and select() are happy with either an integer or an object that has a .fileno() method returning an integer, thanks to the PyObject_AsFileDescriptor() function in the C API that I added a while ago. Probably the fcntl module should also be changed to use PyObject_AsFileDescriptor() instead of requiring only an int. File a bug on SourceForge so this doesn't get forgotten before 2.1final; this is a minor tidying that's worth doing. --amk From skip at mojam.com Tue Feb 6 21:39:15 2001 From: skip at mojam.com (Skip Montanaro) Date: Tue, 6 Feb 2001 14:39:15 -0600 (CST) Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: <20010206205253.F9551@xs4all.nl> References: <20010206205253.F9551@xs4all.nl> Message-ID: <14976.24819.658169.82488@beluga.mojam.com> Thomas> for x in 0 .. sequence: You meant for x in 0 .. len(sequence): right? Skip From martin at loewis.home.cs.tu-berlin.de Tue Feb 6 22:00:59 2001 From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis) Date: Tue, 6 Feb 2001 22:00:59 +0100 Subject: [I18n-sig] Re: [Python-Dev] Pre-PEP: Python Character Model In-Reply-To: <3A801E49.F8DF70E2@ActiveState.com> (message from Paul Prescod on Tue, 06 Feb 2001 07:54:49 -0800) References: <3A7F9084.509510B8@ActiveState.com> <3A7FD69C.1708339C@lemburg.com> <3A800DBC.2BE8ECEF@ActiveState.com> <3A8013BA.2FF93E8B@lemburg.com> <3A801E49.F8DF70E2@ActiveState.com> Message-ID: <200102062100.f16L0xm01175@mira.informatik.hu-berlin.de> > If we simply allowed string objects to support higher character > numbers I *cannot see* how that could break existing code. To take a specific example: What would you change about imp and py_compile.py? What is the type of imp.get_magic()? If character string, what about this fragment? 
import imp MAGIC = imp.get_magic() def wr_long(f, x): """Internal; write a 32-bit int to a file in little-endian order.""" f.write(chr( x & 0xff)) f.write(chr((x >> 8) & 0xff)) f.write(chr((x >> 16) & 0xff)) f.write(chr((x >> 24) & 0xff)) ... fc = open(cfile, 'wb') fc.write('\0\0\0\0') wr_long(fc, timestamp) fc.write(MAGIC) Would that continue to write the same file that the current version writes? > We are just making life harder for ourselves by walking further and > further down one path when "everyone agrees" that we are eventually > going to end up on another path. I think a problem of discussing on a theoretical level is that the impact of changes is not clear. You seem to claim that you want changes that have zero impact on existing programs. Can you provide a patch implementing these changes, so that others can experiment and find out whether their application would break? Regards, Martin From thomas at xs4all.net Tue Feb 6 22:28:10 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Tue, 6 Feb 2001 22:28:10 +0100 Subject: [Python-Dev] re: for in dict (user expectation poll) In-Reply-To: <14976.24819.658169.82488@beluga.mojam.com>; from skip@mojam.com on Tue, Feb 06, 2001 at 02:39:15PM -0600 References: <20010206205253.F9551@xs4all.nl> <14976.24819.658169.82488@beluga.mojam.com> Message-ID: <20010206222810.N9474@xs4all.nl> On Tue, Feb 06, 2001 at 02:39:15PM -0600, Skip Montanaro wrote: > Thomas> for x in 0 .. sequence: > You meant > for x in 0 .. len(sequence): > right? Yes and no. Yes, I know '0 .. sequence' can't really work. But that doesn't mean I don't think the one without len() might be pref'rble over the other one :) They were all just examples, anyway. All this talk about syntax and what is best makes me feel like Fredrik: old and grumpy . Time-for-my-medication-;)-ly y'rs, -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From martin at loewis.home.cs.tu-berlin.de Tue Feb 6 22:50:39 2001 From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis) Date: Tue, 6 Feb 2001 22:50:39 +0100 Subject: [Python-Dev] PEPS, version control, release intervals Message-ID: <200102062150.f16LodG01662@mira.informatik.hu-berlin.de> > A more critical issue might be why people haven't adopted 2.0 yet; > there seems little reason is there to continue using 1.5.2, yet I > still see questions on the XML-SIG, for example, from people who > haven't upgraded. Is it that Zope doesn't support it? Or that Red > Hat and Debian don't include it? Availability of Linux binaries is certainly an issue. On xml-sig, one Linux distributor (I forgot whether SuSE or Redhat) mentioned that they won't include 2.0 in their current major release series (7.x for both). Furthermore, the available 2.0 binaries won't work for either Redhat 7.0 nor SuSE 7.0; I think collecting binaries as we did for earlier releases is an important activity that was forgotten during 2.0. In addition, many packages are still not available for 2.0. Zope is only one of them; gtk, Qt, etc packages are still struggling with Unicode support. omniORBpy has #include in their sources, ILU does not compile on 2.0 (due to wrong tests involving the PY_MAJOR/MINOR roll-over), Fnorb falls into the select.bind parameter change pitfall. This list probably could be continued - I'm sure many of the maintainers of these packages would appreciate a helping hand from some Python Guru. 
Regards, Martin From akuchlin at cnri.reston.va.us Wed Feb 7 00:07:23 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Tue, 6 Feb 2001 18:07:23 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules socketmodule.c,1.135,1.136 In-Reply-To: ; from akuchling@users.sourceforge.net on Tue, Feb 06, 2001 at 02:58:07PM -0800 References: Message-ID: <20010206180723.B1269@thrak.cnri.reston.va.us> On Tue, Feb 06, 2001 at 02:58:07PM -0800, A.M. Kuchling wrote: >! if (!PyArg_ParseTuple(args, "s|i:write", &data, &len)) >! if (!PyArg_ParseTuple(args, "s#|i:write", &data, &len)) Hm... actually, this patch isn't correct after all. The |i meant you could specify an optional integer to write out only a partial chunk of the string; why not just slice it? Since the SSL code isn't documented, I'm tempted to just rip out the |i. --amk From thomas at xs4all.net Wed Feb 7 00:09:55 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Wed, 7 Feb 2001 00:09:55 +0100 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules socketmodule.c,1.135,1.136 In-Reply-To: ; from akuchling@users.sourceforge.net on Tue, Feb 06, 2001 at 02:58:07PM -0800 References: Message-ID: <20010207000955.G9551@xs4all.nl> On Tue, Feb 06, 2001 at 02:58:07PM -0800, A.M. Kuchling wrote: > Update of /cvsroot/python/python/dist/src/Modules > In directory usw-pr-cvs1:/tmp/cvs-serv21837 > Modified Files: > socketmodule.c > Log Message: > Patch #103636: Allow writing strings containing null bytes to an SSL socket > Index: socketmodule.c > =================================================================== > RCS file: /cvsroot/python/python/dist/src/Modules/socketmodule.c,v > retrieving revision 1.135 > retrieving revision 1.136 > diff -C2 -r1.135 -r1.136 > *** socketmodule.c 2001/02/02 19:55:17 1.135 > --- socketmodule.c 2001/02/06 22:58:05 1.136 > *************** > *** 2219,2223 **** > size_t len = 0; > > ! if (!PyArg_ParseTuple(args, "s|i:write", &data, &len)) > return NULL; > > --- 2219,2223 ---- > size_t len = 0; > > ! if (!PyArg_ParseTuple(args, "s#|i:write", &data, &len)) > return NULL; This doesn't seem right. The new function needs another 'length' argument (an int), and the smallest of the two should be used. -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From paulp at ActiveState.com Wed Feb 7 00:21:38 2001 From: paulp at ActiveState.com (Paul Prescod) Date: Tue, 06 Feb 2001 15:21:38 -0800 Subject: [I18n-sig] Re: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> <3A7FD69C.1708339C@lemburg.com> <3A800DBC.2BE8ECEF@ActiveState.com> <3A8013BA.2FF93E8B@lemburg.com> <3A801E49.F8DF70E2@ActiveState.com> <200102062100.f16L0xm01175@mira.informatik.hu-berlin.de> Message-ID: <3A808702.5FF36669@ActiveState.com> Let me say one more thing. Unicode and string types are *already widely interoperable*. You run into problems: a) when you try to convert a character greater than 128. In my opinion this is just a poor design decision that can be easily reversed b) some code does an explicit check for types.StringType which of course is not compatible with types.UnicodeType. This can only be fixed by merging the features of types.StringType and types.UnicodeType so that they can be the same object. This is not as trivial as the other fix in terms of lines of code that must change but conceptually it doesn't seem complicated at all. 
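Point (b) is easy to make concrete: code that tests for type('') rejects Unicode objects even where they would work fine. A tolerant check, as a stop-gap until the types really are merged, might look like this (the helper name is made up):

    TEXT_TYPES = (type(''), type(u''))

    def is_text(obj):
        # accept both 8-bit and Unicode strings; checks written as
        # type(obj) == type('') are the ones that reject Unicode today
        return type(obj) in TEXT_TYPES

    print is_text('abc'), is_text(u'abc'), is_text(3)   # 1 1 0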
I think a lot of Unicode interoperability problems would just go away if "a" was fixed... Paul Prescod From martin at loewis.home.cs.tu-berlin.de Wed Feb 7 01:00:11 2001 From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis) Date: Wed, 7 Feb 2001 01:00:11 +0100 Subject: [I18n-sig] Re: [Python-Dev] Pre-PEP: Python Character Model In-Reply-To: <3A808702.5FF36669@ActiveState.com> (message from Paul Prescod on Tue, 06 Feb 2001 15:21:38 -0800) References: <3A7F9084.509510B8@ActiveState.com> <3A7FD69C.1708339C@lemburg.com> <3A800DBC.2BE8ECEF@ActiveState.com> <3A8013BA.2FF93E8B@lemburg.com> <3A801E49.F8DF70E2@ActiveState.com> <200102062100.f16L0xm01175@mira.informatik.hu-berlin.de> <3A808702.5FF36669@ActiveState.com> Message-ID: <200102070000.f1700BV02437@mira.informatik.hu-berlin.de> > a) when you try to convert a character greater than 128. In my opinion > this is just a poor design decision that can be easily reversed Technically, you can easily convert expand it to 256; not that easily beyond. Then, people who put KOI8-R into their Python source code will complain why the strings come out incorrectly, even though they set their language to Russion, and even though it worked that way in earlier Python versions. Or, if they then tag their sources as KOI8-R, writing strings to a "plain" file will fail, as they have characters > 256 in the string. > I think a lot of Unicode interoperability problems would just go > away if "a" was fixed... No, that would be just open a new can of worms. Again, provide a specific patch, and I can tell you specific problems. Regards, Martin From trentm at ActiveState.com Wed Feb 7 02:32:34 2001 From: trentm at ActiveState.com (Trent Mick) Date: Tue, 6 Feb 2001 17:32:34 -0800 Subject: [Python-Dev] Quick Unix work needed In-Reply-To: <3A7AA340.B3AFF106@lemburg.com>; from mal@lemburg.com on Fri, Feb 02, 2001 at 01:08:32PM +0100 References: <3A7AA340.B3AFF106@lemburg.com> Message-ID: <20010206173234.X25935@ActiveState.com> On Fri, Feb 02, 2001 at 01:08:32PM +0100, M . -A . Lemburg wrote: > Tim Peters wrote: > > > > Trent Mick's C API testing framework has been checked in, along with > > everything needed to get it working on Windows: > > > > http://sourceforge.net/patch/?func=detailpatch&patch_id=101162& > > group_id=5470 > > > > It still needs someone to add it to the Unixish builds. > > Done. Thanks, Marc-Andre! > > > You'll know that it worked if the new std test test_capi.py succeeds. > > The test passes just fine... nothing much there which could fail ;-) Granted there aren't any really useful tests in there yet but that test_config test would have helped me when I started the Win64 port to point out that config.h had to be changed to update SIZEOF_VOID_P. Or something like that. I have some other tests in my source tree that I should be able to add sometime. We can now test some of the marshalling API (which GregS and Tim and I discussed a lot a few months back but did not completely clean up yet). Trent -- Trent Mick TrentM at ActiveState.com From paulp at ActiveState.com Wed Feb 7 03:54:08 2001 From: paulp at ActiveState.com (Paul Prescod) Date: Tue, 06 Feb 2001 18:54:08 -0800 Subject: [Python-Dev] unichr Message-ID: <3A80B8D0.381BD92C@ActiveState.com> Does anyone have an example of real code that would break if unichr and chr were merged? chr would return a regular string if possible and a Unicode string otherwise. When the two string types are merged, there would be no need to deprecate unichr as redundant. 
Paul Prescod From fredrik at pythonware.com Wed Feb 7 11:00:03 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Wed, 7 Feb 2001 11:00:03 +0100 Subject: [I18n-sig] Re: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> <3A7FD69C.1708339C@lemburg.com> <3A800DBC.2BE8ECEF@ActiveState.com> <3A8013BA.2FF93E8B@lemburg.com> <3A801E49.F8DF70E2@ActiveState.com> <200102062100.f16L0xm01175@mira.informatik.hu-berlin.de> Message-ID: <00cf01c090ec$c4eb7220$0900a8c0@SPIFF> martin wrote: > To take a specific example: What would you change about imp and > py_compile.py? What is the type of imp.get_magic()? If character > string, what about this fragment? > > import imp > MAGIC = imp.get_magic() > > def wr_long(f, x): > """Internal; write a 32-bit int to a file in little-endian order.""" > f.write(chr( x & 0xff)) > f.write(chr((x >> 8) & 0xff)) > f.write(chr((x >> 16) & 0xff)) > f.write(chr((x >> 24) & 0xff)) > ... > fc = open(cfile, 'wb') > fc.write('\0\0\0\0') > wr_long(fc, timestamp) > fc.write(MAGIC) > > Would that continue to write the same file that the current version > writes? yes (file opened in binary mode, no encoding, no code points above 255) Cheers /F From nhodgson at bigpond.net.au Wed Feb 7 12:44:36 2001 From: nhodgson at bigpond.net.au (Neil Hodgson) Date: Wed, 7 Feb 2001 22:44:36 +1100 Subject: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> Message-ID: <084e01c090fb$58aa9820$8119fea9@neil> [Paul Prescod discusses Unicode enhancements to Python] Another approach being pursued, mostly in Japan, is Multilingualization (M17N), http://www.m17n.org/ This is supported by the appropriate government department (MITI) and is being worked on in some open source projects, most notably Ruby. For some messages from Yukihiro Matsumoto search deja for M17N in comp.lang.ruby. Matz: "We don't believe there can be any single characer-encoding that encompasses all the world's languages. We want to handle multiple encodings at the same time (if you want to)." The approach taken in the next version of Ruby is for all string and regex objects to have an encoding attribute and for there to be infrastructure to handle operations that combine encodings. One of the things that is needed in a project that tries to fulfill the needs of large character set users is to have some of those users involved in the process. When I first saw proposals to use Unicode in products at Reuters back in 1994, it looked to me (and the proposal originators) as if it could do everything anyone ever needed. It was only after strenuous and persistant argument from the Japanese and Hong Kong offices that it became apparent that Unicode just wasn't enough. A partial solution then was to include language IDs encoded in the Private Use Area. This was still being discussed when I left but while it went some way to satisfying needs, there was still some unhappiness. If Python could cooperate with Ruby here, then not only could code be shared but Python would gain access to developers with large character set /needs/ and experience. 
Neil From fredrik at pythonware.com Wed Feb 7 12:58:42 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Wed, 7 Feb 2001 12:58:42 +0100 Subject: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> <084e01c090fb$58aa9820$8119fea9@neil> Message-ID: <01a401c090fd$5165b700$0900a8c0@SPIFF> Neil Hodgson wrote: > Matz: "We don't believe there can be any single characer-encoding that > encompasses all the world's languages. We want to handle multiple encodings > at the same time (if you want to)." neither does the unicode designers, of course: the point is that unicode only deals with glyphs, not languages. most existing japanese encodings also include language info, and if you don't understand the difference, it's easy to think that unicode sucks... I'd say we need support for *languages*, not more internal encodings. Cheers /F From mal at lemburg.com Wed Feb 7 13:23:50 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Wed, 07 Feb 2001 13:23:50 +0100 Subject: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> <084e01c090fb$58aa9820$8119fea9@neil> <01a401c090fd$5165b700$0900a8c0@SPIFF> Message-ID: <3A813E56.1EE782DD@lemburg.com> Fredrik Lundh wrote: > > Neil Hodgson wrote: > > Matz: "We don't believe there can be any single characer-encoding that > > encompasses all the world's languages. We want to handle multiple encodings > > at the same time (if you want to)." > > neither does the unicode designers, of course: the point > is that unicode only deals with glyphs, not languages. > > most existing japanese encodings also include language info, > and if you don't understand the difference, it's easy to think > that unicode sucks... > > I'd say we need support for *languages*, not more internal > encodings. >>> print "Hello World!".encode('ascii','German') Hallo Welt! Nice thought ;-) Seriously, do you think that these issues are solvable at the programming language level ? I think that the information needed to fully support language specific notations is much too complicated to go into the Python core. This should be left to applications and add-on packages to figure out. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mal at lemburg.com Wed Feb 7 14:06:40 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Wed, 07 Feb 2001 14:06:40 +0100 Subject: [Python-Dev] PEPS, version control, release intervals References: <200102062150.f16LodG01662@mira.informatik.hu-berlin.de> Message-ID: <3A814860.69640E7C@lemburg.com> "Martin v. Loewis" wrote: > > > A more critical issue might be why people haven't adopted 2.0 yet; > > there seems little reason is there to continue using 1.5.2, yet I > > still see questions on the XML-SIG, for example, from people who > > haven't upgraded. Is it that Zope doesn't support it? Or that Red > > Hat and Debian don't include it? > > Availability of Linux binaries is certainly an issue. On xml-sig, one > Linux distributor (I forgot whether SuSE or Redhat) mentioned that > they won't include 2.0 in their current major release series (7.x for > both). > > Furthermore, the available 2.0 binaries won't work for either Redhat > 7.0 nor SuSE 7.0; I think collecting binaries as we did for earlier > releases is an important activity that was forgotten during 2.0. > > In addition, many packages are still not available for 2.0. 
Zope is > only one of them; gtk, Qt, etc packages are still struggling with > Unicode support. omniORBpy has #include in their > sources, ILU does not compile on 2.0 (due to wrong tests involving the > PY_MAJOR/MINOR roll-over), Fnorb falls into the select.bind parameter > change pitfall. This list probably could be continued - I'm sure many > of the maintainers of these packages would appreciate a helping hand > from some Python Guru. Does this mean that doing CORBA et al. with Python 2.0 is currently not possible ? I will have a need for this starting this summer (along with SOAP and XML), so I'd be willing to help out. Who should I contact ? -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mal at lemburg.com Wed Feb 7 16:32:29 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Wed, 07 Feb 2001 16:32:29 +0100 Subject: [Python-Dev] New benchmark results (2.1a2 vs. 2.0) Message-ID: <3A816A8D.38990044@lemburg.com> I reran the benchmark I posted a couple of days ago against the current CVS tree. Here are the results (this time I double checked that both version were compiled using the same compiler settings) on my AMD K6 (I gave back the AMK K6 to Andrew :-). This time I ran the benchmark with Python in -O mode which should give better performance characteristics: PYBENCH 0.8 Benchmark: tmp/pybench-2.1a2-O.pyb (rounds=10, warp=20) Tests: per run per oper. diff * ------------------------------------------------------------------------ BuiltinFunctionCalls: 1080.60 ms 8.48 us +7.91% BuiltinMethodLookup: 1185.60 ms 2.26 us +47.86% ConcatStrings: 1157.75 ms 7.72 us +10.03% ConcatUnicode: 1398.80 ms 9.33 us +8.76% CreateInstances: 1694.30 ms 40.34 us +12.08% CreateStringsWithConcat: 1393.90 ms 6.97 us +9.75% CreateUnicodeWithConcat: 1487.90 ms 7.44 us +7.81% DictCreation: 1794.45 ms 11.96 us +4.22% DictWithFloatKeys: 2102.75 ms 3.50 us +18.03% DictWithIntegerKeys: 1107.80 ms 1.85 us +13.33% DictWithStringKeys: 892.80 ms 1.49 us -2.39% ForLoops: 1145.95 ms 114.59 us -0.00% IfThenElse: 1229.60 ms 1.82 us +15.67% ListSlicing: 551.75 ms 157.64 us +2.23% NestedForLoops: 649.65 ms 1.86 us -0.60% NormalClassAttribute: 1253.35 ms 2.09 us +29.57% NormalInstanceAttribute: 1394.25 ms 2.32 us +51.52% PythonFunctionCalls: 942.45 ms 5.71 us -10.22% PythonMethodCalls: 975.30 ms 13.00 us +14.33% Recursion: 770.35 ms 61.63 us -0.42% SecondImport: 855.50 ms 34.22 us -1.37% SecondPackageImport: 869.40 ms 34.78 us -2.56% SecondSubmoduleImport: 1075.40 ms 43.02 us -3.93% SimpleComplexArithmetic: 1632.95 ms 7.42 us +7.04% SimpleDictManipulation: 1018.15 ms 3.39 us +11.44% SimpleFloatArithmetic: 782.25 ms 1.42 us +0.49% SimpleIntFloatArithmetic: 770.70 ms 1.17 us +0.93% SimpleIntegerArithmetic: 769.85 ms 1.17 us +0.82% SimpleListManipulation: 1097.35 ms 4.06 us +13.16% SimpleLongArithmetic: 1274.80 ms 7.73 us +8.27% SmallLists: 1982.30 ms 7.77 us +5.20% SmallTuples: 1259.90 ms 5.25 us +3.87% SpecialClassAttribute: 1265.35 ms 2.11 us +33.74% SpecialInstanceAttribute: 1694.35 ms 2.82 us +51.38% StringMappings: 1483.15 ms 11.77 us +8.04% StringPredicates: 1205.05 ms 4.30 us -4.89% StringSlicing: 1158.00 ms 6.62 us +12.65% TryExcept: 1128.70 ms 0.75 us -1.22% TryRaiseExcept: 1199.50 ms 79.97 us +6.45% TupleSlicing: 971.40 ms 9.25 us +10.99% UnicodeMappings: 1111.15 ms 61.73 us -2.04% UnicodePredicates: 1307.20 ms 5.81 us -7.54% UnicodeProperties: 1228.05 ms 6.14 
us +8.81% UnicodeSlicing: 1032.95 ms 5.90 us -7.52% ------------------------------------------------------------------------ Average round time: 59476.00 ms +6.18% *) measured against: tmp/pybench-2.0-O.pyb (rounds=10, warp=20) The version 0.8 pybench archive can be downloaded from: http://www.lemburg.com/python/pybench-0.8.zip It includes two new test for special dictionary keys. What's interesting here is that attribute lookups seem to have suffered (I consider figures above ~10% to be significant) while Python function calls got faster. The new dictionary key tests nicely show the effect of the string optimization compared to the standard lookup scheme which applies lots of error checking. OTOH, it is surprising that attribute lookup got a slowdown since these normally are string lookups in dictionaries... -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From ping at lfw.org Wed Feb 7 17:12:33 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Wed, 7 Feb 2001 08:12:33 -0800 (PST) Subject: [Python-Dev] unichr In-Reply-To: <3A80B8D0.381BD92C@ActiveState.com> Message-ID: On Tue, 6 Feb 2001, Paul Prescod wrote: > Does anyone have an example of real code that would break if unichr and > chr were merged? chr would return a regular string if possible and a > Unicode string otherwise. When the two string types are merged, there > would be no need to deprecate unichr as redundant. At the moment, since the default encoding is ASCII, something like u"abc" + chr(200) would cause an exception because 200 is outside of the ASCII range. So if unichr and chr were merged right now as you suggest, u"abc" + unichr(200) would break: unichr(200) would have to return '\xc8' (not u'\xc8') for compatibility with chr(200), yet the concatenation would fail. You can see that any argument from 128 to 255 would cause this problem, since it would be outside the definitely-8-bit range and also outside the definitely-Unicode range. -- ?!ng From guido at digicool.com Wed Feb 7 08:39:11 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 07 Feb 2001 02:39:11 -0500 Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta In-Reply-To: Your message of "Tue, 06 Feb 2001 10:48:15 +0200." <20010206084815.E63E5A840@darjeeling.zadka.site.co.il> References: <20010205170340.A3101@thyrsus.com>, <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010206084815.E63E5A840@darjeeling.zadka.site.co.il> Message-ID: <200102070739.CAA07014@cj20424-a.reston1.va.home.com> > That's how woody works now, and the binaries are called python and python2. The binaries should be called python1.5 and python2.0, and python should be a symlink to whatever is the default one. This is how the standard "make install" works, and it makes it possible for scripts to require a specific version by specifying e.g. #! /usr/bin/env python1.5 at the top. 
--Guido van Rossum (home page: http://www.python.org/~guido/) From moshez at zadka.site.co.il Wed Feb 7 20:54:42 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Wed, 7 Feb 2001 21:54:42 +0200 (IST) Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta In-Reply-To: <200102070739.CAA07014@cj20424-a.reston1.va.home.com> References: <200102070739.CAA07014@cj20424-a.reston1.va.home.com>, <20010205170340.A3101@thyrsus.com>, <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010206084815.E63E5A840@darjeeling.zadka.site.co.il> Message-ID: <20010207195442.290E2A840@darjeeling.zadka.site.co.il> On Wed, 07 Feb 2001 02:39:11 -0500, Guido van Rossum wrote: > The binaries should be called python1.5 and python2.0, and python > should be a symlink to whatever is the default one. No they shouldn't. Joey Hess wrote to debian-python about the problems such a scheme caused when Perl5.005 and Perl 5.6 tried to coexist. -- For public key: finger moshez at debian.org | gpg --import Debian - All the power, without the silly hat. From shaleh at valinux.com Wed Feb 7 21:03:57 2001 From: shaleh at valinux.com (Sean 'Shaleh' Perry) Date: Wed, 07 Feb 2001 12:03:57 -0800 (PST) Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta In-Reply-To: <20010207195442.290E2A840@darjeeling.zadka.site.co.il> Message-ID: On 07-Feb-2001 Moshe Zadka wrote: > On Wed, 07 Feb 2001 02:39:11 -0500, Guido van Rossum > wrote: >> The binaries should be called python1.5 and python2.0, and python >> should be a symlink to whatever is the default one. > > No they shouldn't. Joey Hess wrote to debian-python about the problems > such a scheme caused when Perl5.005 and Perl 5.6 tried to coexist. Guido, the problem lies in we have no default. The user may install only 2.x or 1.5. Scripts that handle the symlink can fail and then the user is left without a python. In the case where only one is installed, this is easy. however in a packaged system where any number of pythons could exist, problems arise. Now, the problem with perl was a bad one because the thing in charge of the symlink was itself a perl script. From bsass at freenet.edmonton.ab.ca Wed Feb 7 21:10:38 2001 From: bsass at freenet.edmonton.ab.ca (Bruce Sass) Date: Wed, 7 Feb 2001 13:10:38 -0700 (MST) Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta In-Reply-To: <20010207195442.290E2A840@darjeeling.zadka.site.co.il> Message-ID: On Wed, 7 Feb 2001, Moshe Zadka wrote: > On Wed, 07 Feb 2001 02:39:11 -0500, Guido van Rossum wrote: > > The binaries should be called python1.5 and python2.0, and python > > should be a symlink to whatever is the default one. > > No they shouldn't. Joey Hess wrote to debian-python about the problems > such a scheme caused when Perl5.005 and Perl 5.6 tried to coexist. Maybe that needs to be explained again, in real simple terms. My understanding is that it was a problem with the programs not properly identifying which version of Perl they need, if any. - Bruce From guido at digicool.com Wed Feb 7 09:36:56 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 07 Feb 2001 03:36:56 -0500 Subject: [Python-Dev] fp vs. fd In-Reply-To: Your message of "Tue, 06 Feb 2001 14:06:00 EST." <200102061906.f16J60x11156@snark.thyrsus.com> References: <200102061906.f16J60x11156@snark.thyrsus.com> Message-ID: <200102070836.DAA08865@cj20424-a.reston1.va.home.com> > There are a number of places in the Python library that require a > numeric file descriptor, rather than a file object. 
This complicates > code slightly and (IMO) breaches the wrapper around the file-object > abstraction (which Guido says is only supposed to depend on > stdio-level stuff). > > Are there design reasons for this, or is it historical accident? > > If the latter, I'll go through and fix these to accept either an fd > or an fp. And fix the docs, too. I don't see why this violates abstraction. Take e.g. select. Sometimes you have opened a low-level filedescriptor, e.g. with os.open() or os.pipe(). So it clearly must take an integer fd. Sometimes you have an object at hand that has a fileno() method, e.g. a socket. It would be a waste of time to have to maintain a mapping from integer fd to object in the app, so it's useful to take an object with a fileno() method. There's no problem with knowing that on some (most) platforms, standard files have an underlying implementation using integer fds, and using this in some apps. That's not to say that Python should offer standar APIs that *require* having such an implementation. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Wed Feb 7 09:41:47 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 07 Feb 2001 03:41:47 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules socketmodule.c,1.135,1.136 In-Reply-To: Your message of "Tue, 06 Feb 2001 18:07:23 EST." <20010206180723.B1269@thrak.cnri.reston.va.us> References: <20010206180723.B1269@thrak.cnri.reston.va.us> Message-ID: <200102070841.DAA08929@cj20424-a.reston1.va.home.com> > On Tue, Feb 06, 2001 at 02:58:07PM -0800, A.M. Kuchling wrote: > >! if (!PyArg_ParseTuple(args, "s|i:write", &data, &len)) > >! if (!PyArg_ParseTuple(args, "s#|i:write", &data, &len)) > > Hm... actually, this patch isn't correct after all. The |i meant you > could specify an optional integer to write out only a partial chunk of > the string; why not just slice it? Since the SSL code isn't > documented, I'm tempted to just rip out the |i. Yes, rip it out. The old API was poorly designed, and let you do bad things (e.g. pass a length much larger than len(s)). --Guido van Rossum (home page: http://www.python.org/~guido/) From paulp at ActiveState.com Wed Feb 7 21:49:15 2001 From: paulp at ActiveState.com (Paul Prescod) Date: Wed, 07 Feb 2001 12:49:15 -0800 Subject: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> <084e01c090fb$58aa9820$8119fea9@neil> Message-ID: <3A81B4CB.DDA4E304@ActiveState.com> Neil Hodgson wrote: > > ... > > Matz: "We don't believe there can be any single characer-encoding that > encompasses all the world's languages. We want to handle multiple encodings > at the same time (if you want to)." > > The approach taken in the next version of Ruby is for all string and > regex objects to have an encoding attribute and for there to be > infrastructure to handle operations that combine encodings. I think Python should support as many encodings as people invent. Conceptually it doesn't cost me anything, but I'll leave the implementation to you. :) But an encoding is only a way of *representing a character in memory or on disk*. Asking for Python to support multiple encodings in memory is like asking for it to support both two's complement and one's complement long integers. Multiple encodings can be only interesting as a performance issue because the encoding of memory is *transparent* to the *Python programmer*. 
We could support a thousand encodings internally but a Python programmer should never know or care which one they are dealing with. Which leads me to ask "what's the point"? Would the small performance gains be worth it? > One of the things that is needed in a project that tries to fulfill the > needs of large character set users is to have some of those users involved > in the process. When I first saw proposals to use Unicode in products at > Reuters back in 1994, it looked to me (and the proposal originators) as if > it could do everything anyone ever needed. It was only after strenuous and > persistant argument from the Japanese and Hong Kong offices that it became > apparent that Unicode just wasn't enough. A partial solution then was to > include language IDs encoded in the Private Use Area. This was still being > discussed when I left but while it went some way to satisfying needs, there > was still some unhappiness. I think that Unicode has changed quite a bit since 1994. Nevertheless, language IDs is a fine solution. Unicode is not about distinguishing between languages -- only characters. There is no better "non-Unicode" solution that I've ever heard of. > If Python could cooperate with Ruby here, then not only could code be > shared but Python would gain access to developers with large character set > /needs/ and experience. I don't see how we could meaningfully cooperate on such a core language issue. We could of course share codecs but that has nothing to do with Python's internal representation. Paul Prescod From akuchlin at cnri.reston.va.us Wed Feb 7 22:00:02 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Wed, 7 Feb 2001 16:00:02 -0500 Subject: [Python-Dev] Pre-PEP: Python Character Model In-Reply-To: <3A81B4CB.DDA4E304@ActiveState.com>; from paulp@ActiveState.com on Wed, Feb 07, 2001 at 12:49:15PM -0800 References: <3A7F9084.509510B8@ActiveState.com> <084e01c090fb$58aa9820$8119fea9@neil> <3A81B4CB.DDA4E304@ActiveState.com> Message-ID: <20010207160002.A2123@thrak.cnri.reston.va.us> On Wed, Feb 07, 2001 at 12:49:15PM -0800, Paul Prescod quoted: >> The approach taken in the next version of Ruby is for all string and >> regex objects to have an encoding attribute and for there to be >> infrastructure to handle operations that combine encodings. Any idea if this next version of Ruby is available in its current state, or if it's vaporware? It might be worth looking at what exactly it implements, but I wonder if this is just Matz's idea and he hasn't yet tried implementing it. >We could support a thousand encodings internally but a Python programmer >should never know or care which one they are dealing with. Which leads >me to ask "what's the point"? Would the small performance gains be worth >it? I'd worry that implementing a regex engine for multiple encodings would be impossible or, if possible, it would be quite slow because you'd need to abstract every single character retrieval into a function call that decodes a single character for a given encoding. Massive surgery was required to make Perl handle UTF-8, for example, and I don't know that Perl's engine is actually fully operational with UTF-8 yet. 
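(To make that cost concrete, here is a minimal sketch of the kind of per-character retrieval hook being described; the function names are invented and this is not how any existing engine is written. A matcher coded against such an interface pays a call, and a decode, for every character it examines.)

    def getchar_latin1(data, pos):
        # One byte per character; still one call per character examined.
        return unichr(ord(data[pos])), pos + 1

    def getchar_utf8(data, pos):
        # Decode a single, possibly multi-byte, UTF-8 sequence.
        first = ord(data[pos])
        if first < 0x80:
            width = 1
        elif first < 0xe0:
            width = 2
        elif first < 0xf0:
            width = 3
        else:
            width = 4
        return unicode(data[pos:pos + width], 'utf-8'), pos + width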
--amk From nhodgson at bigpond.net.au Wed Feb 7 22:37:18 2001 From: nhodgson at bigpond.net.au (Neil Hodgson) Date: Thu, 8 Feb 2001 08:37:18 +1100 Subject: [Python-Dev] Pre-PEP: Python Character Model References: <3A7F9084.509510B8@ActiveState.com> <084e01c090fb$58aa9820$8119fea9@neil> <3A81B4CB.DDA4E304@ActiveState.com> <20010207160002.A2123@thrak.cnri.reston.va.us> Message-ID: <03cd01c0914e$30aa7d10$8119fea9@neil> Andrew Kuchling: > Any idea if this next version of Ruby is available in its current > state, or if it's vaporware? It might be worth looking at what > exactly it implements, but I wonder if this is just Matz's idea and he > hasn't yet tried implementing it. AFAIK, 1.7 is still vaporware although the impression that I got was this was being implemented by Matz when he mentioned it in mid December. Some code may be available from CVS but I haven't been following that closely. > I'd worry that implementing a regex engine for multiple encodings > would be impossible or, if possible, it would be quite slow because > you'd need to abstract every single character retrieval into a > function call that decodes a single character for a given encoding. I'd guess at some sort of type promotion system with caching to avoid extra conversions. Say you want to search a Shift-JIS string for a KOI8 string (unlikely but they do share many characters). The infrastructure checks the character sets representable in the encodings and chooses a super-type that can include all possibilities in the expression, then promotes both arguments by reencoding and performs the operation. The super-type would likely be Unicode based although given Matz' desire for larger-than-Unicode character sets, it may be something else. Neil From andy at reportlab.com Thu Feb 8 00:06:12 2001 From: andy at reportlab.com (Andy Robinson) Date: Wed, 7 Feb 2001 23:06:12 -0000 Subject: [I18n-sig] Re: [Python-Dev] Pre-PEP: Python Character Model In-Reply-To: <3A801E49.F8DF70E2@ActiveState.com> Message-ID: > The last time we went around there was an anti-Unicode faction who > argued that adding Unicode support was fine but making it > the default would inconvenience Japanese users. Whoops, I nearly missed the biggest debate of the year! I guess the faction was Brian and I, and our concerns were misunderstood. We can lay this to rest forever now as the current implementation and forward direction incorporate everything I originally hoped for: (1) Frequently you need to work with byte arrays, but need a rich bunch of string-like routines - search and replace, regex etc. This applies both to non-natural-language data and also to the special case of corrupt native encodings that need repair. We loosely defined the 'string interface' in UserString, so that other people could define string-like types if they wished and so that users can expect to find certain methods and operations in both Unicode and Byte Array types. I'd be really happy one day to explicitly type x= ByteArray('some raw data') as long as I had my old friends split, join, find etc. (2) Japanese projects often need small extensions to codecs to deal with user-defined characters. Java and VB give you some canned codecs but no way to extend them. All the Python asian codec drafts involve 'open' code you can hack and use simple dictionaries for mapping tables; so it will be really easy to roll your own "Shift-JIS-plus" with 20 extra characters mapping to a private use area. This will be a huge win over other languages. 
(3) The Unicode conversion was based on a more general notion of 'stream conversion filters' which work with bytes. This leaves the door open to writing, for example, a direct Shift-JIS-to-EUC filter which adds nothing in the case of clean data but is much more robust in the case of user-defined characters or which can handle cleanup of misencoded data. We could also write image manipulation or crypto codecs. Some of us hope to provide general machinery for fast handling of byte-stream-filters which could be useful in image processing and crypto as well as encodings. This might need an extended or different lookup function (after all, neither end of the filter need be Unicode) but could be cleanly layered on top of the codec mechanism we have built in. (4) I agree 100% on being explicit whenever you do I/O or conversion and on generally using Unicode characters where possible. Defaults are evil. But we needed a compatibility route to get there. Guido has said that long term there will be Unicode strings and Byte Arrays. That's the time to require arguments to open(). > Similarly, we could improve socket objects so that they > have different > readtext/readbinary and writetext/writebinary without unifying the > string objects. There are lots of small changes we can make without > breaking anything. One I would like to see right now is a > unification of > chr() and unichr(). Here's a thought. How about BinaryFile/BinarySocket/ByteArray which do not need an encoding, and File/Socket/String which require explicit encodings on opeening. We keep broad parity between their methods. That seems more straightforward to me than having text/binary methods, and also provides a cleaner upgrade path for existing code. - Andy From skip at mojam.com Thu Feb 8 00:07:16 2001 From: skip at mojam.com (Skip Montanaro) Date: Wed, 7 Feb 2001 17:07:16 -0600 (CST) Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta In-Reply-To: <20010207195442.290E2A840@darjeeling.zadka.site.co.il> References: <200102070739.CAA07014@cj20424-a.reston1.va.home.com> <20010205170340.A3101@thyrsus.com> <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010206084815.E63E5A840@darjeeling.zadka.site.co.il> <20010207195442.290E2A840@darjeeling.zadka.site.co.il> Message-ID: <14977.54564.430670.260975@beluga.mojam.com> Moshe> No they shouldn't. Joey Hess wrote to debian-python about the Moshe> problems such a scheme caused when Perl5.005 and Perl 5.6 tried Moshe> to coexist. Can you summarize or post that message here? I've never had a problem with the scheme that Python currently uses aside from occasionally having the redirect the python symlink after an install. Skip From martin at loewis.home.cs.tu-berlin.de Thu Feb 8 01:06:41 2001 From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis) Date: Thu, 8 Feb 2001 01:06:41 +0100 Subject: [Python-Dev] PEPS, version control, release intervals In-Reply-To: <3A814860.69640E7C@lemburg.com> (mal@lemburg.com) References: <200102062150.f16LodG01662@mira.informatik.hu-berlin.de> <3A814860.69640E7C@lemburg.com> Message-ID: <200102080006.f1806fj01504@mira.informatik.hu-berlin.de> > Does this mean that doing CORBA et al. with Python 2.0 is > currently not possible ? It is possible; people have posted patches to Fnorb (which only add tuples in the right places). Also, the omniORB CVS cooperates with Python 2.0. There just is nothing that's officially released. 
> I will have a need for this starting this summer (along with SOAP > and XML), so I'd be willing to help out. Who should I contact ? Depends on what you want to take as a starting point. For Fnorb, it would be DSTC, although it appears to be "officially unmaintained" for the moment. For omniORB, contact Duncan Grisby - he's usually quite responsive. For ILU, it would be Bill Janssen; I'm sure he'll accept patches. If you need something in a commercial environment (i.e. where purchasing licenses is not an issue), feel free to contact me in private :-) In general, the DO SIG (do-sig at python.org) is a good place to discuss both CORBA and SOAP. Regards, Martin From sdm7g at virginia.edu Thu Feb 8 05:31:50 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Wed, 7 Feb 2001 23:31:50 -0500 (EST) Subject: [Python-Dev] more 2.1a2 macosx build problems Message-ID: Is anyone else tracking builds on macosx ? A bug I reported [#131170] on the 2.1a2 release has been growing more heads... Initial problem: make install fails as it tries to run ranlib on a shared library: ranlib: file: /usr/local/lib/python2.1/config/libpython2.1.dylib is not an archive commented out that line in the makefile: @if test -d $(LDLIBRARY); then :; else \ $(INSTALL_DATA) $(LDLIBRARY) $(LIBPL)/$(LDLIBRARY) ; \ # $(RANLIB) $(LIBPL)/$(LDLIBRARY) ; \ make and install seem to work, however, if you run python from somewhere other than the build directory, you get a fatal error: dyld: python2.1 can't open library: libpython2.1.dylib (No such file or directory, errno = 2) looking at executable with 'otool -L' shows that while system frameworks have their complete pathnames, libpython2.1.dylib has no path, so it's expected to be in the current directory. Added "-install_name $(LIBPL)/$(LDLIBRARY)" to the libtool command to tell it that it will be installed somewhere other than the current build directory. 'make' fails on setup when python can't find os module. Investigating that, it looks like sys.path is all confused. Looking at Modules/getpath.c, it looks like the WITH_NEXT_FRAMEWORK conditional code is getting the path from the shared library and not the executable. -- Steve Majewski From tim_one at email.msn.com Thu Feb 8 06:24:41 2001 From: tim_one at email.msn.com (Tim Peters) Date: Thu, 8 Feb 2001 00:24:41 -0500 Subject: [Python-Dev] fp vs. fd In-Reply-To: Message-ID: [Eric S. Raymond] > There are a number of places in the Python library that require a > numeric file descriptor, rather than a file object. [Ka-Ping Yee] > I'm curious... where? mmap.mmap(fileno, ...) for me most recently, where, usually, it's simply annoying. fresh-on-my-mind-ly y'rs - tim From uche.ogbuji at fourthought.com Thu Feb 8 08:21:55 2001 From: uche.ogbuji at fourthought.com (Uche Ogbuji) Date: Thu, 08 Feb 2001 00:21:55 -0700 Subject: [Python-Dev] PEPS, version control, release intervals In-Reply-To: Message from "Martin v. Loewis" of "Tue, 06 Feb 2001 22:50:39 +0100." <200102062150.f16LodG01662@mira.informatik.hu-berlin.de> Message-ID: <200102080721.AAA26782@localhost.localdomain> > Availability of Linux binaries is certainly an issue. On xml-sig, one > Linux distributor (I forgot whether SuSE or Redhat) mentioned that > they won't include 2.0 in their current major release series (7.x for > both). 'Twas Red Hat. However, others claim to have spotted Python 2.0 in Rawhide and supposedly both versions might be included until 8.0. > In addition, many packages are still not available for 2.0. 
Zope is > only one of them; gtk, Qt, etc packages are still struggling with > Unicode support. omniORBpy has #include in their > sources, I hadn't noticed this. OmniORBpy compiles and runs just fine for me using Python 2.0 and 2.1a2, except that it throws BAD_PARAM when passed Unicode objects in place of strings. -- Uche Ogbuji Principal Consultant uche.ogbuji at fourthought.com +1 303 583 9900 x 101 Fourthought, Inc. http://Fourthought.com 4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA Software-engineering, knowledge-management, XML, CORBA, Linux, Python From uche.ogbuji at fourthought.com Thu Feb 8 08:26:25 2001 From: uche.ogbuji at fourthought.com (Uche Ogbuji) Date: Thu, 08 Feb 2001 00:26:25 -0700 Subject: [Python-Dev] PEPS, version control, release intervals In-Reply-To: Message from "M.-A. Lemburg" of "Wed, 07 Feb 2001 14:06:40 +0100." <3A814860.69640E7C@lemburg.com> Message-ID: <200102080726.AAA27240@localhost.localdomain> > Does this mean that doing CORBA et al. with Python 2.0 is > currently not possible ? > > I will have a need for this starting this summer (along with SOAP > and XML), so I'd be willing to help out. Who should I contact ? No. You can use OmniORBpy as long as you're careful not to mix your strings with your unicode objects. I don't know the tale of SOAP. soaplib seems stuck at 0.8. Not that I blame anyone: the experience of hacking a subset of SOAP into 4Suite Server left me in a bad mood for days. Someone was tanked when they came up with that. XML is rather an odd man out in your list. Do you mean custom XML over HTTP or somesuch? -- Uche Ogbuji Principal Consultant uche.ogbuji at fourthought.com +1 303 583 9900 x 101 Fourthought, Inc. http://Fourthought.com 4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA Software-engineering, knowledge-management, XML, CORBA, Linux, Python From mal at lemburg.com Thu Feb 8 12:35:22 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Thu, 08 Feb 2001 12:35:22 +0100 Subject: [Python-Dev] PEPS, version control, release intervals References: <200102080726.AAA27240@localhost.localdomain> Message-ID: <3A82847A.14496A01@lemburg.com> Uche Ogbuji wrote: > > > Does this mean that doing CORBA et al. with Python 2.0 is > > currently not possible ? > > > > I will have a need for this starting this summer (along with SOAP > > and XML), so I'd be willing to help out. Who should I contact ? > > No. You can use OmniORBpy as long as you're careful not to mix your strings > with your unicode objects. Good news :-) Thanks. > I don't know the tale of SOAP. soaplib seems stuck at 0.8. Not that I blame > anyone: the experience of hacking a subset of SOAP into 4Suite Server left me > in a bad mood for days. Someone was tanked when they came up with that. > > XML is rather an odd man out in your list. Do you mean custom XML over HTTP > or somesuch? Well, for one SOAP is XML-based and I am planning to add full XML support to our application server this summer (still waiting for the dust to settle :-). The reason for trying to support SOAP is that some very important legacy system vendors (e.g. SAP) are moving into this direction. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mal at lemburg.com Thu Feb 8 13:53:57 2001 From: mal at lemburg.com (M.-A. 
Lemburg) Date: Thu, 08 Feb 2001 13:53:57 +0100 Subject: [Python-Dev] PEPS, version control, release intervals References: <200102062150.f16LodG01662@mira.informatik.hu-berlin.de> <3A814860.69640E7C@lemburg.com> <200102080006.f1806fj01504@mira.informatik.hu-berlin.de> Message-ID: <3A8296E5.C7853746@lemburg.com> "Martin v. Loewis" wrote: > > > Does this mean that doing CORBA et al. with Python 2.0 is > > currently not possible ? > > It is possible; people have posted patches to Fnorb (which only add > tuples in the right places). Also, the omniORB CVS cooperates with > Python 2.0. There just is nothing that's officially released. Looks like this is another issue with the current pace at which Python releases appear. I am starting to get these problems too with my mx tools: people download the wrong version and then find that the tools don't work with their installed version of Python (on Windows that is). Luckily, distutils makes this easier to handle, but many of the tools out there still don't use it. > > I will have a need for this starting this summer (along with SOAP > > and XML), so I'd be willing to help out. Who should I contact ? > > Depends on what you want to take as a starting point. For Fnorb, it > would be DSTC, although it appears to be "officially unmaintained" for > the moment. For omniORB, contact Duncan Grisby - he's usually quite > responsive. For ILU, it would be Bill Janssen; I'm sure he'll accept > patches. If you need something in a commercial environment (i.e. where > purchasing licenses is not an issue), feel free to contact me in > private :-) Depends on the licensing costs, but yes, this is for a commercial product ;-) > In general, the DO SIG (do-sig at python.org) is a good place to discuss > both CORBA and SOAP. Thank you for the details. I'll sign up to that SIG as well (that should get me to 300 emails a day :-/). -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From Barrett at stsci.edu Thu Feb 8 23:43:37 2001 From: Barrett at stsci.edu (Paul Barrett) Date: Thu, 8 Feb 2001 17:43:37 -0500 (EST) Subject: [Python-Dev] PEP 209: Multi-dimensional Arrays Message-ID: <14979.7675.800077.147879@nem-srvr.stsci.edu> The first draft of PEP 209: Multi-dimensional Arrays is ready for comment. It's primary emphasis is aimed at array operations, but its design is intended to provide a general framework for working with multi-dimensional arrays. This PEP covers a lot of ground and so does not go into much detail at this stage. The hope is that we can fill them in as time goes on. It also presents several Open Issues that need to be discussed. Cheers, Paul P.S. - Sorry Paul (Dubois). We couldn't wait any longer. -- Dr. Paul Barrett Space Telescope Science Institute Phone: 410-338-4475 ESS/Science Software Group FAX: 410-338-4767 Baltimore, MD 21218 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ PEP: 209 Title: Multi-dimensional Arrays Version: Author: barrett at stsci.edu (Paul Barrett), oliphant at ee.byu.edu (Travis Oliphant) Python-Version: 2.2 Status: Draft Type: Standards Track Created: 03-Jan-2001 Post-History: Abstract This PEP proposes a redesign and re-implementation of the multi- dimensional array module, Numeric, to make it easier to add new features and functionality to the module. 
Aspects of Numeric 2 that will receive special attention are efficient access to arrays exceeding a gigabyte in size and composed of inhomogeneous data structures or records. The proposed design uses four Python classes: ArrayType, UFunc, Array, and ArrayView; and a low-level C-extension module, _ufunc, to handle the array operations efficiently. In addition, each array type has its own C-extension module which defines the coercion rules, operations, and methods for that type. This design enables new types, features, and functionality to be added in a modular fashion. The new version will introduce some incompatibilities with the current Numeric. Motivation Multi-dimensional arrays are commonly used to store and manipulate data in science, engineering, and computing. Python currently has an extension module, named Numeric (henceforth called Numeric 1), which provides a satisfactory set of functionality for users manipulating homogeneous arrays of data of moderate size (of order 10 MB). For access to larger arrays (of order 100 MB or more) of possibly inhomogeneous data, the implementation of Numeric 1 is inefficient and cumbersome. In the future, requests by the Numerical Python community for additional functionality is also likely as PEPs 211: Adding New Linear Operators to Python, and 225: Elementwise/Objectwise Operators illustrate. Proposal This proposal recommends a re-design and re-implementation of Numeric 1, henceforth called Numeric 2, which will enable new types, features, and functionality to be added in an easy and modular manner. The initial design of Numeric 2 should focus on providing a generic framework for manipulating arrays of various types and should enable a straightforward mechanism for adding new array types and UFuncs. Functional methods that are more specific to various disciplines can then be layered on top of this core. This new module will still be called Numeric and most of the behavior found in Numeric 1 will be preserved. The proposed design uses four Python classes: ArrayType, UFunc, Array, and ArrayView; and a low-level C-extension module to handle the array operations efficiently. In addition, each array type has its own C-extension module which defines the coercion rules, operations, and methods for that type. At a later date, when core functionality is stable, some Python classes can be converted to C-extension types. Some planned features are: 1. Improved memory usage This feature is particularly important when handling large arrays and can produce significant improvements in performance as well as memory usage. We have identified several areas where memory usage can be improved: a. Use a local coercion model Instead of using Python's global coercion model which creates temporary arrays, Numeric 2, like Numeric 1, will implement a local coercion model as described in PEP 208 which defers the responsibility of coercion to the operator. By using internal buffers, a coercion operation can be done for each array (including output arrays), if necessary, at the time of the operation. Benchmarks [1] have shown that performance is at most degraded only slightly and is improved in cases where the internal buffers are less than the L2 cache size and the processor is under load. To avoid array coercion altogether, C functions having arguments of mixed type are allowed in Numeric 2. b. Avoid creation of temporary arrays In complex array expressions (i.e. 
having more than one operation), each operation will create a temporary array which will be used and then deleted by the succeeding operation. A better approach would be to identify these temporary arrays and reuse their data buffers when possible, namely when the array shape and type are the same as the temporary array being created. This can be done by checking the temparory array's reference count. If it is 1, then it will be deleted once the operation is done and is a candidate for reuse. c. Optional use of memory-mapped files Numeric users sometimes need to access data from very large files or to handle data that is greater than the available memory. Memory-mapped arrays provide a mechanism to do this by storing the data on disk while making it appear to be in memory. Memory- mapped arrays should improve access to all files by eliminating one of two copy steps during a file access. Numeric should be able to access in-memory and memory-mapped arrays transparently. d. Record access In some fields of science, data is stored in files as binary records. For example in astronomy, photon data is stored as a 1 dimensional list of photons in order of arrival time. These records or C-like structures contain information about the detected photon, such as its arrival time, its position on the detector, and its energy. Each field may be of a different type, such as char, int, or float. Such arrays introduce new issues that must be dealt with, in particular byte alignment or byte swapping may need to be performed for the numeric values to be properly accessed (though byte swapping is also an issue for memory mapped data). Numeric 2 is designed to automatically handle alignment and representational issues when data is accessed or operated on. There are two approaches to implementing records; as either a derived array class or a special array type, depending on your point-of- view. We defer this discussion to the Open Issues section. 2. Additional array types Numeric 1 has 11 defined types: char, ubyte, sbyte, short, int, long, float, double, cfloat, cdouble, and object. There are no ushort, uint, or ulong types, nor are there more complex types such as a bit type which is of use to some fields of science and possibly for implementing masked-arrays. The design of Numeric 1 makes the addition of these and other types a difficult and error-prone process. To enable the easy addition (and deletion) of new array types such as a bit type described below, a re-design of Numeric is necessary. a. Bit type The result of a rich comparison between arrays is an array of boolean values. The result can be stored in an array of type char, but this is an unnecessary waste of memory. A better implementation would use a bit or boolean type, compressing the array size by a factor of eight. This is currently being implemented for Numeric 1 (by Travis Oliphant) and should be included in Numeric 2. 3. Enhanced array indexing syntax The extended slicing syntax was added to Python to provide greater flexibility when manipulating Numeric arrays by allowing step-sizes greater than 1. This syntax works well as a shorthand for a list of regularly spaced indices. For those situations where a list of irregularly spaced indices are needed, an enhanced array indexing syntax would allow 1-D arrays to be arguments. 4. Rich comparisons The implementation of PEP 207: Rich Comparisons in Python 2.1 provides additional flexibility when manipulating arrays. We intend to implement this feature in Numeric 2. 5. 
Array broadcasting rules When an operation between a scalar and an array is done, the implied behavior is to create a new array having the same shape as the array operand containing the scalar value. This is called array broadcasting. It also works with arrays of lesser rank, such as vectors. This implicit behavior is implemented in Numeric 1 and will also be implemented in Numeric 2. Design and Implementation The design of Numeric 2 has four primary classes: 1. ArrayType: This is a simple class that describes the fundamental properties of an array-type, e.g. its name, its size in bytes, its coercion relations with respect to other types, etc., e.g. > Int32 = ArrayType('Int32', 4, 'doc-string') Its relation to the other types is defined when the C-extension module for that type is imported. The corresponding Python code is: > Int32.astype[Real64] = Real64 This says that the Real64 array-type has higher priority than the Int32 array-type. The following attributes and methods are proposed for the core implementation. Additional attributes can be added on an individual basis, e.g. .bitsize or .bitstrides for the bit type. Attributes: .name: e.g. "Int32", "Float64", etc. .typecode: e.g. 'i', 'f', etc. (for backward compatibility) .size (in bytes): e.g. 4, 8, etc. .array_rules (mapping): rules between array types .pyobj_rules (mapping): rules between array and python types .doc: documentation string Methods: __init__(): initialization __del__(): destruction __repr__(): representation C-API: This still needs to be fleshed-out. 2. UFunc: This class is the heart of Numeric 2. Its design is similar to that of ArrayType in that the UFunc creates a singleton callable object whose attributes are name, total and input number of arguments, a document string, and an empty CFunc dictionary; e.g. > add = UFunc('add', 3, 2, 'doc-string') When defined the add instance has no C functions associated with it and therefore can do no work. The CFunc dictionary is populated or registerd later when the C-extension module for an array-type is imported. The arguments of the regiser method are: function name, function descriptor, and the CUFunc object. The corresponding Python code is > add.register('add', (Int32, Int32, Int32), cfunc-add) In the initialization function of an array type module, e.g. Int32, there are two C API functions: one to initialize the coercion rules and the other to register the CFunc objects. When an operation is applied to some arrays, the __call__ method is invoked. It gets the type of each array (if the output array is not given, it is created from the coercion rules) and checks the CFunc dictionary for a key that matches the argument types. If it exists the operation is performed immediately, otherwise the coercion rules are used to search for a related operation and set of conversion functions. The __call__ method then invokes a compute method written in C to iterate over slices of each array, namely: > _ufunc.compute(slice, data, func, swap, conv) The 'func' argument is a CFuncObject, while the 'swap' and 'conv' arguments are lists of CFuncObjects for those arrays needing pre- or post-processing, otherwise None is used. The data argument is a list of buffer objects, and the slice argument gives the number of iterations for each dimension along with the buffer offset and step size for each array and each dimension. We have predefined several UFuncs for use by the __call__ method: cast, swap, getobj, and setobj. The cast and swap functions do coercion and byte-swapping, resp. 
and the getobj and setobj functions do coercion between Numeric arrays and Python sequences. The following attributes and methods are proposed for the core implementation. Attributes: .name: e.g. "add", "subtract", etc. .nargs: number of total arguments .iargs: number of input arguments .cfuncs (mapping): the set C functions .doc: documentation string Methods: __init__(): initialization __del__(): destruction __repr__(): representation __call__(): look-up and dispatch method initrule(): initialize coercion rule uninitrule(): uninitialize coercion rule register(): register a CUFunc unregister(): unregister a CUFunc C-API: This still needs to be fleshed-out. 3. Array: This class contains information about the array, such as shape, type, endian-ness of the data, etc.. Its operators, '+', '-', etc. just invoke the corresponding UFunc function, e.g. > def __add__(self, other): > return ufunc.add(self, other) The following attributes, methods, and functions are proposed for the core implementation. Attributes: .shape: shape of the array .format: type of the array .real (only complex): real part of a complex array .imag (only complex): imaginary part of a complex array Methods: __init__(): initialization __del__(): destruction __repr_(): representation __str__(): pretty representation __cmp__(): rich comparison __len__(): __getitem__(): __setitem__(): __getslice__(): __setslice__(): numeric methods: copy(): copy of array aslist(): create list from array asstring(): create string from array Functions: fromlist(): create array from sequence fromstring(): create array from string array(): create array with shape and value concat(): concatenate two arrays resize(): resize array C-API: This still needs to be fleshed-out. 4. ArrayView This class is similar to the Array class except that the reshape and flat methods will raise exceptions, since non-contiguous arrays cannot be reshaped or flattened using just pointer and step-size information. C-API: This still needs to be fleshed-out. 5. C-extension modules: Numeric2 will have several C-extension modules. a. _ufunc: The primary module of this set is the _ufuncmodule.c. The intention of this module is to do the bare minimum, i.e. iterate over arrays using a specified C function. The interface of these functions is the same as Numeric 1, i.e. int (*CFunc)(char *data, int *steps, int repeat, void *func); and their functionality is expected to be the same, i.e. they iterate over the inner-most dimension. The following attributes and methods are proposed for the core implementation. Attibutes: Methods: compute(): C-API: This still needs to be fleshed-out. b. _int32, _real64, etc.: There will also be C-extension modules for each array type, e.g. _int32module.c, _real64module.c, etc. As mentioned previously, when these modules are imported by the UFunc module, they will automatically register their functions and coercion rules. New or improved versions of these modules can be easily implemented and used without affecting the rest of Numeric 2. Open Issues 1. Does slicing syntax default to copy or view behavior? The default behavior of Python is to return a copy of a sub-list or tuple when slicing syntax is used, whereas Numeric 1 returns a view into the array. The choice made for Numeric 1 is apparently for reasons of performance: the developers wish to avoid the penalty of allocating and copying the data buffer during each array operation and feel that the need for a deepcopy of an array to be rare. 
Yet, some have argued that Numeric's slice notation should also have copy behavior to be consistent with Python lists. In this case the performance penalty associated with copy behavior can be minimized by implementing copy-on-write. This scheme has both arrays sharing one data buffer (as in view behavior) until either array is assigned new data at which point a copy of the data buffer is made. View behavior would then be implemented by an ArrayView class, whose behavior be similar to Numeric 1 arrays, i.e. .shape is not settable for non-contiguous arrays. The use of an ArrayView class also makes explicit what type of data the array contains. 2. Does item syntax default to copy or view behavior? A similar question arises with the item syntax. For example, if a = [[0,1,2], [3,4,5]] and b = a[0], then changing b[0] also changes a[0][0], because a[0] is a reference or view of the first row of a. Therefore, if c is a 2-d array, it would appear that c[i] should return a 1-d array which is a view into, instead of a copy of, c for consistency. Yet, c[i] can be considered just a shorthand for c[i,:] which would imply copy behavior assuming slicing syntax returns a copy. Should Numeric 2 behave the same way as lists and return a view or should it return a copy. 3. How is scalar coercion implemented? Python has fewer numeric types than Numeric which can cause coercion problems. For example when multiplying a Python scalar of type float and a Numeric array of type float, the Numeric array is converted to a double, since the Python float type is actually a double. This is often not the desired behavior, since the Numeric array will be doubled in size which is likely to be annoying, particularly for very large arrays. We prefer that the array type trumps the python type for the same type class, namely integer, float, and complex. Therefore an operation between a Python integer and an Int16 (short) array will return an Int16 array. Whereas an operation between a Python float and an Int16 array would return a Float64 (double) array. Operations between two arrays use normal coercion rules. 4. How is integer division handled? In a future version of Python, the behavior of integer division will change. The operands will be converted to floats, so the result will be a float. If we implement the proposed scalar coercion rules where arrays have precedence over Python scalars, then dividing an array by an integer will return an integer array and will not be consistent with a future version of Python which would return an array of type double. Scientific programmers are familiar with the distinction between integer and float-point division, so should Numeric 2 continue with this behavior? 5. How should records be implemented? There are two approaches to implementing records depending on your point-of-view. The first is two divide arrays into separate classes depending on the behavior of their types. For example numeric arrays are one class, strings a second, and records a third, because the range and type of operations of each class differ. As such, a record array is not a new type, but a mechanism for a more flexible form of array. To easily access and manipulate such complex data, the class is comprised of numeric arrays having different byte offsets into the data buffer. For example, one might have a table consisting of an array of Int16, Real32 values. 
Two numeric arrays, one with an offset of 0 bytes and a stride of 6 bytes to be interpeted as Int16, and one with an offset of 2 bytes and a stride of 6 bytes to be interpreted as Real32 would represent the record array. Both numeric arrays would refer to the same data buffer, but have different offset and stride attributes, and a different numeric type. The second approach is to consider a record as one of many array types, albeit with fewer, and possibly different, array operations than for numeric arrays. This approach considers an array type to be a mapping of a fixed-length string. The mapping can either be simple, like integer and floating-point numbers, or complex, like a complex number, a byte string, and a C-structure. The record type effectively merges the struct and Numeric modules into a multi-dimensional struct array. This approach implies certain changes to the array interface. For example, the 'typecode' keyword argument should probably be changed to the more descriptive 'format' keyword. a. How are record semantics defined and implemented? Which ever implementation approach is taken for records, the syntax and semantics of how they are to be accessed and manipulated must be decided, if one wishes to have access to sub-fields of records. In this case, the record type can essentially be considered an inhomogeneous list, like a tuple returned by the unpack method of the struct module; and a 1-d array of records may be interpreted as a 2-d array with the second dimension being the index into the list of fields. This enhanced array semantics makes access to an array of one or more of the fields easy and straightforward. It also allows a user to do array operations on a field in a natural and intuitive way. If we assume that records are implemented as an array type, then last dimension defaults to 0 and can therefore be neglected for arrays comprised of simple types, like numeric. 6. How are masked-arrays implemented? Masked-arrays in Numeric 1 are implemented as a separate array class. With the ability to add new array types to Numeric 2, it is possible that masked-arrays in Numeric 2 could be implemented as a new array type instead of an array class. 7. How are numerical errors handled (IEEE floating-point errors in particular)? It is not clear to the proposers (Paul Barrett and Travis Oliphant) what is the best or preferred way of handling errors. Since most of the C functions that do the operation, iterate over the inner-most (last) dimension of the array. This dimension could contain a thousand or more items having one or more errors of differing type, such as divide-by-zero, underflow, and overflow. Additionally, keeping track of these errors may come at the expense of performance. Therefore, we suggest several options: a. Print a message of the most severe error, leaving it to the user to locate the errors. b. Print a message of all errors that occurred and the number of occurrences, leaving it to the user to locate the errors. c. Print a message of all errors that occurred and a list of where they occurred. d. Or use a hybrid approach, printing only the most severe error, yet keeping track of what and where the errors occurred. This would allow the user to locate the errors while keeping the error message brief. 8. What features are needed to ease the integration of FORTRAN libraries and code? It would be a good idea at this stage to consider how to ease the integration of FORTRAN libraries and user code in Numeric 2. Implementation Steps 1. 
Implement basic UFunc capability a. Minimal Array class: Necessary class attributes and methods, e.g. .shape, .data, .type, etc. b. Minimal ArrayType class: Int32, Real64, Complex64, Char, Object c. Minimall UFunc class: UFunc instantiation, CFunction registration, UFunc call for 1-D arrays including the rules for doing alignment, byte-swapping, and coercion. d. Minimal C-extension module: _UFunc, which does the innermost array loop in C. This step implements whatever is needed to do: 'c = add(a, b)' where a, b, and c are 1-D arrays. It teaches us how to add new UFuncs, to coerce the arrays, to pass the necessary information to a C iterator method and to do the actually computation. 2. Continue enhancing the UFunc iterator and Array class a. Implement some access methods for the Array class: print, repr, getitem, setitem, etc. b. Implement multidimensional arrays c. Implement some of basic Array methods using UFuncs: +, -, *, /, etc. d. Enable UFuncs to use Python sequences. 3. Complete the standard UFunc and Array class behavior a. Implement getslice and setslice behavior b. Work on Array broadcasting rules c. Implement Record type 4. Add additional functionality a. Add more UFuncs b. Implement buffer or mmap access Incompatibilities The following is a list of incompatibilities in behavior between Numeric 1 and Numeric 2. 1. Scalar corcion rules Numeric 1 has single set of coercion rules for array and Python numeric types. This can cause unexpected and annoying problems during the calculation of an array expression. Numeric 2 intends to overcome these problems by having two sets of coercion rules: one for arrays and Python numeric types, and another just for arrays. 2. No savespace attribute The savespace attribute in Numeric 1 makes arrays with this attribute set take precedence over those that do not have it set. Numeric 2 will not have such an attribute and therefore normal array coercion rules will be in effect. 3. Slicing syntax returns a copy The slicing syntax in Numeric 1 returns a view into the original array. The slicing behavior for Numeric 2 will be a copy. You should use the ArrayView class to get a view into an array. 4. Boolean comparisons return a boolean array A comparison between arrays in Numeric 1 results in a Boolean scalar, because of current limitations in Python. The advent of Rich Comparisons in Python 2.1 will allow an array of Booleans to be returned. 5. Type characters are depricated Numeric 2 will have an ArrayType class composed of Type instances, for example Int8, Int16, Int32, and Int for signed integers. The typecode scheme in Numeric 1 will be available for backward compatibility, but will be depricated. Appendices A. Implicit sub-arrays iteration A computer animation is composed of a number of 2-D images or frames of identical shape. By stacking these images into a single block of memory, a 3-D array is created. Yet the operations to be performed are not meant for the entire 3-D array, but on the set of 2-D sub-arrays. In most array languages, each frame has to be extracted, operated on, and then reinserted into the output array using a for-like loop. The J language allows the programmer to perform such operations implicitly by having a rank for the frame and array. By default these ranks will be the same during the creation of the array. It was the intention of the Numeric 1 developers to implement this feature, since it is based on the language J. The Numeric 1 code has the required variables for implementing this behavior, but was never implemented. 
We intend to implement implicit sub-array iteration in Numeric 2, if the array broadcasting rules found in Numeric 1 do not fully support this behavior. Copyright This document is placed in the public domain. Related PEPs PEP 207: Rich Comparisons by Guido van Rossum and David Ascher PEP 208: Reworking the Coercion Model by Neil Schemenauer and Marc-Andre' Lemburg PEP 211: Adding New Linear Algebra Operators to Python by Greg Wilson PEP 225: Elementwise/Objectwise Operators by Huaiyu Zhu PEP 228: Reworking Python's Numeric Model by Moshe Zadka References [1] P. Greenfield 2000. private communication. From fdrake at acm.org Fri Feb 9 04:51:34 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Thu, 8 Feb 2001 22:51:34 -0500 (EST) Subject: [Python-Dev] PEP announcements, and summaries In-Reply-To: <20010205141139.K733@thrak.cnri.reston.va.us> References: <3A7EF1A0.EDA4AD24@lemburg.com> <20010205141139.K733@thrak.cnri.reston.va.us> Message-ID: <14979.26950.415841.24705@cj42289-a.reston1.va.home.com> Andrew Kuchling writes: > * Work on the Batteries Included proposals & required infrastructure I'd certainly like to see some machinery that allows us to incorporate arbitrary distutils-based packages in Python source and binary distributions and have them built, tested, and installed alongside the interpreter core. I think this would be the right approach to deal with many components, including the XML and curses components. -Fred -- Fred L. Drake, Jr. PythonLabs at Digital Creations From moshez at zadka.site.co.il Fri Feb 9 11:35:33 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Fri, 9 Feb 2001 12:35:33 +0200 (IST) Subject: [Python-Dev] PEP announcements, and summaries In-Reply-To: <14979.26950.415841.24705@cj42289-a.reston1.va.home.com> References: <14979.26950.415841.24705@cj42289-a.reston1.va.home.com>, <3A7EF1A0.EDA4AD24@lemburg.com> <20010205141139.K733@thrak.cnri.reston.va.us> Message-ID: <20010209103533.E7EA3A840@darjeeling.zadka.site.co.il> On Thu, 8 Feb 2001, "Fred L. Drake, Jr." wrote: > I'd certainly like to see some machinery that allows us to > incorporate arbitrary distutils-based packages in Python source and > binary distributions and have them built, tested, and installed > alongside the interpreter core. > I think this would be the right approach to deal with many > components, including the XML and curses components. You can take a look at PEP-0206. I'd appreciate any feedback! (And of course, come to the DevDay session) -- For public key: finger moshez at debian.org | gpg --import Debian - All the power, without the silly hat. From mal at lemburg.com Fri Feb 9 14:59:54 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 09 Feb 2001 14:59:54 +0100 Subject: [Python-Dev] Removing the implicit str() call from printing API Message-ID: <3A83F7DA.A94AB88E@lemburg.com> There was some discussion about this subject before, but nothing much happened, so here we go again... Printing in Python is a rather complicated task. It involves many different APIs, flags, etc. Deep down in the printing machinery there is a hidden call to str() which converts the to be printed object into a string object. This is fine for non-string objects like numbers, but causes trouble when it comes to printing Unicode objects due to the auto-conversions this causes. 
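[Editor's illustration] For concreteness, the kind of output stream such a change is meant to enable might look roughly like the following sketch. EncodedWriter is an invented name, not the SF patch; it simply encodes any Unicode object before passing it to the underlying byte stream.

    import sys, types

    class EncodedWriter:
        # Wraps a byte stream; encodes Unicode handed to write().
        def __init__(self, stream, encoding):
            self.stream = stream
            self.encoding = encoding
        def write(self, text):
            if type(text) is types.UnicodeType:
                text = text.encode(self.encoding)
            self.stream.write(text)

    # If print passed Unicode straight through to write(), this would suffice:
    #     sys.stdout = EncodedWriter(sys.stdout, 'utf-8')
    #     print u'...'
    # Today the hidden str() call converts (or fails) before write() is reached.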
There is a patch on SF which tries to remedy this, but it introduces a special attribute to maintain backward compatibility: http://sourceforge.net/patch/?func=detailpatch&patch_id=103685&group_id=5470 I don't really like the idea of adding such an attribute to the file object. Instead, I think that we should simply pass along Unicode objects as-is to the file object's .write() method and have the method take care of the conversion. This will break some code, since not all file-like objects expect non-strings as input to the .write() method, but I think this small code breakage is worth it as it allows us to redirect printing to streams which convert Unicode input into a specific output encoding. Thoughts ? -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From Barrett at stsci.edu Fri Feb 9 16:45:50 2001 From: Barrett at stsci.edu (Paul Barrett) Date: Fri, 9 Feb 2001 10:45:50 -0500 (EST) Subject: [Python-Dev] A Numerical Python BoF at Python 9 Message-ID: <14980.2832.659186.913578@nem-srvr.stsci.edu> I've been encouraged to set up a BoF at Python 9 to discuss Numerical Python issues, specifically the design and implementation of Numeric 2. I'd like to get a head count of those interested in attending such a BoF. So far there are 3 of us at STScI who are interested. -- Dr. Paul Barrett Space Telescope Science Institute Phone: 410-338-4475 ESS/Science Software Group FAX: 410-338-4767 Baltimore, MD 21218 From tiemann at redhat.com Fri Feb 9 16:53:53 2001 From: tiemann at redhat.com (Michael Tiemann) Date: Fri, 09 Feb 2001 10:53:53 -0500 Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta References: Message-ID: <3A841291.CAAAA3AD@redhat.com> Based on the responses I have seen, it appears that this is not the kind of issue we want to address in a .1 release. I talked with Matt Wilson, the most active Python developer here, and he's all for moving to 2.x for our next .0 product, but for compatibility reasons it sounds like the option of swapping 1.5 for 2.0 as python, or the requirement that both 1.5 and 2.x need to be on the core OS CD (which is always short of space) is problematic. OTOH, if somebody can make a really definitive statement that I've misinterpreted the responses, and that 2.x _as_ python should just work, and if it doesn't, it's a bug that needs to shake out, I can address that with our OS team. M Sean 'Shaleh' Perry wrote: > > On 07-Feb-2001 Moshe Zadka wrote: > > On Wed, 07 Feb 2001 02:39:11 -0500, Guido van Rossum > > wrote: > >> The binaries should be called python1.5 and python2.0, and python > >> should be a symlink to whatever is the default one. > > > > No they shouldn't. Joey Hess wrote to debian-python about the problems > > such a scheme caused when Perl5.005 and Perl 5.6 tried to coexist. > > Guido, the problem lies in the fact that we have no default. The user may install only 2.x > or 1.5. Scripts that handle the symlink can fail and then the user is left > without a python. In the case where only one is installed, this is easy. > However, in a packaged system where any number of pythons could exist, problems > arise. > > Now, the problem with perl was a bad one because the thing in charge of the > symlink was itself a perl script.
From nas at python.ca Fri Feb 9 17:21:36 2001 From: nas at python.ca (Neil Schemenauer) Date: Fri, 9 Feb 2001 08:21:36 -0800 Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta In-Reply-To: <3A841291.CAAAA3AD@redhat.com>; from tiemann@redhat.com on Fri, Feb 09, 2001 at 10:53:53AM -0500 References: <3A841291.CAAAA3AD@redhat.com> Message-ID: <20010209082136.A15525@glacier.fnational.com> On Fri, Feb 09, 2001 at 10:53:53AM -0500, Michael Tiemann wrote: > OTOH, if somebody can make a really definitive statement that I've > misinterpreted the responses, and that 2.x _as_ python should just work, > and if it doesn't, it's a bug that needs to shake out, I can address that > with our OS team. I'm not sure what you mean by "should just work". Source compatibility between 1.5.2 and 2.0 is very high. The 2.0 NEWS file should list all the changes (single argument append and socket addresses are the big ones). The two versions are _not_ binary compatible. Python bytecode and extension modules have to be recompiled. I don't know if this is a problem for the Red Hat 7.1 release. Neil From esr at thyrsus.com Fri Feb 9 17:30:17 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Fri, 9 Feb 2001 11:30:17 -0500 Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta In-Reply-To: <20010209082136.A15525@glacier.fnational.com>; from nas@python.ca on Fri, Feb 09, 2001 at 08:21:36AM -0800 References: <3A841291.CAAAA3AD@redhat.com> <20010209082136.A15525@glacier.fnational.com> Message-ID: <20010209113017.A13505@thyrsus.com> Neil Schemenauer : > I'm not sure what you mean by "should just work". Source > compatibility between 1.5.2 and 2.0 is very high. The 2.0 NEWS > file should list all the changes (single argument append and > socket addresses are the big ones). And that change only affected a misfeature that was never documented and has been deprecated for some time. -- Eric S. Raymond No kingdom can be secured otherwise than by arming the people. The possession of arms is the distinction between a freeman and a slave. -- "Political Disquisitions", a British republican tract of 1774-1775 From fredrik at pythonware.com Fri Feb 9 17:37:16 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Fri, 9 Feb 2001 17:37:16 +0100 Subject: [Python-Dev] PEPS, version control, release intervals References: <200102080726.AAA27240@localhost.localdomain> Message-ID: <0aab01c092b6$917e4a90$e46940d5@hagrid> Uche Ogbuji wrote: > I don't know the tale of SOAP. soaplib seems stuck at 0.8. it's stuck on 0.9.5, which is stuck in a perforce repository, waiting for more interoperability testing. real soon now... Cheers /F From mal at lemburg.com Fri Feb 9 18:05:15 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 09 Feb 2001 18:05:15 +0100 Subject: [Python-Dev] Making the __import__ hook available early... Message-ID: <3A84234B.A7417A93@lemburg.com> There has been some discussion on the import-sig about using the __import__ hook for practically all imports, even early in the startup phase. This allows import hooks to completely take over the import mechanism even for the Python standard lib. Thomas Heller has provided a patch which I am currently checking. Basically all C level imports using PyImport_ImportModule() are then redirected to PyImport_Import() which uses the __import__ hook if available. My testing has so far not produced any strange effects. If anyone objects to this change, please speak up. Else, I'll check it in later today. 
-- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From thomas.heller at ion-tof.com Fri Feb 9 18:20:55 2001 From: thomas.heller at ion-tof.com (Thomas Heller) Date: Fri, 9 Feb 2001 18:20:55 +0100 Subject: [Python-Dev] Making the __import__ hook available early... References: <3A84234B.A7417A93@lemburg.com> Message-ID: <024a01c092bc$a903f650$e000a8c0@thomasnotebook> > There has been some discussion on the import-sig about using > the __import__ hook for practically all imports, even early > in the startup phase. This allows import hooks to completely take > over the import mechanism even for the Python standard lib. > > Thomas Heller has provided a patch which I am currently checking. > Basically all C level imports using PyImport_ImportModule() > are then redirected to PyImport_Import() which uses the __import__ > hook if available. > > My testing has so far not produced any strange effects. If anyone > objects to this change, please speak up. Else, I'll check it in later > today. One remaining difference I noted between running 'rt.bat -d' with the CVS version and the patched version is that the former reported [56931 refs] and the latter [56923 refs]. Thomas From mal at lemburg.com Fri Feb 9 18:35:56 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 09 Feb 2001 18:35:56 +0100 Subject: [Python-Dev] Making the __import__ hook available early... References: <3A84234B.A7417A93@lemburg.com> <024a01c092bc$a903f650$e000a8c0@thomasnotebook> Message-ID: <3A842A7C.46263743@lemburg.com> Thomas Heller wrote: > > > There has been some discussion on the import-sig about using > > the __import__ hook for practically all imports, even early > > in the startup phase. This allows import hooks to completely take > > over the import mechanism even for the Python standard lib. > > > > Thomas Heller has provided a patch which I am currently checking. > > Basically all C level imports using PyImport_ImportModule() > > are then redirected to PyImport_Import() which uses the __import__ > > hook if available. > > > > My testing has so far not produced any strange effects. If anyone > > objects to this change, please speak up. Else, I'll check it in later > > today. > > One remaining difference I noted between running 'rt.bat -d' with > the CVS version and the patched version is that the former > reported [56931 refs] and the latter [56923 refs]. This is probably due to the interning of strings; nothing to worry about, I guess. The patch implements the same refcounting as before the patch, so it is clearly not the cause of the different figures. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From jeremy at alum.mit.edu Fri Feb 9 18:45:04 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 9 Feb 2001 12:45:04 -0500 (EST) Subject: [Python-Dev] PEP status and python-dev summaries Message-ID: <14980.11424.134036.495048@w221.z064000254.bwi-md.dsl.cnc.net> I just scanned the responses on comp.lang.python to Andrew's announcement that he would stop writing the python-dev summaries. The respondents indicated that they found it hard to keep track of what was going on with python development, particularly PEPs. We're still learning how to use the PEP process.
It's been better for 2.1 than for 2.0, but still has some problems. It sounds like the key problem has been involving the community outside python-dev. I would suggest a couple of changes, with the burden mostly falling on Barry and me: - Regular announcements of PEP creation and PEP status changes should be posted to comp.lang.python and c.l.p.a. - The release status PEPs should be kept up to date and regularly posted to the same groups. - We should have highly visible pointers from python.org to PEPs and other python development information. I'm sure we do this as part of the Zopification plans that Guido mentioned. - We should not approve PEPs that aren't announced on comp.lang.python with enough time for people to comment. Jeremy From skip at mojam.com Fri Feb 9 19:08:05 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 9 Feb 2001 12:08:05 -0600 (CST) Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta In-Reply-To: <20010209113017.A13505@thyrsus.com> References: <3A841291.CAAAA3AD@redhat.com> <20010209082136.A15525@glacier.fnational.com> <20010209113017.A13505@thyrsus.com> Message-ID: <14980.12805.682859.719700@beluga.mojam.com> Eric> Neil Schemenauer : >> I'm not sure what you mean by "should just work". Source >> compatibility between 1.5.2 and 2.0 is very high. The 2.0 NEWS file >> should list all the changes (single argument append and socket >> addresses are the big ones). Eric> And that change only affected a misfeature that was never Eric> documented and has been deprecated for some time. Perhaps, but it had worked "forever". In fact, I seems to recall that example code in the Python distribution used the two-argument connect call for sockets. Skip From akuchlin at mems-exchange.org Fri Feb 9 20:35:26 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Fri, 09 Feb 2001 14:35:26 -0500 Subject: [Python-Dev] dl module Message-ID: The dl module isn't automatically compiled by setup.py, and at least one patch on SourceForge adds it. Question: should it be compiled as a standard module? Using it can, according to the comments, cause core dumps if you're not careful. Question: does anyone actually use the dl module? If not, maybe it could be dropped. --amk From mal at lemburg.com Fri Feb 9 20:46:01 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 09 Feb 2001 20:46:01 +0100 Subject: [Python-Dev] PEP announcements, and summaries References: <3A7EF1A0.EDA4AD24@lemburg.com> <20010205141139.K733@thrak.cnri.reston.va.us> <14979.26950.415841.24705@cj42289-a.reston1.va.home.com> Message-ID: <3A8448F9.DCACBBBB@lemburg.com> "Fred L. Drake, Jr." wrote: > > Andrew Kuchling writes: > > * Work on the Batteries Included proposals & required infrastructure > > I'd certainly like to see some machinery that allows us to > incorporate arbitrary distutils-based packages in Python source and > binary distributions and have them built, tested, and installed > alongside the interpreter core. > I think this would be the right approach to deal with many > components, including the XML and curses components. Good idea... but then I've made the experience that different tools need different distutils command interfaces, e.g. my mx tools will use customized commands which provide extra functionality (e.g. some auto-configuration code) which is not present in the standard distutils distro. As a result we will have a common interface point (setup.py), but not necessarily the same commands and/or options. 
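[Editor's illustration] As a rough sketch of what such a customized setup.py can look like (the command and package names below are invented, not taken from the mx tools):

    from distutils.core import setup
    from distutils.command.build import build

    class my_build(build):
        # A customized 'build' command: run some auto-configuration
        # step first, then hand over to the stock distutils build.
        def run(self):
            self.announce("running auto-configuration")
            # ... probe the platform, write a config header, etc. ...
            build.run(self)

    setup(name="example-package",
          version="0.1",
          cmdclass={"build": my_build})

The common interface point is still "python setup.py build" / "install"; only the behavior behind the command differs from package to package.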
Still, this situation is already *much* better than having different install mechanisms altogether. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mal at lemburg.com Fri Feb 9 20:54:17 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 09 Feb 2001 20:54:17 +0100 Subject: [Python-Dev] dl module References: Message-ID: <3A844AE9.AE2DD04@lemburg.com> Andrew Kuchling wrote: > > The dl module isn't automatically compiled by setup.py, and at least > one patch on SourceForge adds it. > > Question: should it be compiled as a standard module? Using it can, > according to the comments, cause core dumps if you're not careful. > > Question: does anyone actually use the dl module? If not, maybe it > could be dropped. For Windows there's a similar package (calldll I think it is called). Perhaps someone should take over maintenance for it and then make it available via Parnassus ?! The same could be done for e.g. soundex and other deprecated modules. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From guido at digicool.com Fri Feb 9 20:58:58 2001 From: guido at digicool.com (Guido van Rossum) Date: Fri, 09 Feb 2001 14:58:58 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/plat-irix5 cddb.py,1.10,1.11 In-Reply-To: Your message of "Fri, 09 Feb 2001 14:39:36 EST." <20010209143936.B3340@thrak.cnri.reston.va.us> References: <20010209143936.B3340@thrak.cnri.reston.va.us> Message-ID: <200102091958.OAA23039@cj20424-a.reston1.va.home.com> > On Fri, Feb 09, 2001 at 08:44:51AM -0800, Eric S. Raymond wrote: > >String method conversion. Andrew replied: > Regarding the large number of string method conversion check-ins: I > presume this is something else you discussed at LWE with Guido. Was > there anything else discussed that python-dev should know about, or > can help with? This was Eric's own initiative -- I was just as surprised as you, given the outcome of the last discussion on python-dev specifically about this. However, I don't mind that it's done, as long as there's no code breakage. Clearly, Eric went a bit fast for some modules (checking in syntax errors :-). --Guido van Rossum (home page: http://www.python.org/~guido/) From esr at thyrsus.com Fri Feb 9 21:03:29 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Fri, 9 Feb 2001 15:03:29 -0500 Subject: [Python-Dev] Curious comment in some old libraries Message-ID: <20010209150329.A15086@thyrsus.com> Pursuant to a conversation Guido and I had in New York, I have gone through and converted the Python library code to use string methods wherever possible, removing a whole boatload of "import string" statements in the process. (This is one of those times when it's a really, *really* good thing that most modules have an attached self-test. I supplied a couple of these where they were lacking, and improved several of the existing test jigs.) One side-effect of the change is that nothing in the library uses splitfields or joinfields anymore. But in the process I found a curious contradiction: stringold.py:# NB: split(s) is NOT the same as splitfields(s, ' ')! stringold.py: (split and splitfields are synonymous) stringold.py:splitfields = split string.py:# NB: split(s) is NOT the same as splitfields(s, ' ')! 
string.py: (split and splitfields are synonymous) string.py:splitfields = split It certainly looks to me as though the "NB" comment is out of date. Is there some subtle and wicked reason it has not been removed? -- Eric S. Raymond This would be the best of all possible worlds, if there were no religion in it. -- John Adams, in a letter to Thomas Jefferson. From tim.one at home.com Fri Feb 9 21:04:15 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 9 Feb 2001 15:04:15 -0500 Subject: [Python-Dev] RE: global, was Re: None assigment In-Reply-To: <961fg0$etd$1@nnrp1.deja.com> Message-ID: [Jeremy Hylton] > As Tim will explain in a post that hasn't made it to DejaNews yet, > earlier versions of Python did not define Neither does 2.1: changing the implementation didn't change the Ref Man, and the Ref Man still declines to define the semantics or promise that the behavior today will persist tomorrow. > the behavior of assignment Or any other reference. > before a global statement. > ... > It's unclear what we should happen in this case. It could be an error, > since it's dodgy and the behavior will change with 2.1. "Undefined behavior" is unPythonic and should be wiped out whenever possible. That these things were dodgy was known from the start, but when the language was just getting off the ground there were far more important things to do than generate errors for every conceivable abuse of the language. Now that the language is still getting off the ground , that's still true. But changes in the meantime have made it much easier to identify some of these cases; like: > The recent round of compiler changes uses separate passes to determine a > name's scope and to generate code for loads and stores. The behavior of "global x" after a reference to x has never been defined, but it's never been reasonably easy to identify and complain about it. Now that name classification is done by design instead of by an afterthought "optimization pass", it should be much easier to gripe. +1 on making this an error now. And if 2.1 is relaxed to again allow "import *" at function scope in some cases, either that should at least raise a warning, or the Ref Man should be changed to say that's a defined use of the language. ambiguity-sucks-ly y'rs - tim From akuchlin at cnri.reston.va.us Fri Feb 9 21:04:54 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Fri, 9 Feb 2001 15:04:54 -0500 Subject: [Python-Dev] Curious comment in some old libraries In-Reply-To: <20010209150329.A15086@thyrsus.com>; from esr@thyrsus.com on Fri, Feb 09, 2001 at 03:03:29PM -0500 References: <20010209150329.A15086@thyrsus.com> Message-ID: <20010209150454.E3340@thrak.cnri.reston.va.us> On Fri, Feb 09, 2001 at 03:03:29PM -0500, Eric S. Raymond wrote: >Pursuant to a conversation Guido and I had in New York, I have gone through >string.py:# NB: split(s) is NOT the same as splitfields(s, ' ')! > >It certainly looks to me as though the "NB" comment is out of date. >Is there some subtle and wicked reason it has not been removed? Actually I think it's correct: >>> import string >>> string.split('a b c') ['a', 'b', 'c'] >>> string.split('a b c', ' ') ['a', '', 'b', 'c'] With no separator, it splits on runs of whitespace. With an explicit separator, it splits on *exactly* that separator. --amk From fdrake at acm.org Fri Feb 9 21:03:13 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) 
Date: Fri, 9 Feb 2001 15:03:13 -0500 (EST) Subject: [Python-Dev] Curious comment in some old libraries In-Reply-To: <20010209150329.A15086@thyrsus.com> References: <20010209150329.A15086@thyrsus.com> Message-ID: <14980.19713.280194.344112@cj42289-a.reston1.va.home.com> Eric S. Raymond writes: > string.py:# NB: split(s) is NOT the same as splitfields(s, ' ')! > string.py: (split and splitfields are synonymous) > string.py:splitfields = split > > It certainly looks to me as though the "NB" comment is out of date. > Is there some subtle and wicked reason it has not been removed? The comment is correct. splitfields(s) is synonymous with split(s), and splitfields(s, ' ') is synonymous with split(s, ' '). If the second arg is omitted, any stretch of whitespace is used as the separator, but if ' ' is supplied, exactly one space is used to split fields. split(s, None) is synonymous with split(s), splitfields(s), and splitfields(s, None). -Fred -- Fred L. Drake, Jr. PythonLabs at Digital Creations From guido at digicool.com Fri Feb 9 21:08:11 2001 From: guido at digicool.com (Guido van Rossum) Date: Fri, 09 Feb 2001 15:08:11 -0500 Subject: [Python-Dev] Curious comment in some old libraries In-Reply-To: Your message of "Fri, 09 Feb 2001 15:03:29 EST." <20010209150329.A15086@thyrsus.com> References: <20010209150329.A15086@thyrsus.com> Message-ID: <200102092008.PAA23192@cj20424-a.reston1.va.home.com> > Pursuant to a conversation Guido and I had in New York, I have gone > through and converted the Python library code to use string methods > wherever possible, removing a whole boatload of "import string" > statements in the process. (But note that I didn't ask you to go ahead and do it. Last time when I started doing this I got quite a few comments from python-dev readers who thought it was a bad idea, so I backed off. It's up to you to convince them now. :-) > (This is one of those times when it's a really, *really* good thing that > most modules have an attached self-test. I supplied a couple of these > where they were lacking, and improved several of the existing test jigs.) Excellent! > One side-effect of the change is that nothing in the library uses splitfields > or joinfields anymore. But in the process I found a curious contradiction: > > stringold.py:# NB: split(s) is NOT the same as splitfields(s, ' ')! > stringold.py: (split and splitfields are synonymous) > stringold.py:splitfields = split > string.py:# NB: split(s) is NOT the same as splitfields(s, ' ')! > string.py: (split and splitfields are synonymous) > string.py:splitfields = split > > It certainly looks to me as though the "NB" comment is out of date. > Is there some subtle and wicked reason it has not been removed? Well, split and splitfields really *are* synonymous, but split(s, ' ') is *not* the same as split(s). The latter is the same as split(s, None) which splits on stretches of arbitrary whitespace and ignores leading and trailing whitespace. So the NB is still true... --Guido van Rossum (home page: http://www.python.org/~guido/) From barry at digicool.com Fri Feb 9 21:15:47 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 9 Feb 2001 15:15:47 -0500 Subject: [Python-Dev] Curious comment in some old libraries References: <20010209150329.A15086@thyrsus.com> Message-ID: <14980.20467.174809.644067@anthem.wooz.org> >>>>> "ESR" == Eric S Raymond writes: ESR> It certainly looks to me as though the "NB" comment is out of ESR> date. Is there some subtle and wicked reason it has not been ESR> removed? Look at stropmodule.c. 
split and splitfields have been identical at least since 08-Aug-1996. :) -------------------- snip snip -------------------- revision 2.23 date: 1996/08/08 19:16:15; author: guido; state: Exp; lines: +93 -17 Added lstrip() and rstrip(). Extended split() (and hence splitfields(), which is the same function) to support an optional third parameter giving the maximum number of delimiters to parse. -------------------- snip snip -------------------- -Barry From tim.one at home.com Fri Feb 9 21:19:25 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 9 Feb 2001 15:19:25 -0500 Subject: [Python-Dev] Curious comment in some old libraries In-Reply-To: <20010209150329.A15086@thyrsus.com> Message-ID: [Eric S. Raymond] > ... > But in the process I found a curious contradiction: > > stringold.py:# NB: split(s) is NOT the same as splitfields(s, ' ')! > stringold.py: (split and splitfields are synonymous) > stringold.py:splitfields = split > string.py:# NB: split(s) is NOT the same as splitfields(s, ' ')! > string.py: (split and splitfields are synonymous) > string.py:splitfields = split > > It certainly looks to me as though the "NB" comment is out of date. > Is there some subtle and wicked reason it has not been removed? It's 100% accurate, but 99% misleading. Plain 100% accurate would be: # NB: split(s) is NOT the same as split(s, ' '). # And, by the way, since split is the same as splitfields, # it follows that # split(s) is NOT the same as splitfields(s, ' '). # either. Even better is to get rid of the NB comments, so I just did that. Thanks for pointing it out! From esr at thyrsus.com Fri Feb 9 21:23:35 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Fri, 9 Feb 2001 15:23:35 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/plat-irix5 cddb.py,1.10,1.11 In-Reply-To: <200102091958.OAA23039@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Fri, Feb 09, 2001 at 02:58:58PM -0500 References: <20010209143936.B3340@thrak.cnri.reston.va.us> <200102091958.OAA23039@cj20424-a.reston1.va.home.com> Message-ID: <20010209152335.C15205@thyrsus.com> Guido van Rossum : > Clearly, Eric went a bit fast for some modules > (checking in syntax errors :-). It was the oddest thing. The conversion was so mechanical that I found my attention wandering -- the result (as I noted in a couple of checkin comments) was that I occasionally hit ^C^C and triggered the commit a step too early. Sometimes Emacs makes things too easy! There were a couple of platform-specific modules I couldn't test completely, stuff like the two cddb.py versions. Other than that I'm pretty sure I didn't break anything. Where the test jigs looked lacking I beefed them up some. The only string imports left are the ones that have to be there because the code is using a string module constant like string.whitespace or one of the two odd functions that don't exist as methods, zfill and maketrans. Are there any plans to introduce boolean-valued string methods corresponding to the ctype.h functions? That would make it possible to remove most of the remaining imports. This was like old times. pulling an all-nighter to clean up a language library. I did a *lot* of work like this on Emacs back in the early 1990s. Count your blessings; the Python libraries are in far better shape. -- Eric S. Raymond Certainly one of the chief guarantees of freedom under any government, no matter how popular and respected, is the right of the citizens to keep and bear arms. [...] 
the right of the citizens to bear arms is just one guarantee against arbitrary government and one more safeguard against a tyranny which now appears remote in America, but which historically has proved to be always possible. -- Hubert H. Humphrey, 1960 From guido at digicool.com Fri Feb 9 21:27:16 2001 From: guido at digicool.com (Guido van Rossum) Date: Fri, 09 Feb 2001 15:27:16 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib/plat-irix5 cddb.py,1.10,1.11 In-Reply-To: Your message of "Fri, 09 Feb 2001 15:23:35 EST." <20010209152335.C15205@thyrsus.com> References: <20010209143936.B3340@thrak.cnri.reston.va.us> <200102091958.OAA23039@cj20424-a.reston1.va.home.com> <20010209152335.C15205@thyrsus.com> Message-ID: <200102092027.PAA23403@cj20424-a.reston1.va.home.com> > The only string imports left are the ones that have to be there because > the code is using a string module constant like string.whitespace or > one of the two odd functions that don't exist as methods, zfill and > maketrans. Are there any plans to introduce boolean-valued string > methods corresponding to the ctype.h functions? That would make > it possible to remove most of the remaining imports. Yes, these already exist, e.g. s.islower(), s.isspace(). Note that they are locale dependent. > This was like old times. pulling an all-nighter to clean up a language > library. I did a *lot* of work like this on Emacs back in the early > 1990s. Count your blessings; the Python libraries are in far better > shape. Thanks! --Guido van Rossum (home page: http://www.python.org/~guido/) From fredrik at effbot.org Fri Feb 9 21:45:50 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Fri, 9 Feb 2001 21:45:50 +0100 Subject: [Python-Dev] Curious comment in some old libraries References: <20010209150329.A15086@thyrsus.com> <200102092008.PAA23192@cj20424-a.reston1.va.home.com> Message-ID: <00e401c092d9$4aaa30c0$e46940d5@hagrid> guido wrote: > (But note that I didn't ask you to go ahead and do it. Last time when > I started doing this I got quite a few comments from python-dev > readers who thought it was a bad idea, so I backed off. It's up to > you to convince them now. :-) footnote: SRE is designed to work (and is being used) under 1.5.2. since I'd rather not maintain two separate versions, I hope it's okay to back out of some of eric's changes... Cheers /F From guido at digicool.com Fri Feb 9 21:46:45 2001 From: guido at digicool.com (Guido van Rossum) Date: Fri, 09 Feb 2001 15:46:45 -0500 Subject: [Python-Dev] Curious comment in some old libraries In-Reply-To: Your message of "Fri, 09 Feb 2001 21:45:50 +0100." <00e401c092d9$4aaa30c0$e46940d5@hagrid> References: <20010209150329.A15086@thyrsus.com> <200102092008.PAA23192@cj20424-a.reston1.va.home.com> <00e401c092d9$4aaa30c0$e46940d5@hagrid> Message-ID: <200102092046.PAA23571@cj20424-a.reston1.va.home.com> > footnote: SRE is designed to work (and is being used) > under 1.5.2. since I'd rather not maintain two separate > versions, I hope it's okay to back out of some of eric's > changes... Fine. Please add a comment to the "import string" statement to explain this! 
--Guido van Rossum (home page: http://www.python.org/~guido/) From thomas.heller at ion-tof.com Fri Feb 9 21:48:52 2001 From: thomas.heller at ion-tof.com (Thomas Heller) Date: Fri, 9 Feb 2001 21:48:52 +0100 Subject: [Python-Dev] Curious comment in some old libraries References: <20010209150329.A15086@thyrsus.com> <200102092008.PAA23192@cj20424-a.reston1.va.home.com> <00e401c092d9$4aaa30c0$e46940d5@hagrid> Message-ID: <04b601c092d9$b5f2ca40$e000a8c0@thomasnotebook> > > footnote: SRE is designed to work (and is being used) > under 1.5.2. since I'd rather not maintain two separate > versions, I hope it's okay to back out of some of eric's > changes... The same is documented for distutils... Thomas From esr at thyrsus.com Fri Feb 9 22:17:18 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Fri, 9 Feb 2001 16:17:18 -0500 Subject: [Python-Dev] Curious comment in some old libraries In-Reply-To: <00e401c092d9$4aaa30c0$e46940d5@hagrid>; from fredrik@effbot.org on Fri, Feb 09, 2001 at 09:45:50PM +0100 References: <20010209150329.A15086@thyrsus.com> <200102092008.PAA23192@cj20424-a.reston1.va.home.com> <00e401c092d9$4aaa30c0$e46940d5@hagrid> Message-ID: <20010209161718.F15205@thyrsus.com> Fredrik Lundh : > footnote: SRE is designed to work (and is being used) > under 1.5.2. since I'd rather not maintain two separate > versions, I hope it's okay to back out of some of eric's > changes... Not a problem for me. -- Eric S. Raymond It will be of little avail to the people, that the laws are made by men of their own choice, if the laws be so voluminous that they cannot be read, or so incoherent that they cannot be understood; if they be repealed or revised before they are promulgated, or undergo such incessant changes that no man, who knows what the law is to-day, can guess what it will be to-morrow. Law is defined to be a rule of action; but how can that be a rule, which is little known, and less fixed? -- James Madison, Federalist Papers 62 From tim.one at home.com Fri Feb 9 23:07:43 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 9 Feb 2001 17:07:43 -0500 Subject: [Python-Dev] Making the __import__ hook available early... In-Reply-To: <3A84234B.A7417A93@lemburg.com> Message-ID: [M.-A. Lemburg] > There has been some discussion on the import-sig about using > the __import__ hook for practically all imports, even early > in the startup phase. This allows import hooks to completely take > over the import mechanism even for the Python standard lib. > > Thomas Heller has provided a patch which I am currently checking. > Basically all C level imports using PyImport_ImportModule() > are then redirected to PyImport_Import() which uses the __import__ > hook if available. > > My testing has so far not produced any strange effects. If anyone > objects to this change, please speak up. Else, I'll check it in > later today. I don't understand the change, from the above. Neither exactly what it does nor why it's being done. So, impossible to say. Was the patch posted to SourceForge? Does it have a bad effect on startup time? Is there any *conceivable* way in which it could change semantics? Or, if not, what's the point? From skip at mojam.com Fri Feb 9 23:21:30 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 9 Feb 2001 16:21:30 -0600 (CST) Subject: [Python-Dev] dl module In-Reply-To: <3A844AE9.AE2DD04@lemburg.com> References: <3A844AE9.AE2DD04@lemburg.com> Message-ID: <14980.28010.224576.400800@beluga.mojam.com> MAL> The same could be done for e.g. soundex ... 
http://musi-cal.mojam.com/~skip/python/soundex.py S From mal at lemburg.com Fri Feb 9 23:32:14 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 09 Feb 2001 23:32:14 +0100 Subject: [Python-Dev] Making the __import__ hook available early... References: Message-ID: <3A846FEE.5BF5615A@lemburg.com> Tim Peters wrote: > > [M.-A. Lemburg] > > There has been some discussion on the import-sig about using > > the __import__ hook for practically all imports, even early > > in the startup phase. This allows import hooks to completely take > > over the import mechanism even for the Python standard lib. > > > > Thomas Heller has provided a patch which I am currently checking. > > Basically all C level imports using PyImport_ImportModule() > > are then redirected to PyImport_Import() which uses the __import__ > > hook if available. > > > > My testing has so far not produced any strange effects. If anyone > > objects to this change, please speak up. Else, I'll check it in > > later today. > > I don't understand the change, from the above. Neither exactly what it does > nor why it's being done. So, impossible to say. Was the patch posted to > SourceForge? Does it have a bad effect on startup time? Is there any > *conceivable* way in which it could change semantics? Or, if not, what's > the point? I've already checked it in, but for completeness ;-) ... The problem was that tools like Thomas Heller's pyexe, Gordon's installer and other similar tools which try to pack Python byte code into a single archive need to provide an import hook which then redirects imports to the archive. This was already well possible for third-party code, but some of the standard modules in the Python lib used PyImport_ImportModule() directly to import modules and this prevented the inclusion of the referenced modules in the archive. When no import hook is in place, the patch does not have any effect -- semantics are the same as before. Import performance for those few cases where PyImport_ImportModule() was used will be a tad slower, but probably negligable due to the overhead caused by the file IO. With the hook in place, the patch now properly redirects these low-level imports to the __import__ hook. Semantics will then be those which the __import__ hook defines. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From esr at thyrsus.com Fri Feb 9 23:51:52 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Fri, 9 Feb 2001 17:51:52 -0500 Subject: [Python-Dev] Propaganda of the deed and other topics In-Reply-To: <200102092008.PAA23192@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Fri, Feb 09, 2001 at 03:08:11PM -0500 References: <20010209150329.A15086@thyrsus.com> <200102092008.PAA23192@cj20424-a.reston1.va.home.com> Message-ID: <20010209175152.H15205@thyrsus.com> Guido van Rossum : > (But note that I didn't ask you to go ahead and do it. Last time when > I started doing this I got quite a few comments from python-dev > readers who thought it was a bad idea, so I backed off. It's up to > you to convince them now. :-) I'd forgotten that discussion. But, as a general comment... Propaganda of the deed, Guido. Sometimes this crew is too reflexively conservative for my taste. 
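[Editor's illustration] A minimal sketch of the kind of hook these tools install (ARCHIVE and archive_import are invented names; real packagers also deal with packages, frozen code objects, and so on):

    import __builtin__, imp, sys

    ARCHIVE = {}   # hypothetical mapping: module name -> module source code
    _original_import = __builtin__.__import__

    def archive_import(name, globals=None, locals=None, fromlist=None):
        # Serve the module from the archive if we have it, else fall back.
        try:
            return sys.modules[name]
        except KeyError:
            pass
        source = ARCHIVE.get(name)
        if source is None:
            return _original_import(name, globals, locals, fromlist)
        module = imp.new_module(name)
        sys.modules[name] = module
        exec source in module.__dict__
        return module

    __builtin__.__import__ = archive_import

Before the patch, imports issued from C via PyImport_ImportModule() bypassed such a hook, so the modules they name could not live in the archive; with the patch they go through PyImport_Import() and reach the hook like everything else.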
I have a repertoire of different responses when my desire to make progress collides with such conservatism; one of them, when I don't see substantive objections and believe I can deal with the political fallout more easily than living with the technical problem, is to just freakin' go ahead and *do* it. This makes some people nervous. That's OK with me -- I'd rather be seen as a bit of a loose cannon than just another lump of inertia. (If nothing else, I find the primate-territoriality reactions I get from the people I occasionally piss off entertaining to watch.) I pick my shots carefully, however, and as a result people usually conclude after the fact that this week's cowboy maneuver was a good thing even if they were a touch irritated with me at the time. In the particular case of the string-method cleanup, I did get the impression in New York that you wanted to attack this problem but for some reason felt you could not. I am strongly predisposed to be helpful in such situations, and let the chips fall where they may. So try not to be surprised if I do more stuff like this -- in fact, if you really don't want me to go cowboy on you occasionally you probably shouldn't talk about your wish-list in my presence. On the other hand, feel very free to reverse me and slap me down if I pull something that oversteps the bounds of prudence or politeness. Firstly, I'm not thin-skinned that way; nobody with my working style can afford to be. Secondly, as the BDFL you have both the right and the responsibility to rein me in; if I weren't cool with that I wouldn't be here. > > (This is one of those times when it's a really, *really* good thing that > > most modules have an attached self-test. I supplied a couple of these > > where they were lacking, and improved several of the existing test jigs.) > > Excellent! One of the possible futures I see for myself in this group, if both of the library PEPs you and I have contemplated go through and become policy, is as Keeper Of The Libraries analogously to the way that Fred Drake is Keeper Of The Documentation. I would enjoy this role; if I grow into it, you can expect to see me do a lot more active maintainence of this kind. There's another level to this that I should try to explain...among the known hazards of being an international celebrity and famously successful project lead is that one can start to believe one is too good to do ordinary work. In order to prevent myself from become bogotified in this way, I try to have at least project going at all times in which I am a core contributor but *not* the top banana. And I deliberately look for a stable to muck out occasionally, as I did last night and as I would do on a larger scale if I were the library keeper. Python looks like being my `follower' project for the foreseeable future. Take that as a compliment, Guido, because it is meant as one both professionally and personally. This crew may be (probably is) the most tasteful, talented and mature development group I have ever had the privilege to work with. I still rue the fact that I couldn't get you guys to come work for VA... -- Eric S. Raymond Alcohol still kills more people every year than all `illegal' drugs put together, and Prohibition only made it worse. Oppose the War On Some Drugs! From tim.one at home.com Sat Feb 10 00:13:02 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 9 Feb 2001 18:13:02 -0500 Subject: [Python-Dev] Making the __import__ hook available early... 
In-Reply-To: <3A846FEE.5BF5615A@lemburg.com> Message-ID: [MAL] > I've already checked it in, but for completeness ;-) ... Thanks for the explanation. Sounds like a good idea to me too! From jeremy at alum.mit.edu Sat Feb 10 00:42:14 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 9 Feb 2001 18:42:14 -0500 (EST) Subject: [Python-Dev] Re: [Bug #131480] __test__() should auto-exec at compile time In-Reply-To: References: Message-ID: <14980.32854.34108.539649@w221.z064000254.bwi-md.dsl.cnc.net> I just closed the bug report quoted below with the following response: I don't agree that unit tests should run automatically. Nor do I think adding magic to the language to support unit tests is necessary when it is trivial to add some external mechanism. I guess this topic could be opened up for discussion if someone else disagrees with me. Regardless, though, it's too late for 2.1. Jeremy >>>>> ">" == noreply writes: >> Bug #131480, was updated on 2001-Feb-07 18:44 Here is a current >> snapshot of the bug. >> Details: We can make unit testing as simple as writing the test >> code! Everyone agrees that unit tests are worth while. Python >> does a great job removing tedium from the job of the programmer. >> Unit tests should run automatically. Here's a method everyone can >> agree to: >> Have the compiler check each module for a function with the >> special name '__test__' that takes no arguments. If it finds it, >> it calls it. >> The problem of unit testing divides easily into two pieces: How >> to create the code and how to execute the code. There are many >> options in creating the code but I have never seen any nice >> solutions to run the code automatically "if __name__ == >> '__main__':" >> doesn't count since you have to do something special to call the >> code i.e. >> run it as a script. There are of course ways to run the test >> code automatically but the ways I have figured out run it on >> every import (way too often especially for long tests). I >> imagine there is a way to check to see if the module is loaded >> from a .pyc file and execute test code accordingly but this seems >> a bit kludgy. Here are the benefits of compile time >> auto-execution: >> - Compatible with every testing framework. >> - Called always and only when it needs to be executed. >> - So simple even micro projects 'scripts' can take advantage >> Disadvantages: >> - Another special name, '__test__' >> - If there are more please tell me! >> I looked around the source-code and think I see the location >> where we can do this. It would be a piece of cake and the >> advantages far outweigh the disadvantages. If I get some support >> I'd love to incorporate the fix. >> Justin Shaw thomas.j.shaw at aero.org From jeremy at alum.mit.edu Sat Feb 10 01:28:12 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 9 Feb 2001 19:28:12 -0500 (EST) Subject: [Python-Dev] Python 2.1 release schedule Message-ID: <14980.35612.516421.741505@w221.z064000254.bwi-md.dsl.cnc.net> I updated the Python 2.1 release schedule (PEP 226): http://python.sourceforge.net/peps/pep-0226.html The schedule now has some realistic future release dates. The plan is to move to beta 1 before the Python conference, probably issue a second beta in mid- to late-March, and aim for a final release sometime in April. The six-week period between first beta and final release is about as long as the beta period for 2.0, which had many more significant changes. I have also added a section on open issues as we had in the 2.0 release schedule.
If you are responsible for any major changes or fixes before the first beta, please add them to that section or send me mail about them. Remember that we are in feature freeze; only bug fixes between now and beta 1. Jeremy From tim.one at home.com Sat Feb 10 01:18:54 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 9 Feb 2001 19:18:54 -0500 Subject: [Python-Dev] Re: [Bug #131480] __test__() should auto-exec at compile time In-Reply-To: <14980.32854.34108.539649@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: [Jeremy Hylton] > I just closed the bug report quoted below with the following response: > > I don't agree that unit tests should run automatically. Nor do I > think adding magic to the language to support unit tests is > necessary when it is trivial to add some external mechanism. > > I guess this topic could be opened up for discussion if someone else > disagrees with me. Regardless, though, it's too late for 2.1. Justin had earlier brought this up on Python-Help. I'll attach a nice PDF doc he sent with more detail than the bug report. I had asked him to consider a PEP and have a public debate first; don't know why he filed a bug report instead; I recall I got more email about this, but it's so far down the stack now I'm not sure I'll ever find it again . FWIW, I don't believe we should make this magical either, and there are practical problems that were overlooked; e.g., when Lib/ is on a read-only filesystem, Python *always* recompiles the libraries upon import. Not insurmountable, but again points out the need for open debate first. Justin, take it up on comp.lang.python. -------------- next part -------------- A non-text attachment was scrubbed... Name: IntegratedUnitTesting.pdf Type: application/pdf Size: 98223 bytes Desc: not available URL: From fdrake at acm.org Sat Feb 10 04:09:58 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Fri, 9 Feb 2001 22:09:58 -0500 (EST) Subject: [Python-Dev] dl module In-Reply-To: <14980.28010.224576.400800@beluga.mojam.com> References: <3A844AE9.AE2DD04@lemburg.com> <14980.28010.224576.400800@beluga.mojam.com> Message-ID: <14980.45318.877412.703109@cj42289-a.reston1.va.home.com> Skip Montanaro writes: > MAL> The same could be done for e.g. soundex ... > > http://musi-cal.mojam.com/~skip/python/soundex.py Given that Skip has published this module and that the C version can always be retrieved from CVS if anyone really wants it, and that soundex has been listed in the "Obsolete Modules" section in the documentation for quite some time, this is probably a good time to remove it from the source distribution. -Fred -- Fred L. Drake, Jr. PythonLabs at Digital Creations From fdrake at acm.org Sat Feb 10 04:21:20 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Fri, 9 Feb 2001 22:21:20 -0500 (EST) Subject: [Python-Dev] Propaganda of the deed and other topics In-Reply-To: <20010209175152.H15205@thyrsus.com> References: <20010209150329.A15086@thyrsus.com> <200102092008.PAA23192@cj20424-a.reston1.va.home.com> <20010209175152.H15205@thyrsus.com> Message-ID: <14980.46000.429567.347664@cj42289-a.reston1.va.home.com> Eric S. Raymond writes: > of them, when I don't see substantive objections and believe I can > deal with the political fallout more easily than living with the > technical problem, is to just freakin' go ahead and *do* it. I think this was the right thing to do in this case. A slap on the back for you! 
> One of the possible futures I see for myself in this group, if both of > the library PEPs you and I have contemplated go through and become > policy, is as Keeper Of The Libraries analogously to the way that Fred You haven't developed the right attitude, then: my self-granted title for this aspect of my efforts is "Documentation Tsar" -- and I don't mind exercising editorial control with my attitude firmly in place! ;-) > Python looks like being my `follower' project for the foreseeable > future. Take that as a compliment, Guido, because it is meant as one > both professionally and personally. This crew may be (probably is) > the most tasteful, talented and mature development group I have ever Thank you! That's a real compliment for all of us. > had the privilege to work with. I still rue the fact that I couldn't > get you guys to come work for VA... You & others from VA came mighty close! -Fred -- Fred L. Drake, Jr. PythonLabs at Digital Creations From mal at lemburg.com Sat Feb 10 13:43:39 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Sat, 10 Feb 2001 13:43:39 +0100 Subject: [Python-Dev] Removing the implicit str() call from printing API References: <3A83F7DA.A94AB88E@lemburg.com> Message-ID: <3A85377B.BC6EAB9B@lemburg.com> So far, noone has commented on this idea. I would like to go ahead and check in patch which passes through Unicode objects to the file-object's .write() method while leaving the standard str() call for all other objects in place. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ "M.-A. Lemburg" wrote: > > There was some discussion about this subject before, but nothing > much happened, so here we go again... > > Printing in Python is a rather complicated task. It involves many > different APIs, flags, etc. Deep down in the printing machinery > there is a hidden call to str() which converts the to be printed > object into a string object. > > This is fine for non-string objects like numbers, but causes trouble > when it comes to printing Unicode objects due to the auto-conversions > this causes. > > There is a patch on SF which tries to remedy this, but it introduces > a special attribute to maintain backward compatibility: > > http://sourceforge.net/patch/?func=detailpatch&patch_id=103685&group_id=5470 > > I don't really like the idea to add such an attribute to the > file object. Instead, I think that we should simply pass along > Unicode objects as-is to the file object's .write() method and > have the method take care of the conversion. > > This will break some code, since not all file-like objects expect > non-strings as input to the .write() method, but I think this small > code breakage is worth it as it allows us to redirect printing > to streams which convert Unicode input into a specific output > encoding. > > Thoughts ? 
> > -- > Marc-Andre Lemburg > ______________________________________________________________________ > Company: http://www.egenix.com/ > Consulting: http://www.lemburg.com/ > Python Pages: http://www.lemburg.com/python/ > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev From fredrik at effbot.org Sat Feb 10 14:01:13 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Sat, 10 Feb 2001 14:01:13 +0100 Subject: [Python-Dev] Removing the implicit str() call from printing API References: <3A83F7DA.A94AB88E@lemburg.com> <3A85377B.BC6EAB9B@lemburg.com> Message-ID: <010f01c09361$8ff82910$e46940d5@hagrid> mal wrote: > I would like to go ahead and check in patch which passes through > Unicode objects to the file-object's .write() method while leaving > the standard str() call for all other objects in place. +0 for Python 2.1 +1 for Python 2.2 Cheers /F From guido at digicool.com Sat Feb 10 15:03:03 2001 From: guido at digicool.com (Guido van Rossum) Date: Sat, 10 Feb 2001 09:03:03 -0500 Subject: [Python-Dev] Removing the implicit str() call from printing API In-Reply-To: Your message of "Sat, 10 Feb 2001 14:01:13 +0100." <010f01c09361$8ff82910$e46940d5@hagrid> References: <3A83F7DA.A94AB88E@lemburg.com> <3A85377B.BC6EAB9B@lemburg.com> <010f01c09361$8ff82910$e46940d5@hagrid> Message-ID: <200102101403.JAA27043@cj20424-a.reston1.va.home.com> > mal wrote: > > > I would like to go ahead and check in patch which passes through > > Unicode objects to the file-object's .write() method while leaving > > the standard str() call for all other objects in place. > > +0 for Python 2.1 > +1 for Python 2.2 I have not had the time to review any of the arguments for this, and I would be very disappointed if this happened without my involvement. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Sat Feb 10 15:15:19 2001 From: guido at digicool.com (Guido van Rossum) Date: Sat, 10 Feb 2001 09:15:19 -0500 Subject: [Python-Dev] dl module In-Reply-To: Your message of "Fri, 09 Feb 2001 22:09:58 EST." <14980.45318.877412.703109@cj42289-a.reston1.va.home.com> References: <3A844AE9.AE2DD04@lemburg.com> <14980.28010.224576.400800@beluga.mojam.com> <14980.45318.877412.703109@cj42289-a.reston1.va.home.com> Message-ID: <200102101415.JAA27165@cj20424-a.reston1.va.home.com> > Skip Montanaro writes: > > MAL> The same could be done for e.g. soundex ... > > > > http://musi-cal.mojam.com/~skip/python/soundex.py > > Given that Skip has published this module and that the C version can > always be retrieved from CVS if anyone really wants it, and that > soundex has been listed in the "Obsolete Modules" section in the > documentation for quite some time, this is probably a good time to > remove it from the source distribution. Yes, go ahead. --Guido van Rossum (home page: http://www.python.org/~guido/) From mal at lemburg.com Sat Feb 10 15:22:30 2001 From: mal at lemburg.com (M.-A. 
Lemburg) Date: Sat, 10 Feb 2001 15:22:30 +0100 Subject: [Python-Dev] Removing the implicit str() call from printing API References: <3A83F7DA.A94AB88E@lemburg.com> <3A85377B.BC6EAB9B@lemburg.com> <010f01c09361$8ff82910$e46940d5@hagrid> <200102101403.JAA27043@cj20424-a.reston1.va.home.com> Message-ID: <3A854EA6.B8A8F7E2@lemburg.com> Guido van Rossum wrote: > > > mal wrote: > > > > > I would like to go ahead and check in patch which passes through > > > Unicode objects to the file-object's .write() method while leaving > > > the standard str() call for all other objects in place. > > > > +0 for Python 2.1 > > +1 for Python 2.2 > > I have not had the time to review any of the arguments for this, and I > would be very disappointed if this happened without my involvement. Ok, I'll postpone this for 2.2 then... don't want to disappoint our BDFL ;-) Perhaps we should rethink the whole complicated printing machinery in Python for 2.2 and come up with a more generic solution to the problem of letting to-be-printed objects pass through to the stream objects ?! Note that this is needed in order to be able to redirect sys.stdout to a codec which then converts Unicode to some external encoding. Currently this is not possible due to the implicit str() call in PyObject_Print(). -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From guido at digicool.com Sat Feb 10 15:32:36 2001 From: guido at digicool.com (Guido van Rossum) Date: Sat, 10 Feb 2001 09:32:36 -0500 Subject: [Python-Dev] Re: __test__() should auto-exec at compile time In-Reply-To: Your message of "Fri, 09 Feb 2001 19:18:54 EST." References: Message-ID: <200102101432.JAA27274@cj20424-a.reston1.va.home.com> Running tests automatically whenever the source code is compiled is a bad idea. Python advertises itself as an interpreted language where compilation is invisible to the user. Tests often have side effects or take up serious amounts of resources, which would make them far from invisible. (For example, the socket test forks off a process and binds a socket to a port. While this port is not likely to be used by another server, it's not impossible, and one common effect (for me :-) is to find that two test runs interfere with each other. The socket test also takes about 10 seconds to run.) There are lots of situations where compilation occurs during the normal course of events, even for standard modules, and certainly for 3rd party library modules (for which the .pyc files aren't always created at installation time). So, running __test__ at every compilation is a no-no for me. That said, there are sane alternatives: e.g. distutils could run the tests automatically whenever it is asked to either build or install. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Sat Feb 10 15:39:47 2001 From: guido at digicool.com (Guido van Rossum) Date: Sat, 10 Feb 2001 09:39:47 -0500 Subject: [Python-Dev] Python 2.1 release schedule In-Reply-To: Your message of "Fri, 09 Feb 2001 19:28:12 EST." <14980.35612.516421.741505@w221.z064000254.bwi-md.dsl.cnc.net> References: <14980.35612.516421.741505@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <200102101439.JAA27319@cj20424-a.reston1.va.home.com> > I updated the Python 2.1 release schedule (PEP 226): > http://python.sourceforge.net/peps/pep-0226.html Thanks, Jeremy! 
> The schedule now has some realistic future release dates. The plan is > to move to beta 1 before the Python conference, probably issue a > second beta in mid- to late-March, and aim for a final release > sometime in April. The six-week period between first beta and final > release is about as long as the beta period for 2.0, which had many > more significant changes. Feels good to me. > I have also added a section on open issues as we had in the 2.0 > release schedule. If you are responsible for any major changes or > fixes before the first beta, please add them to that section or send > me mail about them. Remember that we are in feature freeze; only bug > fixes between now and beta 1. Here are a few issues that I wrote down recently. I'm a bit out of touch so some of these may already have been resolved... - New schema for .pyc magic number? (Eric, Tim) - Call to C function without keyword args should pass NULL, not {}. (Jeremy) - Reduce the errors for "from ... import *" to only those cases where it's a real problem for nested functions. (Jeremy) - Long ago, someone asked that 10**-15 should return a float rather than raise a ValueError. I think this is an OK change, and unlikely to break code :-) There may be a few other special cases like this, and of course ints and longs should act the same way. (Tim?) --Guido van Rossum (home page: http://www.python.org/~guido/) From esr at thyrsus.com Sat Feb 10 16:43:42 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Sat, 10 Feb 2001 10:43:42 -0500 Subject: [Python-Dev] Python 2.1 release schedule In-Reply-To: <200102101439.JAA27319@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Sat, Feb 10, 2001 at 09:39:47AM -0500 References: <14980.35612.516421.741505@w221.z064000254.bwi-md.dsl.cnc.net> <200102101439.JAA27319@cj20424-a.reston1.va.home.com> Message-ID: <20010210104342.A20657@thyrsus.com> Guido van Rossum : > - New schema for .pyc magic number? (Eric, Tim) It looked to me like Tim had a good scheme, but he never specified the latter (integrity-check) part of the header). -- Eric S. Raymond Everything that is really great and inspiring is created by the individual who can labor in freedom. -- Albert Einstein, in H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1988. From jeremy at alum.mit.edu Sat Feb 10 05:57:51 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 9 Feb 2001 23:57:51 -0500 (EST) Subject: [Python-Dev] Python 2.1 release schedule In-Reply-To: <200102101439.JAA27319@cj20424-a.reston1.va.home.com> References: <14980.35612.516421.741505@w221.z064000254.bwi-md.dsl.cnc.net> <200102101439.JAA27319@cj20424-a.reston1.va.home.com> Message-ID: <14980.51791.171007.616771@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "GvR" == Guido van Rossum writes: >> I have also added a section on open issues as we had in the 2.0 >> release schedule. If you are responsible for any major changes >> or fixes before the first beta, please add them to that section >> or send me mail about them. Remember that we are in feature >> freeze; only bug fixes between now and beta 1. GvR> Here are a few issues that I wrote down recently. I'm a bit GvR> out of touch so some of these may already have been resolved... [...] GvR> - Call to C function without keyword args should pass NULL, not GvR> {}. (Jeremy) GvR> - Reduce the errors for "from ... import *" to only those cases GvR> where it's a real problem for nested functions. (Jeremy) [...] These two are done and checked into CVS. 
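For anyone who has not bumped into the 10**-15 item above: the behaviour
in question is roughly the following -- just an illustration of the
current rule, the proposal being to return a float instead of raising:

    # ints and longs currently refuse negative exponents, floats are fine
    try:
        print 10 ** -15
    except ValueError, e:
        print 'raises:', e
    print 10.0 ** -15    # works, gives 1e-15
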
Jeremy From guido at digicool.com Sat Feb 10 20:49:34 2001 From: guido at digicool.com (Guido van Rossum) Date: Sat, 10 Feb 2001 14:49:34 -0500 Subject: [Python-Dev] Removing the implicit str() call from printing API In-Reply-To: Your message of "Sat, 10 Feb 2001 15:22:30 +0100." <3A854EA6.B8A8F7E2@lemburg.com> References: <3A83F7DA.A94AB88E@lemburg.com> <3A85377B.BC6EAB9B@lemburg.com> <010f01c09361$8ff82910$e46940d5@hagrid> <200102101403.JAA27043@cj20424-a.reston1.va.home.com> <3A854EA6.B8A8F7E2@lemburg.com> Message-ID: <200102101949.OAA28167@cj20424-a.reston1.va.home.com> > Ok, I'll postpone this for 2.2 then... don't want to disappoint > our BDFL ;-) The alternative would be for you to summarize why the proposed change can't possibly break code, this late in the 2.1 release game. :-) > Perhaps we should rethink the whole complicated printing machinery > in Python for 2.2 and come up with a more generic solution to the > problem of letting to-be-printed objects pass through to the > stream objects ?! Yes, please! I'd love it if you could write up a PEP that analyzes the issues and proposes a solution. (Without an analysis of the issues, there's not much point in proposing a solution, IMO.) > Note that this is needed in order to be able to redirect sys.stdout > to a codec which then converts Unicode to some external encoding. > Currently this is not possible due to the implicit str() call in > PyObject_Print(). Excellent. I agree that it's a shame that Unicode I/O is so hard at the moment. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Sat Feb 10 20:54:17 2001 From: guido at digicool.com (Guido van Rossum) Date: Sat, 10 Feb 2001 14:54:17 -0500 Subject: [Python-Dev] Propaganda of the deed and other topics In-Reply-To: Your message of "Fri, 09 Feb 2001 17:51:52 EST." <20010209175152.H15205@thyrsus.com> References: <20010209150329.A15086@thyrsus.com> <200102092008.PAA23192@cj20424-a.reston1.va.home.com> <20010209175152.H15205@thyrsus.com> Message-ID: <200102101954.OAA28189@cj20424-a.reston1.va.home.com> Fine Eric. Thanks for the compliment! In this particular case, I believe that the resistance was more against any official indication that the string module would become obsolete, than against making the changes in the standard library. It was just deemed too much work to make the changes, and because string wasn't going to be obsolete soon, there was little motivation. I'm glad your manic episode took care of that. :-) In general, though, I must ask you to err on the careful side when the possibility of breaking existing code exists. You can apply the cowboy approach to discussions as well as to coding! > Alcohol still kills more people every year than all `illegal' drugs put > together, and Prohibition only made it worse. Oppose the War On Some Drugs! Hey, finally a signature quote someone from the Netherlands wouldn't find offensive! --Guido van Rossum (home page: http://www.python.org/~guido/) From esr at thyrsus.com Sat Feb 10 21:00:03 2001 From: esr at thyrsus.com (Eric S. 
Raymond) Date: Sat, 10 Feb 2001 15:00:03 -0500 Subject: [Python-Dev] Propaganda of the deed and other topics In-Reply-To: <200102101954.OAA28189@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Sat, Feb 10, 2001 at 02:54:17PM -0500 References: <20010209150329.A15086@thyrsus.com> <200102092008.PAA23192@cj20424-a.reston1.va.home.com> <20010209175152.H15205@thyrsus.com> <200102101954.OAA28189@cj20424-a.reston1.va.home.com> Message-ID: <20010210150003.A21451@thyrsus.com> Guido van Rossum : > In general, though, I must ask you to err on the careful side when the > possibility of breaking existing code exists. I try to. You notice I haven't committed any changes to the interpreter core. This is a good example of what I mean by picking my shots carefully... -- Eric S. Raymond The right of the citizens to keep and bear arms has justly been considered as the palladium of the liberties of a republic; since it offers a strong moral check against usurpation and arbitrary power of rulers; and will generally, even if these are successful in the first instance, enable the people to resist and triumph over them." -- Supreme Court Justice Joseph Story of the John Marshall Court From mwh21 at cam.ac.uk Sat Feb 10 21:46:27 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 10 Feb 2001 20:46:27 +0000 Subject: [Python-Dev] Status of Python in the Red Hat 7.1 beta In-Reply-To: Neil Schemenauer's message of "Fri, 9 Feb 2001 08:21:36 -0800" References: <3A841291.CAAAA3AD@redhat.com> <20010209082136.A15525@glacier.fnational.com> Message-ID: Neil Schemenauer writes: > On Fri, Feb 09, 2001 at 10:53:53AM -0500, Michael Tiemann wrote: > > OTOH, if somebody can make a really definitive statement that I've > > misinterpreted the responses, and that 2.x _as_ python should just work, > > and if it doesn't, it's a bug that needs to shake out, I can address that > > with our OS team. > > I'm not sure what you mean by "should just work". Source > compatibility between 1.5.2 and 2.0 is very high. The 2.0 NEWS > file should list all the changes (single argument append and > socket addresses are the big ones). The two versions are _not_ > binary compatible. Python bytecode and extension modules have to > be recompiled. I don't know if this is a problem for the Red Hat > 7.1 release. Another issue is that there is an increasing body of code out there that doesn't work with 1.5.2. Practically all the code I write uses string methods and/or augmented assignment, for example, and I occasionally get email saying "I tried to run your code and got this AttributeEror: join error message". Also there have been some small changes at the C API level around memory management, and I'd much rather program to Python 2.0 here because its APIs are *better*. The world will be a better place when everybody runs Python 2.x, and distributions make a lot of difference here. Just my ?0.02. Cheers, M. -- To summarise the summary of the summary:- people are a problem. -- The Hitch-Hikers Guide to the Galaxy, Episode 12 From mal at lemburg.com Sat Feb 10 23:43:37 2001 From: mal at lemburg.com (M.-A. 
Lemburg) Date: Sat, 10 Feb 2001 23:43:37 +0100 Subject: [Python-Dev] Removing the implicit str() call from printing API References: <3A83F7DA.A94AB88E@lemburg.com> <3A85377B.BC6EAB9B@lemburg.com> <010f01c09361$8ff82910$e46940d5@hagrid> <200102101403.JAA27043@cj20424-a.reston1.va.home.com> <3A854EA6.B8A8F7E2@lemburg.com> <200102101949.OAA28167@cj20424-a.reston1.va.home.com> Message-ID: <3A85C419.99EDCF14@lemburg.com> Guido van Rossum wrote: > > > Ok, I'll postpone this for 2.2 then... don't want to disappoint > > our BDFL ;-) > > The alternative would be for you to summarize why the proposed change > can't possibly break code, this late in the 2.1 release game. :-) Well, the only code it could possibly break is code which 1. expects a unique string object as argument 2. uses the s# parser marker and is used with an Unicode object containing non-ASCII characters Unfortunately, I'm not sure about how much code is out there which assumes 1. cStringIO.c is one example and given its heritage, there probably is a lot more in the Zope camp ;-) > > Perhaps we should rethink the whole complicated printing machinery > > in Python for 2.2 and come up with a more generic solution to the > > problem of letting to-be-printed objects pass through to the > > stream objects ?! > > Yes, please! I'd love it if you could write up a PEP that analyzes > the issues and proposes a solution. (Without an analysis of the > issues, there's not much point in proposing a solution, IMO.) Ok... on the plane to the conference, maybe. > > Note that this is needed in order to be able to redirect sys.stdout > > to a codec which then converts Unicode to some external encoding. > > Currently this is not possible due to the implicit str() call in > > PyObject_Print(). > > Excellent. I agree that it's a shame that Unicode I/O is so hard at > the moment. Since this is what we're after here, we might as well consider possibilities to get the input side of things equally in line with the codec idea, e.g. what would happen if .read() returns a Unicode object ? -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From andy at reportlab.com Sun Feb 11 00:43:08 2001 From: andy at reportlab.com (Andy Robinson) Date: Sat, 10 Feb 2001 23:43:08 -0000 Subject: [Python-Dev] Removing the implicit str() call from printing API In-Reply-To: Message-ID: > So far, noone has commented on this idea. > > I would like to go ahead and check in patch which passes through > Unicode objects to the file-object's .write() method while leaving > the standard str() call for all other objects in place. > I'm behind this in principle. Here's an example of why: >>> tokyo_utf8 = "??" # the kanji for Tokyo, trust me... >>> print tokyo_utf8 # this is 8-bit and prints fine ?????? >>> tokyo_uni = codecs.utf_8_decode(tokyo_utf8)[0] >>> print tokyo_uni # try to print the kanji Traceback (innermost last): File " ", line 1, in ? UnicodeError: ASCII encoding error: ordinal not in range(128) >>> Let's say I am generating HTML pages and working with Unicode strings containing data > 127. It is far more natural to write a lot of print statements than to have to (a) concatenate all my strings or (b) do this on every line that prints something: print tokyo_utf8.encode(my_encoding) We could trivially make a file object which knows to convert the output to, say, Shift-JIS, or even redirect sys.stdout to such an object. 
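A rough sketch of what such an object could look like on top of the
codecs machinery we already have -- utf-8 is used below only because it
ships with the standard library; a Shift-JIS codec would be a separate
install:

    import sys, codecs

    def wrap_stdout(encoding):
        # codecs.lookup returns (encoder, decoder, StreamReader, StreamWriter)
        writer_class = codecs.lookup(encoding)[3]
        sys.stdout = writer_class(sys.stdout)

    wrap_stdout('utf-8')
    # once the implicit str() call is gone, a plain
    #     print tokyo_uni
    # would reach the StreamWriter and come out utf-8 encoded
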
Then we could just print Unicode strings to it. Effectively, the decision on whether a string is printable is deferred to the printing device. I think this is a good pattern which encourages people to work in Unicode. I know nothing of the Python internals and cannot help weigh up how serious the breakage is, but it would be a logical feature to add. - Andy Robinson From ping at lfw.org Sun Feb 11 01:22:48 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Sat, 10 Feb 2001 16:22:48 -0800 (PST) Subject: [Python-Dev] Fatal scoping error from the twilight zone Message-ID: Houston, we may have a problem... The following harmless-looking function: def getpager(): """Decide what method to use for paging through text.""" if type(sys.stdout) is not types.FileType: return plainpager if not sys.stdin.isatty() or not sys.stdout.isatty(): return plainpager if os.environ.has_key('PAGER'): return lambda text: pipepager(text, os.environ['PAGER']) if sys.platform in ['win', 'win32', 'nt']: return lambda text: tempfilepager(text, 'more') if hasattr(os, 'system') and os.system('less 2>/dev/null') == 0: return lambda text: pipepager(text, 'less') import tempfile filename = tempfile.mktemp() open(filename, 'w').close() try: if hasattr(os, 'system') and os.system('more %s' % filename) == 0: return lambda text: pipepager(text, 'more') else: return ttypager finally: os.unlink(filename) produces localhost[1047]% ./python ~/dev/htmldoc/pydoc.py Fatal Python error: unknown scope for pipepager in getpager(5) in /home/ping/dev/htmldoc/pydoc.py Aborted (core dumped) localhost[1048]% with a clean build on a CVS tree that i updated just minutes ago. I was able to reduce this test case to the following: localhost[1011]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def getpager(): ... if os.environ.has_key('x'): ... return lambda t: pipepager(t, os.environ['x']) ... Fatal Python error: unknown scope for pipepager in getpager (1) Aborted (core dumped) but not before coming across a bewildering series of working and non-working cases that left me wondering whether i was hallucinating. Strange as it may seem, for example, replacing the string constant 'x' with a variable makes the latter example work. Even stranger, choosing a different name for the variable t can make it work in some cases but not others! Please try the following script and see if you get weird results: code = '''def getpager(): if os.environ.has_key('x'): return lambda %s: pipepager(%s, os.environ['x'])''' import string, os, sys results = {} for char in string.letters: f = open('/tmp/test.py', 'w') f.write(code % (char, char) + '\n') f.close() sys.stderr.write('%s: ' % char) status = os.system('python /tmp/test.py > /dev/null') >> 8 sys.stderr.write('%s\n' % status) results.setdefault(status, []).append(char) for status in results.keys(): if not status: print 'Python likes these letters:', else: print 'Status %d for these letters:' % status, print results[status] I get this, consistently every time! Status 134 for these letters: ['b', 'c', 'd', 'g', 'h', 'j', 'k', 'l', 'o', 'p', 'r', 's', 't', 'w', 'x', 'z', 'B', 'C', 'D', 'G', 'H', 'J', 'K', 'L', 'O', 'P', 'R', 'S', 'T', 'W', 'X', 'Z'] Python likes these letters: ['a', 'e', 'f', 'i', 'm', 'n', 'q', 'u', 'v', 'y', 'A', 'E', 'F', 'I', 'M', 'N', 'Q', 'U', 'V', 'Y'] A complete log of my interactive sessions is attached. 
I hope somebody can reproduce at least some of this to assure me that i'm not going mad. :) -- ?!ng Happiness comes more from loving than being loved; and often when our affection seems wounded it is is only our vanity bleeding. To love, and to be hurt often, and to love again--this is the brave and happy life. -- J. E. Buchrose -------------- next part -------------- localhost[1001]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def getpager(): ... """Decide what method to use for paging through text.""" ... if type(sys.stdout) is not types.FileType: ... return plainpager ... if not sys.stdin.isatty() or not sys.stdout.isatty(): ... return plainpager ... if os.environ.has_key('PAGER'): ... return lambda text: pipepager(text, os.environ['PAGER']) ... if sys.platform in ['win', 'win32', 'nt']: ... return lambda text: tempfilepager(text, 'more') ... if hasattr(os, 'system') and os.system('less 2>/dev/null') == 0: ... return lambda text: pipepager(text, 'less') ... Fatal Python error: unknown scope for pipepager in getpager (1) Aborted (core dumped) localhost[1002]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def getpager(): ... """Decide what method to use for paging through text.""" ... if type(sys.stdout) is not types.FileType: ... return plainpager ... if not sys.stdin.isatty() or not sys.stdout.isatty(): ... return plainpager ... if os.environ.has_key('PAGER'): ... return lambda text: pipepager(text, os.environ['PAGER']) ... if sys.platform in ['win', 'win32', 'nt']: ... return lambda text: tempfilepager(text, 'more') ... if hasattr(os, 'system') and os.system('less 2>/dev/null') == 0: ... return lambda text: pipepager(text, 'less') ... Fatal Python error: unknown scope for pipepager in getpager (1) Aborted (core dumped) localhost[1003]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def getpager(): ... return lambda text: pipepager(text) ... >>> def getpager(): ... if os.environ.has_key('PAGER'): ... return lambda text: pipepager(text, os.environ['PAGER']) ... if hasattr(os, 'system') and os.system('less 2>/dev/null') == 0: ... return lambda text: pipepager(text, 'less') ... Fatal Python error: unknown scope for pipepager in getpager (1) Aborted (core dumped) localhost[1004]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def f(): ... if a: ... return lambda t: g(t) ... if b: ... return lambda t: h(t) ... >>> def getpager(): ... if os.environ.has_key('PAGER'): ... return lambda text: pipepager(text) ... if hasattr(os, 'system') and os.system('less 2>/dev/null') == 0: ... return lambda text: pipepager(text) ... >>> def getpager(): ... if os.environ.has_key('PAGER'): ... return lambda text: pipepager(text, 1) ... if hasattr(os, 'system') and os.system('less 2>/dev/null') == 0: ... return lambda text: pipepager(text, 1) ... >>> def getpager(): ... if os.environ.has_key('PAGER'): ... return lambda text: pipepager(text, os.environ['PAGER']) ... if hasattr(os, 'system') and os.system('less 2>/dev/null') == 0: ... return lambda text: pipepager(text, 1) ... 
Fatal Python error: unknown scope for pipepager in getpager (1) Aborted (core dumped) localhost[1005]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def f() File " ", line 1 def f() ^ SyntaxError: invalid syntax >>> localhost[1006]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def f(): ... if os.environ.has_key(x): ... return lambda y: z(y, os.environ[x]) ... >>> def getpager(): ... if os.environ.has_key('PAGER'): ... return lambda text: pipepager(text, os.environ['PAGER']) ... Fatal Python error: unknown scope for pipepager in getpager (1) Aborted (core dumped) localhost[1007]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def getpager(): ... if os.environ.has_key(x): ... return lambda text: pipepager(text, os.environ[x]) ... >>> def getpager(): ... if os.environ.has_key('x'): ... return lambda text: pipepager(text, os.environ['x']) ... Fatal Python error: unknown scope for pipepager in getpager (1) Aborted (core dumped) localhost[1008]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def f(): ... if os.environ.has_key('x'): ... return lambda y: z(y, os.environ['x']) ... >>> def getpager(): ... if os.environ.has_key('x'): ... return lambda text: pipepager(text, os.environ['x']) ... Fatal Python error: unknown scope for pipepager in getpager (1) Aborted (core dumped) localhost[1009]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def getpager(): ... if os.environ.has_key('x'): ... return lambda y: z(y, os.environ['x']) ... >>> def getpager(): ... if os.environ.has_key('x'): ... return lambda text: z(text, os.environ['x']) ... >>> def getpager(): ... if os.environ.has_key('x'): ... return lambda y: pipepager(y, os.environ['x']) ... >>> def getpager(): ... if os.environ.has_key('x'): ... return lambda te: pipepager(te, os.environ['x']) ... Fatal Python error: unknown scope for pipepager in getpager (1) Aborted (core dumped) localhost[1010]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def getpager(): ... if os.environ.has_key('x'): ... return lambda t: pipepager(t, os.environ['x']) ... Fatal Python error: unknown scope for pipepager in getpager (1) Aborted (core dumped) localhost[1011]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def getpager(): ... if os.environ.has_key('x'): ... return lambda y: pipepager(y, os.environ['x']) ... >>> def getpager(): ... if os.environ.has_key('x'): ... return lambda h: pipepager(h, os.environ['x']) ... 
Fatal Python error: unknown scope for pipepager in getpager (1) Aborted (core dumped) localhost[1012]% localhost[1012]% python Python 2.1a2 (#20, Feb 3 2001, 20:40:19) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> code = '''def getpager(): ... if os.environ.has_key('x'): ... return lambda %s: pipepager(%s, os.environ['x'])''' >>> >>> import string >>> import os >>> for char in string.letters: ... f = open('/tmp/test.py', 'w') ... f.write(code % (char, char) + '\n') ... f.close() ... import sys ... sys.stderr.write('%s: ' % char) ... r = os.system('python /tmp/test.py > /dev/null') ... sys.stderr.write('%s\n' % r) ... a: 0 b: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 c: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 d: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 e: 0 f: 0 g: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 h: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 i: 0 j: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 k: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 l: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 m: 0 n: 0 o: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 p: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 q: 0 r: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 s: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 t: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 u: 0 v: 0 w: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 x: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 y: 0 z: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 A: 0 B: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 C: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 D: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 E: 0 F: 0 G: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 H: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 I: 0 J: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 K: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 L: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 M: 0 N: 0 O: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 P: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 Q: 0 R: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 S: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 T: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 U: 0 V: 0 W: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 X: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 Y: 0 Z: Fatal Python error: unknown scope for pipepager in getpager (1) 34304 >>> localhost[1013]% cat /tmp/multitest.py code = '''def getpager(): if os.environ.has_key('x'): return lambda %s: pipepager(%s, os.environ['x'])''' import string, os, sys results = {} for char in string.letters: f = open('/tmp/test.py', 'w') f.write(code % (char, char) + '\n') f.close() sys.stderr.write('%s: ' % char) status = os.system('python /tmp/test.py > /dev/null') >> 8 sys.stderr.write('%s\n' % 
status) results.setdefault(status, []).append(char) for status in results.keys(): if not status: print 'Python likes these letters:', else: print 'Status %d for these letters:' % status, print results[status] localhost[1014]% ./python /tmp/multitest.py a: 0 b: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 c: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 d: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 e: 0 f: 0 g: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 h: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 i: 0 j: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 k: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 l: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 m: 0 n: 0 o: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 p: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 q: 0 r: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 s: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 t: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 u: 0 v: 0 w: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 x: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 y: 0 z: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 A: 0 B: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 C: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 D: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 E: 0 F: 0 G: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 H: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 I: 0 J: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 K: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 L: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 M: 0 N: 0 O: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 P: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 Q: 0 R: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 S: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 T: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 U: 0 V: 0 W: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 X: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 Y: 0 Z: Fatal Python error: unknown scope for pipepager in getpager(1) in /tmp/test.py 134 Status 134 for these letters: ['b', 'c', 'd', 'g', 'h', 'j', 'k', 'l', 'o', 'p', 'r', 's', 't', 'w', 'x', 'z', 'B', 'C', 'D', 'G', 'H', 'J', 'K', 'L', 'O', 'P', 'R', 'S', 'T', 'W', 'X', 'Z'] Python likes these letters: ['a', 'e', 'f', 'i', 'm', 'n', 'q', 'u', 'v', 'y', 'A', 'E', 'F', 'I', 'M', 'N', 'Q', 'U', 'V', 'Y'] localhost[1015]% From ping at lfw.org Sun Feb 11 01:41:41 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Sat, 10 Feb 2001 
16:41:41 -0800 (PST) Subject: [Python-Dev] Removing the implicit str() call from printing API In-Reply-To: Message-ID: On Sat, 10 Feb 2001, Andy Robinson wrote: > > So far, noone has commented on this idea. > > > > I would like to go ahead and check in patch which passes through > > Unicode objects to the file-object's .write() method while leaving > > the standard str() call for all other objects in place. > > > I'm behind this in principle. Here's an example of why: > > >>> tokyo_utf8 = "??" # the kanji for Tokyo, trust me... > >>> print tokyo_utf8 # this is 8-bit and prints fine > ?????? > >>> tokyo_uni = codecs.utf_8_decode(tokyo_utf8)[0] > >>> print tokyo_uni # try to print the kanji > Traceback (innermost last): > File " ", line 1, in ? > UnicodeError: ASCII encoding error: ordinal not in range(128) Something like the following looks reasonable to me; the added complexity is that the file object now remembers an encoder/decoder pair in its state (the API might give the appearance of remembering just the codec name, but we want to avoid doing codecs.lookup() on every write), and uses it whenever write() is passed a Unicode object. >>> file = open('outputfile', 'w', 'utf-8') >>> file.encoding 'utf-8' >>> file.write(tokyo_uni) # tokyo_utf8 gets written to file >>> file.close() Open questions: - If an encoding is specified, should file.read() then always return Unicode objects? - If an encoding is specified, should file.write() only accept Unicode objects and not bytestrings? - Is the encoding attribute mutable? (I would prefer not, but then how to apply an encoding to sys.stdout?) Side question: i noticed that the Lib/encodings directory supports quite a few code pages, including Greek, Russian, but there are no ISO-2022 CJK or JIS codecs. Is this just because no one felt like writing one, or is there a reason not to include one? It seems to me it might be nice to include some codecs for the most common CJK encodings -- that recent note on the popularity of Python in Korea comes to mind. -- ?!ng Happiness comes more from loving than being loved; and often when our affection seems wounded it is is only our vanity bleeding. To love, and to be hurt often, and to love again--this is the brave and happy life. -- J. E. Buchrose From ping at lfw.org Sun Feb 11 02:42:49 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Sat, 10 Feb 2001 17:42:49 -0800 (PST) Subject: [Python-Dev] import succeeds on second try? Message-ID: This is weird: localhost[1118]% ll spam* -rw-r--r-- 1 ping users 69 Feb 10 17:40 spam.py localhost[1119]% ll eggs* /bin/ls: eggs*: No such file or directory localhost[1120]% cat spam.py a = 1 print 'hello' import eggs # no such file print 'goodbye' b = 2 localhost[1121]% python Python 2.1a2 (#22, Feb 10 2001, 16:15:14) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> import spam hello Traceback (most recent call last): File " ", line 1, in ? File "spam.py", line 3, in ? import eggs # no such file ImportError: No module named eggs >>> import spam >>> dir(spam) ['__builtins__', '__doc__', '__file__', '__name__', 'a'] >>> localhost[1122]% ll spam* -rw-r--r-- 1 ping users 69 Feb 10 17:40 spam.py -rw-r--r-- 1 ping users 208 Feb 10 17:41 spam.pyc localhost[1123]% ll eggs* /bin/ls: eggs*: No such file or directory Why did Python write spam.pyc after the import failed? -- ?!ng Happiness comes more from loving than being loved; and often when our affection seems wounded it is is only our vanity bleeding. 
To love, and to be hurt often, and to love again--this is the brave and happy life. -- J. E. Buchrose From ping at lfw.org Sun Feb 11 03:20:30 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Sat, 10 Feb 2001 18:20:30 -0800 (PST) Subject: [Python-Dev] test_inspect fails again: segfault in compile Message-ID: Sorry to be the bearer of so much bad news today. When i run the tests for inspect.py, a recently-built Python crashes: localhost[1168]% !p python test_inspect.py Segmentation fault (core dumped) gdb says: (gdb) where #0 0x806021c in symtable_params (st=0x80e9678, n=0x8149340) at Python/compile.c:4633 #1 0x806004f in symtable_funcdef (st=0x80e9678, n=0x8111368) at Python/compile.c:4541 #2 0x805fc6e in symtable_node (st=0x80e9678, n=0x80eaac0) at Python/compile.c:4417 #3 0x8060007 in symtable_node (st=0x80e9678, n=0x811c1c0) at Python/compile.c:4528 #4 0x805f23e in symtable_build (c=0xbffff2a4, n=0x811c1c0) at Python/compile.c:3974 #5 0x805ee8a in jcompile (n=0x811c1c0, filename=0x81268e4 "@test", base=0x0) at Python/compile.c:3853 #6 0x805ed7c in PyNode_Compile (n=0x811c1c0, filename=0x81268e4 "@test") at Python/compile.c:3806 #7 0x8063476 in parse_source_module (pathname=0x81268e4 "@test", fp=0x81271c0) at Python/import.c:611 #8 0x8063637 in load_source_module (name=0x812a1dc "testmod", pathname=0x81268e4 "@test", fp=0x81271c0) at Python/import.c:731 #9 0x8065161 in imp_load_source (self=0x0, args=0x80e838c) at Python/import.c:2178 #10 0x8058655 in call_cfunction (func=0x8124a08, arg=0x80e838c, kw=0x0) at Python/ceval.c:2749 #11 0x8058550 in call_object (func=0x8124a08, arg=0x80e838c, kw=0x0) at Python/ceval.c:2703 #12 0x8058c61 in do_call (func=0x8124a08, pp_stack=0xbffff908, na=2, nk=0) at Python/ceval.c:3014 #13 0x8057228 in eval_code2 (co=0x815eff0, globals=0x80c3544, locals=0x80c3544, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:1895 #14 0x8054787 in PyEval_EvalCode (co=0x815eff0, globals=0x80c3544, locals=0x80c3544) at Python/ceval.c:336 #15 0x8068f44 in run_node (n=0x8106f30, filename=0xbffffbb4 "test_inspect.py", globals=0x80c3544, locals=0x80c3544) at Python/pythonrun.c:920 #16 0x8068f09 in run_err_node (n=0x8106f30, filename=0xbffffbb4 "test_inspect.py", globals=0x80c3544, locals=0x80c3544) at Python/pythonrun.c:908 #17 0x8068ee7 in PyRun_FileEx (fp=0x80bf6a8, filename=0xbffffbb4 "test_inspect.py", start=257, globals=0x80c3544, locals=0x80c3544, closeit=1) at Python/pythonrun.c:900 #18 0x80686bc in PyRun_SimpleFileEx (fp=0x80bf6a8, filename=0xbffffbb4 "test_inspect.py", closeit=1) at Python/pythonrun.c:613 #19 0x8068310 in PyRun_AnyFileEx (fp=0x80bf6a8, filename=0xbffffbb4 "test_inspect.py", closeit=1) at Python/pythonrun.c:467 #20 0x8051bb0 in Py_Main (argc=1, argv=0xbffffa84) at Modules/main.c:292 #21 0x80516d6 in main (argc=2, argv=0xbffffa84) at Modules/python.c:10 #22 0x40064cb3 in __libc_start_main (main=0x80516c8 , argc=2, argv=0xbffffa84, init=0x8050bd8 <_init>, fini=0x80968dc <_fini>, rtld_fini=0x4000a350 <_dl_fini>, stack_end=0xbffffa7c) at ../sysdeps/generic/libc-start.c:78 The contents of test_inspect.py and of @test (the Python module which test_inspect writes out and imports) are attached. n_lineno is 8, which points to the hairy line: def spam(a, b, c, d=3, (e, (f,))=(4, (5,)), *g, **h): The following smaller test case reproduces the error: Python 2.1a2 (#22, Feb 10 2001, 16:15:14) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. 
>>> def spam(a, b, c, d=3, (e, (f,))=(4, (5,)), *g, **h): ... pass ... Segmentation fault (core dumped) After further testing, it seems to come down to this: Python 2.1a2 (#22, Feb 10 2001, 16:15:14) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def spam(a, b): pass ... >>> def spam(a=3, b): pass ... SyntaxError: non-default argument follows default argument >>> def spam(a=3, b=4): pass ... >>> def spam(a, (b,)): pass ... >>> def spam(a=3, (b,)): pass ... Segmentation fault (core dumped) Python 2.1a2 (#22, Feb 10 2001, 16:15:14) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> def spam(a=3, (b,)=(4,)): pass ... Segmentation fault (core dumped) -- ?!ng Happiness comes more from loving than being loved; and often when our affection seems wounded it is is only our vanity bleeding. To love, and to be hurt often, and to love again--this is the brave and happy life. -- J. E. Buchrose -------------- next part -------------- source = '''# line 1 'A module docstring.' import sys, inspect # line 5 # line 7 def spam(a, b, c, d=3, (e, (f,))=(4, (5,)), *g, **h): eggs(b + d, c + f) # line 11 def eggs(x, y): "A docstring." global fr, st fr = inspect.currentframe() st = inspect.stack() p = x q = y / 0 # line 20 class StupidGit: """A longer, indented docstring.""" # line 27 def abuse(self, a, b, c): """Another \tdocstring containing \ttabs \t """ self.argue(a, b, c) # line 40 def argue(self, a, b, c): try: spam(a, b, c) except: self.ex = sys.exc_info() self.tr = inspect.trace() # line 48 class MalodorousPervert(StupidGit): pass class ParrotDroppings: pass class FesteringGob(MalodorousPervert, ParrotDroppings): pass ''' from test_support import TestFailed, TESTFN import sys, imp, os, string def test(assertion, message, *args): if not assertion: raise TestFailed, message % args import inspect file = open(TESTFN, 'w') file.write(source) file.close() mod = imp.load_source('testmod', TESTFN) def istest(func, exp): obj = eval(exp) test(func(obj), '%s(%s)' % (func.__name__, exp)) for other in [inspect.isbuiltin, inspect.isclass, inspect.iscode, inspect.isframe, inspect.isfunction, inspect.ismethod, inspect.ismodule, inspect.istraceback]: if other is not func: test(not other(obj), 'not %s(%s)' % (other.__name__, exp)) git = mod.StupidGit() try: 1/0 except: tb = sys.exc_traceback istest(inspect.isbuiltin, 'sys.exit') istest(inspect.isbuiltin, '[].append') istest(inspect.isclass, 'mod.StupidGit') istest(inspect.iscode, 'mod.spam.func_code') istest(inspect.isframe, 'tb.tb_frame') istest(inspect.isfunction, 'mod.spam') istest(inspect.ismethod, 'mod.StupidGit.abuse') istest(inspect.ismethod, 'git.argue') istest(inspect.ismodule, 'mod') istest(inspect.istraceback, 'tb') classes = inspect.getmembers(mod, inspect.isclass) test(classes == [('FesteringGob', mod.FesteringGob), ('MalodorousPervert', mod.MalodorousPervert), ('ParrotDroppings', mod.ParrotDroppings), ('StupidGit', mod.StupidGit)], 'class list') tree = inspect.getclasstree(map(lambda x: x[1], classes), 1) test(tree == [(mod.ParrotDroppings, ()), (mod.StupidGit, ()), [(mod.MalodorousPervert, (mod.StupidGit,)), [(mod.FesteringGob, (mod.MalodorousPervert, mod.ParrotDroppings)) ] ] ], 'class tree') functions = inspect.getmembers(mod, inspect.isfunction) test(functions == [('eggs', mod.eggs), ('spam', mod.spam)], 'function list') test(inspect.getdoc(mod) == 'A module docstring.', 'getdoc(mod)') 
test(inspect.getcomments(mod) == '# line 1\n', 'getcomments(mod)') test(inspect.getmodule(mod.StupidGit) == mod, 'getmodule(mod.StupidGit)') test(inspect.getfile(mod.StupidGit) == TESTFN, 'getfile(mod.StupidGit)') test(inspect.getsourcefile(mod.spam) == TESTFN, 'getsourcefile(mod.spam)') test(inspect.getsourcefile(git.abuse) == TESTFN, 'getsourcefile(git.abuse)') def sourcerange(top, bottom): lines = string.split(source, '\n') return string.join(lines[top-1:bottom], '\n') + '\n' test(inspect.getsource(git.abuse) == sourcerange(29, 39), 'getsource(git.abuse)') test(inspect.getsource(mod.StupidGit) == sourcerange(21, 46), 'getsource(mod.StupidGit)') test(inspect.getdoc(mod.StupidGit) == 'A longer,\n\nindented\n\ndocstring.', 'getdoc(mod.StupidGit)') test(inspect.getdoc(git.abuse) == 'Another\n\ndocstring\n\ncontaining\n\ntabs\n\n', 'getdoc(git.abuse)') test(inspect.getcomments(mod.StupidGit) == '# line 20\n', 'getcomments(mod.StupidGit)') args, varargs, varkw, defaults = inspect.getargspec(mod.eggs) test(args == ['x', 'y'], 'mod.eggs args') test(varargs == None, 'mod.eggs varargs') test(varkw == None, 'mod.eggs varkw') test(defaults == None, 'mod.eggs defaults') test(inspect.formatargspec(args, varargs, varkw, defaults) == '(x, y)', 'mod.eggs formatted argspec') args, varargs, varkw, defaults = inspect.getargspec(mod.spam) test(args == ['a', 'b', 'c', 'd', ['e', ['f']]], 'mod.spam args') test(varargs == 'g', 'mod.spam varargs') test(varkw == 'h', 'mod.spam varkw') test(defaults == (3, (4, (5,))), 'mod.spam defaults') test(inspect.formatargspec(args, varargs, varkw, defaults) == '(a, b, c, d=3, (e, (f,))=(4, (5,)), *g, **h)', 'mod.spam formatted argspec') git.abuse(7, 8, 9) istest(inspect.istraceback, 'git.ex[2]') istest(inspect.isframe, 'mod.fr') test(len(git.tr) == 2, 'trace() length') test(git.tr[0][1:] == ('@test', 9, 'spam', [' eggs(b + d, c + f)\n'], 0), 'trace() row 1') test(git.tr[1][1:] == ('@test', 18, 'eggs', [' q = y / 0\n'], 0), 'trace() row 2') test(len(mod.st) >= 5, 'stack() length') test(mod.st[0][1:] == ('@test', 16, 'eggs', [' st = inspect.stack()\n'], 0), 'stack() row 1') test(mod.st[1][1:] == ('@test', 9, 'spam', [' eggs(b + d, c + f)\n'], 0), 'stack() row 2') test(mod.st[2][1:] == ('@test', 43, 'argue', [' spam(a, b, c)\n'], 0), 'stack() row 3') test(mod.st[3][1:] == ('@test', 39, 'abuse', [' self.argue(a, b, c)\n'], 0), 'stack() row 4') # row 4 is in test_inspect.py args, varargs, varkw, locals = inspect.getargvalues(mod.fr) test(args == ['x', 'y'], 'mod.fr args') test(varargs == None, 'mod.fr varargs') test(varkw == None, 'mod.fr varkw') test(locals == {'x': 11, 'p': 11, 'y': 14}, 'mod.fr locals') test(inspect.formatargvalues(args, varargs, varkw, locals) == '(x=11, y=14)', 'mod.fr formatted argvalues') args, varargs, varkw, locals = inspect.getargvalues(mod.fr.f_back) test(args == ['a', 'b', 'c', 'd', ['e', ['f']]], 'mod.fr.f_back args') test(varargs == 'g', 'mod.fr.f_back varargs') test(varkw == 'h', 'mod.fr.f_back varkw') test(inspect.formatargvalues(args, varargs, varkw, locals) == '(a=7, b=8, c=9, d=3, (e=4, (f=5,)), *g=(), **h={})', 'mod.fr.f_back formatted argvalues') os.unlink(TESTFN) -------------- next part -------------- # line 1 'A module docstring.' import sys, inspect # line 5 # line 7 def spam(a, b, c, d=3, (e, (f,))=(4, (5,)), *g, **h): eggs(b + d, c + f) # line 11 def eggs(x, y): "A docstring." 
global fr, st fr = inspect.currentframe() st = inspect.stack() p = x q = y / 0 # line 20 class StupidGit: """A longer, indented docstring.""" # line 27 def abuse(self, a, b, c): """Another docstring containing tabs """ self.argue(a, b, c) # line 40 def argue(self, a, b, c): try: spam(a, b, c) except: self.ex = sys.exc_info() self.tr = inspect.trace() # line 48 class MalodorousPervert(StupidGit): pass class ParrotDroppings: pass class FesteringGob(MalodorousPervert, ParrotDroppings): pass From guido at digicool.com Sun Feb 11 03:29:39 2001 From: guido at digicool.com (Guido van Rossum) Date: Sat, 10 Feb 2001 21:29:39 -0500 Subject: [Python-Dev] import succeeds on second try? In-Reply-To: Your message of "Sat, 10 Feb 2001 17:42:49 PST." References: Message-ID: <200102110229.VAA29050@cj20424-a.reston1.va.home.com> > This is weird: > > localhost[1118]% ll spam* > -rw-r--r-- 1 ping users 69 Feb 10 17:40 spam.py > localhost[1119]% ll eggs* > /bin/ls: eggs*: No such file or directory > localhost[1120]% cat spam.py > a = 1 > print 'hello' > import eggs # no such file > print 'goodbye' > b = 2 > localhost[1121]% python > Python 2.1a2 (#22, Feb 10 2001, 16:15:14) > [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 > Type "copyright", "credits" or "license" for more information. > >>> import spam > hello > Traceback (most recent call last): > File " ", line 1, in ? > File "spam.py", line 3, in ? > import eggs # no such file > ImportError: No module named eggs > >>> import spam > >>> dir(spam) > ['__builtins__', '__doc__', '__file__', '__name__', 'a'] > >>> > localhost[1122]% ll spam* > -rw-r--r-- 1 ping users 69 Feb 10 17:40 spam.py > -rw-r--r-- 1 ping users 208 Feb 10 17:41 spam.pyc > localhost[1123]% ll eggs* > /bin/ls: eggs*: No such file or directory > > Why did Python write spam.pyc after the import failed? That's standard stuff; happens all the time. 1. The module gets compiled to bytecode, and the compiled bytecode gets written to the .pyc file, before any attempt to execute is. 2. The spam module gets entered into sys.modules at the *start* of its execution, for a number of reasons having to do with mutually recursive modules. 3. The execution fails on the "import eggs" but that doesn't undo the sys.modules assignment. 4. The second import of spam finds an incomplete module in sys.modyles, but doesn't know that, so returns it. --Guido van Rossum (home page: http://www.python.org/~guido/) From ping at lfw.org Sun Feb 11 03:30:46 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Sat, 10 Feb 2001 18:30:46 -0800 (PST) Subject: [Python-Dev] import succeeds on second try? In-Reply-To: <200102110229.VAA29050@cj20424-a.reston1.va.home.com> Message-ID: On Sat, 10 Feb 2001, Guido van Rossum wrote: > > That's standard stuff; happens all the time. Hrmm... it makes me feel icky. > 1. The module gets compiled to bytecode, and the compiled bytecode > gets written to the .pyc file, before any attempt to execute is. > > 2. The spam module gets entered into sys.modules at the *start* of its > execution, for a number of reasons having to do with mutually > recursive modules. > > 3. The execution fails on the "import eggs" but that doesn't undo the > sys.modules assignment. > > 4. The second import of spam finds an incomplete module in > sys.modyles, but doesn't know that, so returns it. Is there a reason not to insert step 3.5? 3.5. If the import fails, remove the incomplete module from sys.modules. 
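Done by hand, that step is something an application can already bolt on
for itself -- a small sketch, with a made-up helper name:

    import sys

    def import_fresh(name):
        # retryable import: if the module body raises, discard the
        # half-initialized module so the next attempt starts from scratch
        try:
            return __import__(name)
        except:
            if sys.modules.has_key(name):
                del sys.modules[name]
            raise
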
-- ?!ng From guido at digicool.com Sun Feb 11 04:00:31 2001 From: guido at digicool.com (Guido van Rossum) Date: Sat, 10 Feb 2001 22:00:31 -0500 Subject: [Python-Dev] import succeeds on second try? In-Reply-To: Your message of "Sat, 10 Feb 2001 18:30:46 PST." References: Message-ID: <200102110300.WAA29163@cj20424-a.reston1.va.home.com> > On Sat, 10 Feb 2001, Guido van Rossum wrote: > > > > That's standard stuff; happens all the time. > > Hrmm... it makes me feel icky. Maybe, but so does the alternative (to me, anyway). > > 1. The module gets compiled to bytecode, and the compiled bytecode > > gets written to the .pyc file, before any attempt to execute is. > > > > 2. The spam module gets entered into sys.modules at the *start* of its > > execution, for a number of reasons having to do with mutually > > recursive modules. > > > > 3. The execution fails on the "import eggs" but that doesn't undo the > > sys.modules assignment. > > > > 4. The second import of spam finds an incomplete module in > > sys.modyles, but doesn't know that, so returns it. > > Is there a reason not to insert step 3.5? > > 3.5. If the import fails, remove the incomplete module from sys.modules. It's hard to prove that there are no other references to it, e.g. spam could have imported bacon which imports fine and imports spam (for a later recursive call). Then a second try to import spam would import bacon again but that bacon would have a reference to the first, incomplete copy of spam. In general, if I can help it, I want to be careful that I don't have multiple module objects claiming to be the same module around, because that multiplicity will come back to bite you when it matters that they are the same. Also, deleting the evidence makes it harder to inspect the smoking remains in a debugger. --Guido van Rossum (home page: http://www.python.org/~guido/) From andy at reportlab.com Sun Feb 11 10:18:55 2001 From: andy at reportlab.com (Andy Robinson) Date: Sun, 11 Feb 2001 09:18:55 -0000 Subject: [Python-Dev] Removing the implicit str() call from printing API In-Reply-To: Message-ID: > Open questions: > > - If an encoding is specified, should file.read() then > always return Unicode objects? > > - If an encoding is specified, should file.write() only > accept Unicode objects and not bytestrings? > > - Is the encoding attribute mutable? (I would prefer not, > but then how to apply an encoding to sys.stdout?) Right now, codecs.open returns an instance of codecs.StreamReaderWriter, not a native file object. It has methods that look like the ones on a file, but they tpically accept or return Unicode strings instead of binary ones. This feels right to me and is what Java does; if you want to switch encoding on sys.stdout, you are not really doing anything to the file object, just switching the wrapper you use. There is much discussion on the i18n sig about 'unifying' binary and Unicode strings at the moment. > Side question: i noticed that the Lib/encodings directory supports > quite a few code pages, including Greek, Russian, but there are no > ISO-2022 CJK or JIS codecs. Is this just because no one felt like > writing one, or is there a reason not to include one? It seems to > me it might be nice to include some codecs for the most common CJK > encodings -- that recent note on the popularity of Python in Korea > comes to mind. 
There have been 3 contributions to Asian codecs on the i18n sig in the last six months (pythoncodecs.sourceforge.net) one C, two J and one K - but some authors are uncomfortable with Python-style licenses. They need tying together into one integrated package with a test suite. After a 5-month-long project which tied me up, I have finally started ooking at this. The general feeling was that the Asian codecs package should be an optional download, but if we can get them fully tested and do some compression magic it would be nice to get them in the box one day. - Andy Robinson From tim.one at home.com Sun Feb 11 10:20:35 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 11 Feb 2001 04:20:35 -0500 Subject: [Python-Dev] Python 2.1 release schedule In-Reply-To: <20010210104342.A20657@thyrsus.com> Message-ID: [Guido] > - New schema for .pyc magic number? (Eric, Tim) [Eric] > It looked to me like Tim had a good scheme, but he never specified > the latter (integrity-check) part of the header). Heh -- I stopped after the first 4 bytes! Didn't intend to do more (the first 4 are the hardest <0.25 wink>). Was hoping Ping would rework his ideas into the framework /F suggested (next 4 bytes is a timestamp, then a new marshal type containing "everything else"). I doubt that can make it in for 2.1, though, unless someone works intensely on it this week. rules-me-out-as-it's-not-a-crisis-until-2002-ly y'rs - tim From tim.one at home.com Sun Feb 11 10:20:37 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 11 Feb 2001 04:20:37 -0500 Subject: [Python-Dev] Python 2.1 release schedule In-Reply-To: <14980.51791.171007.616771@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: Other issues: + Make "global x" textually following any reference to x (in the same scope) a compile-time error. Unclear whether def f(): global x global x is an error under that rule (i.e., does appearance in a global stmt constitute "a reference"?). Ditto for def f(): global x, x My opinion: declarations aren't references, and redundant declarations don't hurt (so "no, not an error" to both). Change Ref Man accordingly (i.e., this plugs a hole in the *language* defn, it's not just a question of implementation accident du jour anymore). + Spew warning for "import *" and "exec" at function scope, or change Ref Man to spell out when this is and isn't guaranteed to work. Guido appeared to agree with both of those. From mal at lemburg.com Sun Feb 11 15:33:39 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Sun, 11 Feb 2001 15:33:39 +0100 Subject: [Python-Dev] .pyc magic (Python 2.1 release schedule) References: Message-ID: <3A86A2C3.1A64E0B0@lemburg.com> Tim Peters wrote: > > [Guido] > > - New schema for .pyc magic number? (Eric, Tim) > > [Eric] > > It looked to me like Tim had a good scheme, but he never specified > > the latter (integrity-check) part of the header). > > Heh -- I stopped after the first 4 bytes! Didn't intend to do more (the > first 4 are the hardest <0.25 wink>). Was hoping Ping would rework his > ideas into the framework /F suggested (next 4 bytes is a timestamp, then a > new marshal type containing "everything else"). > > I doubt that can make it in for 2.1, though, unless someone works intensely > on it this week. Just a side-note: the flags for e.g. -U ought to also provide a way to store the encoding used by the compiler and perhaps even the compiler version/name. 
Don't think it's a good idea to put this into 2.1, though, since it needs a PEP :-) -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From mwh21 at cam.ac.uk Sun Feb 11 17:23:25 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 11 Feb 2001 16:23:25 +0000 Subject: [Python-Dev] test_inspect fails again: segfault in compile In-Reply-To: Ka-Ping Yee's message of "Sat, 10 Feb 2001 18:20:30 -0800 (PST)" References: Message-ID: Ka-Ping Yee writes: > After further testing, it seems to come down to this: > > Python 2.1a2 (#22, Feb 10 2001, 16:15:14) > [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 > Type "copyright", "credits" or "license" for more information. > >>> def spam(a, b): pass > ... > >>> def spam(a=3, b): pass > ... > SyntaxError: non-default argument follows default argument > >>> def spam(a=3, b=4): pass > ... > >>> def spam(a, (b,)): pass > ... > >>> def spam(a=3, (b,)): pass > ... > Segmentation fault (core dumped) > > Python 2.1a2 (#22, Feb 10 2001, 16:15:14) > [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 > Type "copyright", "credits" or "license" for more information. > >>> def spam(a=3, (b,)=(4,)): pass > ... > Segmentation fault (core dumped) > Try this: Index: compile.c =================================================================== RCS file: /cvsroot/python/python/dist/src/Python/compile.c,v retrieving revision 2.162 diff -c -r2.162 compile.c *** compile.c 2001/02/09 22:55:26 2.162 --- compile.c 2001/02/11 16:19:02 *************** *** 4629,4635 **** for (j = 0; j <= complex; j++) { c = CHILD(n, j); if (TYPE(c) == COMMA) ! c = CHILD(n, ++j); if (TYPE(CHILD(c, 0)) == LPAR) symtable_params_fplist(st, CHILD(c, 1)); } --- 4629,4637 ---- for (j = 0; j <= complex; j++) { c = CHILD(n, j); if (TYPE(c) == COMMA) ! c = CHILD(n, ++j); ! else if (TYPE(c) == EQUAL) ! c = CHILD(n, j += 3); if (TYPE(CHILD(c, 0)) == LPAR) symtable_params_fplist(st, CHILD(c, 1)); } Clearly there should be a test for this - where? test_extcall isn't really appropriate, but I can't think of a better place. Maybe it should be renamed to test_funcall.py and then a test for this can go in. Cheers, M. -- Some people say that a monkey would bang out the complete works of Shakespeare on a typewriter give an unlimited amount of time. In the meantime, what they would probably produce is a valid sendmail configuration file. -- Nicholas Petreley From thomas at xs4all.net Sun Feb 11 23:12:36 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Sun, 11 Feb 2001 23:12:36 +0100 Subject: [Python-Dev] dl module In-Reply-To: ; from akuchlin@mems-exchange.org on Fri, Feb 09, 2001 at 02:35:26PM -0500 References: Message-ID: <20010211231236.A4924@xs4all.nl> On Fri, Feb 09, 2001 at 02:35:26PM -0500, Andrew Kuchling wrote: > The dl module isn't automatically compiled by setup.py, and at least > one patch on SourceForge adds it. > Question: should it be compiled as a standard module? Using it can, > according to the comments, cause core dumps if you're not careful. -1. The dl module is not just crashy, it's also potentially dangerous. And the chance of the setup.py attempt to add it working on most platforms is low at best -- 'manual' dynamic linking is about as portable as threads ;-P -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! 
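A minimal sketch, not from the thread, of the sort of use the dl module permits; the library path is an assumption and varies by platform. dl.open() and .call() map almost directly onto dlopen()/dlsym() with no type checking at all, which is why a small slip can take the whole interpreter down:

import dl

libc = dl.open('/lib/libc.so.6')   # path is an assumption; platform-specific
print libc.call('getpid')          # fine: getpid() takes no arguments
# libc.call('strlen', 0)           # would pass an int where a char* is
                                   # expected: undefined behaviour, and quite
                                   # possibly the core dump Andrew mentions
libc.close()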
From tim.one at home.com Mon Feb 12 01:08:37 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 11 Feb 2001 19:08:37 -0500 Subject: [Python-Dev] Cool link Message-ID: Mentioned on c.l.py: http://cseng.aw.com/book/related/0,3833,0805311912+20,00.html This is the full text of "Advanced Programming Language Design", available online a chapter at a time in PDF format. Chapter 2 (Control Structures) has a nice intro to coroutines in Simula and iterators in CLU, including a funky implementation of the latter via C macros that assumes you can get away with longjmp()'ing "up the stack" (i.e., jumping back into a routine that has already been longjmp()'ed out of). Also an intro to continuations in Io: CLU iterators are truly elegant. They are clear and expressive. They provide a single, uniform way to program all loops. They can be implemented efficiently on a single stack. ... Io continuations provide a lot of food for thought. They spring from an attempt to gain utter simplicity in a programming language. They seem to be quite expressive, but they suffer from a lack of clarity. No matter how often I have stared at the examples of Io programming, I have always had to resort to traces to figure out what is happening. I think they are just too obscure to ever be valuable. Of course in the handful of other languages that support them, continuations are a wizard-level implementation hook for building nicer abstractions. In Io you can't even write a loop without manipulating continuations explicitly. takes-all-kinds-ly y'rs - tim From thomas at xs4all.net Mon Feb 12 01:42:52 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Mon, 12 Feb 2001 01:42:52 +0100 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src Makefile.pre.in,1.14,1.15 In-Reply-To: ; from jhylton@users.sourceforge.net on Fri, Feb 09, 2001 at 02:22:20PM -0800 References: Message-ID: <20010212014251.B4924@xs4all.nl> On Fri, Feb 09, 2001 at 02:22:20PM -0800, Jeremy Hylton wrote: > Log Message: > Relax the rules for using 'from ... import *' and exec in the presence > of nested functions. Either is allowed in a function if it contains > no defs or lambdas or the defs and lambdas it contains have no free > variables. If a function is itself nested and has free variables, > either is illegal. Wow. Thank you, Jeremy, I'm very happy with that! It's even better than I dared hope for, since it means *most* lambdas (the simple ones that don't reference globals) won't break functions using 'from .. import *', and the ones that do reference globals can be fixed by doing 'global_var=global_var' in the lambda argument list ( -- maybe we should put that in the docs ?) +1-on-suffering-fools-a-whole-release-before-punishing-them-for-it-ly y'rs, -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From greg at cosc.canterbury.ac.nz Mon Feb 12 02:05:54 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Mon, 12 Feb 2001 14:05:54 +1300 (NZDT) Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src Makefile.pre.in,1.14,1.15 In-Reply-To: <20010212014251.B4924@xs4all.nl> Message-ID: <200102120105.OAA05106@s454.cosc.canterbury.ac.nz> Jeremy Hylton: > Relax the rules for using 'from ... import *' and exec in the presence > of nested functions. Either is allowed in a function if it contains > no defs or lambdas or the defs and lambdas it contains have no free > variables. Seems to me the rules could be relaxed even further than that. 
Simply say that if an exec or import-* introduces any new names into an intermediate scope, then tough luck, they won't be visible to any nested functions. Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg at cosc.canterbury.ac.nz +--------------------------------------+ From tim.one at home.com Mon Feb 12 05:58:48 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 11 Feb 2001 23:58:48 -0500 Subject: [Python-Dev] PEPS, version control, release intervals In-Reply-To: <14976.5900.472169.467422@nem-srvr.stsci.edu> Message-ID: [Paul Barrett] > ... > I think people are moving to 2.0, but not at the rate of > keeping-up with the current release cycle. It varies by individual. > By the time 2/3 of them have installed 2.0, 2.1 will be released. No idea. Perhaps it's 60%, perhaps 90%, perhaps 10% -- we have no way to tell. FWIW, we almost never see a bug report against 1.5.2 anymore, and bug reports are about the only hard feedback we get. > So what's the point of installing 2.0, when a few weeks later, > you have to install 2.1? Overlooking that you don't have to install anything, the point also varies by individual, from new-feature envy to finally getting some 1.5.2 bug off your back. > The situation at our institution is a good indicator of this: 2.0 > becomes the default this week. Despite you challenging them with "what's the point?" ? Your organization's adoption schedule need not have anything in common with Python's release schedule, and it sounds like your organization moves slowly enough that you may want to skip 2.1 and wait for 2.2. Fine by me! Do you see harm in that? It's not like we're counting on upgrade fees to fund the next round of development. From guido at digicool.com Mon Feb 12 15:53:30 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 12 Feb 2001 09:53:30 -0500 Subject: [Python-Dev] Python 2.1 release schedule In-Reply-To: Your message of "Sun, 11 Feb 2001 04:20:37 EST." References: Message-ID: <200102121453.JAA06774@cj20424-a.reston1.va.home.com> > Other issues: > > + Make "global x" textually following any reference to x (in the > same scope) a compile-time error. Unclear whether > > def f(): > global x > global x > > is an error under that rule (i.e., does appearance in a global > stmt constitute "a reference"?). Ditto for > > def f(): > global x, x > > My opinion: declarations aren't references, and redundant > declarations don't hurt (so "no, not an error" to both). > > Change Ref Man accordingly (i.e., this plugs a hole in the > *language* defn, it's not just a question of implementation > accident du jour anymore). Agreed. > + Spew warning for "import *" and "exec" at function scope, or > change Ref Man to spell out when this is and isn't guaranteed > to work. Ah, yes! A warning! That would be great! > Guido appeared to agree with both of those. Can't recall when we discussed these, but yes, after some introspection I still appear to agree. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Mon Feb 12 15:59:11 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 12 Feb 2001 09:59:11 -0500 Subject: [Python-Dev] Removing the implicit str() call from printing API In-Reply-To: Your message of "Sat, 10 Feb 2001 23:43:37 +0100." 
<3A85C419.99EDCF14@lemburg.com> References: <3A83F7DA.A94AB88E@lemburg.com> <3A85377B.BC6EAB9B@lemburg.com> <010f01c09361$8ff82910$e46940d5@hagrid> <200102101403.JAA27043@cj20424-a.reston1.va.home.com> <3A854EA6.B8A8F7E2@lemburg.com> <200102101949.OAA28167@cj20424-a.reston1.va.home.com> <3A85C419.99EDCF14@lemburg.com> Message-ID: <200102121459.JAA06804@cj20424-a.reston1.va.home.com> > > > Ok, I'll postpone this for 2.2 then... don't want to disappoint > > > our BDFL ;-) > > > > The alternative would be for you to summarize why the proposed change > > can't possibly break code, this late in the 2.1 release game. :-) > > Well, the only code it could possibly break is code which > > 1. expects a unique string object as argument What does this mean? Code that checks whether its argument "is" a well-known string? > 2. uses the s# parser marker and is used with an Unicode object > containing non-ASCII characters > > Unfortunately, I'm not sure about how much code is out there > which assumes 1. cStringIO.c is one example and given its > heritage, there probably is a lot more in the Zope camp ;-) I still don't have a clear idea of what changes you propose, but I'm confident we'll get to that after 2.1 is release. :-) > > > Perhaps we should rethink the whole complicated printing machinery > > > in Python for 2.2 and come up with a more generic solution to the > > > problem of letting to-be-printed objects pass through to the > > > stream objects ?! > > > > Yes, please! I'd love it if you could write up a PEP that analyzes > > the issues and proposes a solution. (Without an analysis of the > > issues, there's not much point in proposing a solution, IMO.) > > Ok... on the plane to the conference, maybe. That's cool. It's amazing how much email a face-to-face meeting can be worth! > > > Note that this is needed in order to be able to redirect sys.stdout > > > to a codec which then converts Unicode to some external encoding. > > > Currently this is not possible due to the implicit str() call in > > > PyObject_Print(). > > > > Excellent. I agree that it's a shame that Unicode I/O is so hard at > > the moment. > > Since this is what we're after here, we might as well consider > possibilities to get the input side of things equally in line > with the codec idea, e.g. what would happen if .read() returns > a Unicode object ? That seems much less problematic, since there are no system APIs that need to be changed. Code that can deal with Unicode will be happy. Other code may break. Ideally, code that doesn't know how to deal with Unicode won't break if the Unicode-encoded input in fact only contains ASCII. --Guido van Rossum (home page: http://www.python.org/~guido/) From jeremy at alum.mit.edu Mon Feb 12 16:33:00 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Mon, 12 Feb 2001 10:33:00 -0500 (EST) Subject: [Python-Dev] Re: Fatal scoping error from the twilight zone In-Reply-To: References: Message-ID: <14984.556.138950.289857@w221.z064000254.bwi-md.dsl.cnc.net> I can reproduce the problem, but I think the only solution is to add a section to the ref manual explaining that only the letters a, e, f, i, m, n, q, u, v, and y are legal in that position. In other words, I'm still trying to figure out what is happening. 
Jeremy From jeremy at alum.mit.edu Mon Feb 12 17:01:59 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Mon, 12 Feb 2001 11:01:59 -0500 (EST) Subject: [Python-Dev] Re: Fatal scoping error from the twilight zone In-Reply-To: <14984.556.138950.289857@w221.z064000254.bwi-md.dsl.cnc.net> References: <14984.556.138950.289857@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <14984.2295.460544.871532@w221.z064000254.bwi-md.dsl.cnc.net> The bug was easy to fix after all. I figured the problem had to be related to dictionary traversal, because that was the only sensible explanation for why the specific letter mattered; different letters have different hash values, so the dictionary ends up storing names in a different order. The problem, fixed in rev. 2.163 of compile.c, was caused by iterating over a dictionary using PyDict_Next() and updating it at the same time. The updates are now deferred until the iteration is done. Jeremy From guido at digicool.com Mon Feb 12 17:12:41 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 12 Feb 2001 11:12:41 -0500 Subject: [Python-Dev] Re: Fatal scoping error from the twilight zone In-Reply-To: Your message of "Mon, 12 Feb 2001 11:01:59 EST." <14984.2295.460544.871532@w221.z064000254.bwi-md.dsl.cnc.net> References: <14984.556.138950.289857@w221.z064000254.bwi-md.dsl.cnc.net> <14984.2295.460544.871532@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <200102121612.LAA07332@cj20424-a.reston1.va.home.com> > The problem, fixed in rev. 2.163 of compile.c, was caused by iterating > over a dictionary using PyDict_Next() and updating it at the same > time. The updates are now deferred until the iteration is done. Ha! This is excellent anecdotal evidence that "for key in dict", if we ever introduce it, should disallow updates of the dict while in the loop! --Guido van Rossum (home page: http://www.python.org/~guido/) From akuchlin at cnri.reston.va.us Mon Feb 12 17:28:08 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Mon, 12 Feb 2001 11:28:08 -0500 Subject: [Python-Dev] Cool link In-Reply-To: ; from tim.one@home.com on Sun, Feb 11, 2001 at 07:08:37PM -0500 References: Message-ID: <20010212112808.C3637@thrak.cnri.reston.va.us> On Sun, Feb 11, 2001 at 07:08:37PM -0500, Tim Peters wrote: >are a wizard-level implementation hook for building nicer abstractions. In >Io you can't even write a loop without manipulating continuations >explicitly. Note that, as Finkel mentions somewhere near the end of the book, Io was never actually implemented. (The linked list example is certainly head-exploding, I must say...) --amk From gvwilson at ca.baltimore.com Mon Feb 12 17:46:18 2001 From: gvwilson at ca.baltimore.com (Greg Wilson) Date: Mon, 12 Feb 2001 11:46:18 -0500 Subject: [Python-Dev] Set and Iterator BOFs Message-ID: <000901c09513$52ade820$770a0a0a@nevex.com> Barbara Fuller at Foretec has set up two mailing lists: Iterator-BOF at python9.org (for March 6) Set-BOF at python9.org (for March 7) for discussing admin related to these BOFs. If you are planning to attend, please send mail to the list, so that she can plan room allocation, make sure we get seated first for lunch, etc. Greg From guido at digicool.com Mon Feb 12 17:57:35 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 12 Feb 2001 11:57:35 -0500 Subject: [Python-Dev] Set and Iterator BOFs In-Reply-To: Your message of "Mon, 12 Feb 2001 11:46:18 EST." 
<000901c09513$52ade820$770a0a0a@nevex.com> References: <000901c09513$52ade820$770a0a0a@nevex.com> Message-ID: <200102121657.LAA07606@cj20424-a.reston1.va.home.com> > Barbara Fuller at Foretec has set up two mailing lists: > > Iterator-BOF at python9.org (for March 6) > Set-BOF at python9.org (for March 7) > > for discussing admin related to these BOFs. If you are > planning to attend, please send mail to the list, so that > she can plan room allocation, make sure we get seated first > for lunch, etc. Assuming these aren't mailman lists, how does one subscribe? Or are these just aliases that go to a fixed recipient (e.g. you or Barbara)? --Guido van Rossum (home page: http://www.python.org/~guido/) From gvwilson at ca.baltimore.com Mon Feb 12 18:14:02 2001 From: gvwilson at ca.baltimore.com (Greg Wilson) Date: Mon, 12 Feb 2001 12:14:02 -0500 Subject: [Python-Dev] re: cool link In-Reply-To: Message-ID: <000b01c09517$3283f8b0$770a0a0a@nevex.com> > From: "Tim Peters" > > Mentioned on c.l.py: > > http://cseng.aw.com/book/related/0,3833,0805311912+20,00.html > > This is the full text of "Advanced Programming Language > Design", available online a chapter at a time in PDF format. Greg Wilson: From gvwilson at ca.baltimore.com Mon Feb 12 18:17:07 2001 From: gvwilson at ca.baltimore.com (Greg Wilson) Date: Mon, 12 Feb 2001 12:17:07 -0500 Subject: [Python-Dev] re: Set and Iterator BOFs In-Reply-To: Message-ID: <000c01c09517$a0f8f2f0$770a0a0a@nevex.com> > > Greg Wilson > > Barbara Fuller at Foretec has set up two mailing lists: > > > > Iterator-BOF at python9.org (for March 6) > > Set-BOF at python9.org (for March 7) > > > > for discussing admin related to these BOFs. > Guido van Rossum: > Assuming these aren't mailman lists, how does one subscribe? Or are > these just aliases that go to a fixed recipient (e.g. you or Barbara)? The latter --- these are for Barbara's convenience, so that she can get a feel for how many people will need to be hustled through lunch. Thanks, Greg p.s. I have set up http://groups.yahoo.com/group/python-iter and http://groups.yahoo.com/group/python-sets; Guido, would you prefer discussion of sets and iterators to be moved to these lists, or to stay on python-dev? From guido at digicool.com Mon Feb 12 18:24:32 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 12 Feb 2001 12:24:32 -0500 Subject: [Python-Dev] re: Set and Iterator BOFs In-Reply-To: Your message of "Mon, 12 Feb 2001 12:17:07 EST." <000c01c09517$a0f8f2f0$770a0a0a@nevex.com> References: <000c01c09517$a0f8f2f0$770a0a0a@nevex.com> Message-ID: <200102121724.MAA07893@cj20424-a.reston1.va.home.com> > p.s. I have set up http://groups.yahoo.com/group/python-iter and > http://groups.yahoo.com/group/python-sets; Guido, would you prefer > discussion of sets and iterators to be moved to these lists, or to > stay on python-dev? Let's move these to egroups for now. --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Mon Feb 12 22:01:07 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 12 Feb 2001 16:01:07 -0500 Subject: [Python-Dev] Python 2.1 release schedule In-Reply-To: <200102121453.JAA06774@cj20424-a.reston1.va.home.com> Message-ID: [Guido, on making "global x" an error sometimes, and warning on "import * / exec" sometimes ] > Can't recall when we discussed these, but yes, after some > introspection I still appear to agree. Heh heh. 
Herewith your entire half of the discussion : From: guido at cj20424-a.reston1.va.home.com Sent: Friday, February 09, 2001 3:12 PM To: Tim Peters Cc: Jeremy Hylton Subject: Re: [Python-Dev] RE: global, was Re: None assigment Agreed. --Guido van Rossum (home page: http://www.python.org/~guido/) This probably wasn't enough detail for Jeremy to act on, but was enough for me to complete channeling you . The tail end of the msg to which you replied was: +1 on making this ["global x" sometimes] an error now. And if 2.1 is relaxed to again allow "import *" at function scope in some cases, either that should at least raise a warning, or the Ref Man should be changed to say that's a defined use of the language. not-often-you-see-5-quoted-lines-each-begin-with-a-2-character- thing-ly y'rs - tim From akuchlin at mems-exchange.org Mon Feb 12 22:26:42 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Mon, 12 Feb 2001 16:26:42 -0500 Subject: [Python-Dev] Unit testing (again) Message-ID: I was pleased to see that the 2.1 release schedule lists "unit testing" as one of the open issues. How is this going to be decided? Voting? BDFL fiat? --amk From guido at digicool.com Mon Feb 12 22:37:00 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 12 Feb 2001 16:37:00 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: Your message of "Mon, 12 Feb 2001 16:26:42 EST." References: Message-ID: <200102122137.QAA09818@cj20424-a.reston1.va.home.com> > I was pleased to see that the 2.1 release schedule lists "unit > testing" as one of the open issues. How is this going to be decided? > Voting? BDFL fiat? BDFL fiat: most likely we'll be integrating PyUnit, whose author thinks this is a great idea. We'll be extending it to reduce the amount of boilerplate you have to type for new tests, and to optionally support the style of testing that Quixote's unit test package favors. This style (where the tests are given as string literals) seems to be really impopular with most people I've spoken to, but we're going to support it anyhow because there may also be cases where it's appropriate. I'm not sure however how much we'll get done for 2.1; maybe we'll just integrate the current PyUnit CVS tree. --Guido van Rossum (home page: http://www.python.org/~guido/) From tismer at tismer.com Mon Feb 12 22:48:58 2001 From: tismer at tismer.com (Christian Tismer) Date: Mon, 12 Feb 2001 22:48:58 +0100 Subject: [Python-Dev] Cool link References: Message-ID: <3A885A4A.E1AB42FF@tismer.com> Tim Peters wrote: > > Mentioned on c.l.py: > > http://cseng.aw.com/book/related/0,3833,0805311912+20,00.html > > This is the full text of "Advanced Programming Language Design", available > online a chapter at a time in PDF format. > > Chapter 2 (Control Structures) has a nice intro to coroutines in Simula and > iterators in CLU, including a funky implementation of the latter via C > macros that assumes you can get away with longjmp()'ing "up the stack" > (i.e., jumping back into a routine that has already been longjmp()'ed out > of). Also an intro to continuations in Io: > > CLU iterators are truly elegant. They are clear and expressive. > They provide a single, uniform way to program all loops. They > can be implemented efficiently on a single stack. > ... > Io continuations provide a lot of food for thought. They spring > from an attempt to gain utter simplicity in a programming > language. They seem to be quite expressive, but they suffer > from a lack of clarity. 
No matter how often I have stared at > the examples of Io programming, I have always had to resort to > traces to figure out what is happening. I think they are just > too obscure to ever be valuable. Yes, this is a nice and readable text. But, the latter paragraph shows that the author is able to spell continuations without understanding them. Well, probably he does understand them, but his readers don't. At least this paragraph shows that he has an idea: """ Given that continuations are very powerful, why are they not a part of every language? Why do they not replace the conventional mechanisms of control structure? First, continuations are extremely confusing. The examples given in this section are almost impossible to understand without tracing, and even then, the general flow of control is lost in the details of procedure calls and parameter passing. With experience, programmers might become comfortable with them; however, continuations are so similar to gotos (with the added complexity of parameters) that they make it difficult to structure programs. """ I could understand the examples without tracing, and they were by no means confusing, but very clear. I believe the above message comes from a stack-educated brain (as we almost are) which is about to get the point, but still is not there. > Of course in the handful of other languages that support them, continuations > are a wizard-level implementation hook for building nicer abstractions. In > Io you can't even write a loop without manipulating continuations > explicitly. What is your message? Do you want me to react? Well, here is the expected reaction, nothing new. I already have given up pushing continuations for Python; not because continuations are wrong, but too powerful for most needs and too simple (read "obscure") for most programmers. I will provide native implementations of coroutines & co in one or two months (sponsored work), and continuation support will be conditionally compiled into Stackless. I still regard them useful for education (Raphael Finkel would argue differently after playing with Python continuations), but their support should not go into the Python standard. I'm currently splitting the compromises in ceval.c by being continuation related or not. My claim that this makes up 10 percent of the code or less seems to hold. chewing-on-the-red-herring-ly y'rs - chris -- Christian Tismer :^) Mission Impossible 5oftware : Have a break! Take a ride on Python's Kaunstr. 26 : *Starship* http://starship.python.net 14163 Berlin : PGP key -> http://wwwkeys.pgp.net PGP Fingerprint E182 71C7 1A9D 66E9 9D15 D3CC D4D7 93E2 1FAE F6DF where do you want to jump today? http://www.stackless.com From Jason.Tishler at dothill.com Mon Feb 12 23:08:39 2001 From: Jason.Tishler at dothill.com (Jason Tishler) Date: Mon, 12 Feb 2001 17:08:39 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: ; from tim.one@home.com on Mon, Feb 05, 2001 at 04:01:49PM -0500 References: <20010205122721.J812@dothill.com> Message-ID: <20010212170839.F281@dothill.com> [Sorry for letting this thread hang, but I'm back from paternity leave so I will be more responsive now. Well, at least between normal business hours that is.] On Mon, Feb 05, 2001 at 04:01:49PM -0500, Tim Peters wrote: > Basic sanity requires that Python do the same > thing on *all* case-insensitive case-preserving filesystems, to the fullest > extent possible. Python's DOS/Windows behavior has priority by a decade.
> I'm deadly opposed to making a special wart for Cygwin (or the Mac), but am > in favor of changing it on Windows too. May be if we can agree on how import should behave, then we will have a better chance of determining the best way to implement it sans warts? So, along these lines I propose that import from a file behave the same on both case-sensitive and case-insensitive/case-preserving filesystems. This will help to maximize portability between platforms like UNIX, Windows, and Mac. Unfortunately, something like the PYTHONCASEOK caveat still needs to be preserved for case-destroying filesystems. Any feedback is appreciated -- I'm just trying to help get closure on this issue by Beta 1. Thanks, Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler at dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From akuchlin at cnri.reston.va.us Mon Feb 12 23:18:00 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Mon, 12 Feb 2001 17:18:00 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <200102122137.QAA09818@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Feb 12, 2001 at 04:37:00PM -0500 References: <200102122137.QAA09818@cj20424-a.reston1.va.home.com> Message-ID: <20010212171800.D3900@thrak.cnri.reston.va.us> On Mon, Feb 12, 2001 at 04:37:00PM -0500, Guido van Rossum wrote: >I'm not sure however how much we'll get done for 2.1; maybe we'll just >integrate the current PyUnit CVS tree. I'd really like to have unit testing in 2.1 that I can actually use. PyUnit as it stands is clunky enough that I'd still use the Quixote framework in my code; the advantage of being included with Python would not overcome its disadvantages for me. Have you got a list of desired changes? And should the changes be discussed on python-dev or the PyUnit list? --amk From guido at digicool.com Mon Feb 12 23:21:14 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 12 Feb 2001 17:21:14 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: Your message of "Mon, 12 Feb 2001 17:18:00 EST." <20010212171800.D3900@thrak.cnri.reston.va.us> References: <200102122137.QAA09818@cj20424-a.reston1.va.home.com> <20010212171800.D3900@thrak.cnri.reston.va.us> Message-ID: <200102122221.RAA11205@cj20424-a.reston1.va.home.com> > I'd really like to have unit testing in 2.1 that I can actually use. > PyUnit as it stands is clunky enough that I'd still use the Quixote > framework in my code; the advantage of being included with Python > would not overcome its disadvantages for me. Have you got a list of > desired changes? And should the changes be discussed on python-dev or > the PyUnit list? I'm just reporting what I've heard on our group meetings. Fred Drake and Jeremy Hylton are in charge of getting this done. You can catch their ear on python-dev; I'm not sure about the PyUnit list. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Mon Feb 12 23:23:21 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 12 Feb 2001 17:23:21 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: Your message of "Mon, 12 Feb 2001 17:08:39 EST." 
<20010212170839.F281@dothill.com> References: <20010205122721.J812@dothill.com> <20010212170839.F281@dothill.com> Message-ID: <200102122223.RAA11224@cj20424-a.reston1.va.home.com> > [Sorry for letting this thread hang, but I'm back from paternity leave > so I will be more responsive now. Well, at least between normal business > hours that is.] > > On Mon, Feb 05, 2001 at 04:01:49PM -0500, Tim Peters wrote: > > Basic sanity requires that Python do the same > > thing on *all* case-insensitive case-preserving filesystems, to the fullest > > extent possible. Python's DOS/Windows behavior has priority by a decade. > > I'm deadly opposed to making a special wart for Cygwin (or the Mac), but am > > in favor of changing it on Windows too. > > May be if we can agree on how import should behave, then we will have > a better chance of determining the best way to implement it sans warts? > So, along these lines I propose that import from a file behave the same > on both case-sensitive and case-insensitive/case-preserving filesystems. > This will help to maximize portability between platforms like UNIX, > Windows, and Mac. Unfortunately, something like the PYTHONCASEOK > caveat still needs to be preserved for case-destroying filesystems. > > Any feedback is appreciated -- I'm just trying to help get closure on > this issue by Beta 1. Tim has convinced me that the proper rules are simple: - If PYTHONCASEOK is set, use the first file found with a case-insensitive match. - If PYTHONCASEOK is not set, and the file system is case-preserving, ignore (rather than bail out at) files that don't have the proper case. Tim is in charge of cleaning up the code, but he'll need help for the Cygwin and MacOSX parts. --Guido van Rossum (home page: http://www.python.org/~guido/) From jeremy at alum.mit.edu Mon Feb 12 22:59:06 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Mon, 12 Feb 2001 16:59:06 -0500 (EST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <200102122221.RAA11205@cj20424-a.reston1.va.home.com> References: <200102122137.QAA09818@cj20424-a.reston1.va.home.com> <20010212171800.D3900@thrak.cnri.reston.va.us> <200102122221.RAA11205@cj20424-a.reston1.va.home.com> Message-ID: <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "GvR" == Guido van Rossum writes: [Andrew writes:] >> I'd really like to have unit testing in 2.1 that I can actually >> use. PyUnit as it stands is clunky enough that I'd still use the >> Quixote framework in my code; the advantage of being included >> with Python would not overcome its disadvantages for me. Have >> you got a list of desired changes? And should the changes be >> discussed on python-dev or the PyUnit list? GvR> I'm just reporting what I've heard on our group meetings. Fred GvR> Drake and Jeremy Hylton are in charge of getting this done. GvR> You can catch their ear on python-dev; I'm not sure about the GvR> PyUnit list. I'm happy to discuss on either venue, or to hash it in private email. What specific features do you need? Perhaps Steve will be interested in including them in PyUnit. 
Jeremy From akuchlin at cnri.reston.va.us Tue Feb 13 00:10:10 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Mon, 12 Feb 2001 18:10:10 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Mon, Feb 12, 2001 at 04:59:06PM -0500 References: <200102122137.QAA09818@cj20424-a.reston1.va.home.com> <20010212171800.D3900@thrak.cnri.reston.va.us> <200102122221.RAA11205@cj20424-a.reston1.va.home.com> <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010212181010.A4267@thrak.cnri.reston.va.us> On Mon, Feb 12, 2001 at 04:59:06PM -0500, Jeremy Hylton wrote: >I'm happy to discuss on either venue, or to hash it in private email. >What specific features do you need? Perhaps Steve will be interested >in including them in PyUnit. * Useful shorthands for common asserts (testing that two sequences are the same ignoring order, for example) * A way to write test cases that doesn't bring the test method to a halt if something raises an unexpected exception * Coverage support (though that would also entail Skip's coverage code getting into 2.1) --amk From jeremy at alum.mit.edu Tue Feb 13 00:16:19 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Mon, 12 Feb 2001 18:16:19 -0500 (EST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <20010212181010.A4267@thrak.cnri.reston.va.us> References: <200102122137.QAA09818@cj20424-a.reston1.va.home.com> <20010212171800.D3900@thrak.cnri.reston.va.us> <200102122221.RAA11205@cj20424-a.reston1.va.home.com> <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net> <20010212181010.A4267@thrak.cnri.reston.va.us> Message-ID: <14984.28355.75830.330790@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "AMK" == Andrew Kuchling writes: AMK> On Mon, Feb 12, 2001 at 04:59:06PM -0500, Jeremy Hylton wrote: >> I'm happy to discuss on either venue, or to hash it in private >> email. What specific features do you need? Perhaps Steve will >> be interested in including them in PyUnit. AMK> * Useful shorthands for common asserts (testing that two AMK> sequences are the same ignoring order, for example) We can write a collection of helper functions for this, right? self.verify(sequenceElementsThatSame(l1, l2)) AMK> * A way to write test cases that doesn't bring the test method AMK> to a halt if something raises an unexpected exception I'm not sure how to achieve this or why you would want the test to continue. I know that Quixote uses test cases in strings, but it's the thing I like the least about Quixote unittest. Can you think of an alternate mechanism? Maybe I'd be less opposed if I could understand why it's desirable to continue executing a method where something has already failed unexpectedly. After the first exception, something is broken and needs to be fixed, regardless of whether subsequent lines of code work. AMK> * Coverage support (though that would also entail Skip's AMK> coverage code getting into 2.1) Shouldn't be hard. Skip's coverage code was in 2.0; we might need to move it from Tools/script to the library, though. Jeremy From tim.one at home.com Tue Feb 13 01:14:51 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 12 Feb 2001 19:14:51 -0500 Subject: [Python-Dev] Cool link In-Reply-To: <3A885A4A.E1AB42FF@tismer.com> Message-ID: [Christian Tismer] > ... > What is your message? Do you want me to react? I had no msg other than to share a cool link I thought people here would find interesting. 
While Greg Wilson, e.g., complained about the C macro implementation of CLU iterators in his review, that's exactly the kind of thing that should be *interesting* to Python-Dev'ers: a long and gentle explanation of an actual implementation. I expect that most people here still have no clear idea how generators (let alone continuations) can be implemented, or why they'd be useful. Here's a function to compute the number of distinct unlabelled binary trees with n nodes (these are the so-called Catalan numbers -- the book didn't mention that): cache = {0: 1} def count(n): val = cache.get(n, 0) if val: return val for leftsize in range(n): val += count(leftsize) * count(n-1 - leftsize) cache[n] = val return val Here's one to generate all the distinct unlabelled binary trees with n nodes: def genbin(n): if n == 0: return [()] result = [] for leftsize in range(n): for left in genbin(leftsize): for right in genbin(n-1 - leftsize): result.append((left, right)) return result For even rather small values of n, genbin(n) builds lists of impractical size. Trying to build a return-one-at-a-time iterator form of genbin() today is surprisingly difficult. In CLU or Icon, you just throw away the "result = []" and "return result" lines, and replace result.append with "suspend" (Icon) or "yield" (CLU). Exactly the same kind of algorithm is needed to generate all ways of parenthesizing an n-term expression. I did that in ABC once, in a successful attempt to prove via exhaustion that raise-complex-to-integer-power in the presence of signed zeroes is ill-defined under IEEE-754 arithmetic rules. While nobody here cares about that, the 754 committee took it seriously indeed. For me, I'm still just trying to get Python to address all the things I found unbearable in ABC <0.7 wink>. so-if-there's-a-msg-here-it's-unique-to-me-ly y'rs - tim From michel at digicool.com Tue Feb 13 03:06:25 2001 From: michel at digicool.com (Michel Pelletier) Date: Mon, 12 Feb 2001 18:06:25 -0800 (PST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <20010212181010.A4267@thrak.cnri.reston.va.us> Message-ID: On Mon, 12 Feb 2001, Andrew Kuchling wrote: > * A way to write test cases that doesn't bring the test method to a halt if > something raises an unexpected exception I'm not sure what you mean by this, but Jim F. recently sent this email around internally: """ Unit tests are cool. One problem is that after you find a problem, it's hard to debug it, because unittest catches the exceptions. I added debug methods to TestCase and TestSuite so that you can run your tests under a debugger. When you are ready to debug a test failure, just call debug() on your test suite or case under debugger control. I checked this change into our CVS and send the auther of PyUnit a message. Jim """ I don't think it adressed your comment, but it is an interesting related feature. -Michel From tim.one at home.com Tue Feb 13 03:05:51 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 12 Feb 2001 21:05:51 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <200102122221.RAA11205@cj20424-a.reston1.va.home.com> Message-ID: Note that doctest.py is part of the 2.1 std library. If you've never used it, pretend I didn't tell you that, and look at the new std library module difflib.py. Would you even guess there *are* unit tests in there? 
Here's the full text of the new std test test_difflib.py: import doctest, difflib doctest.testmod(difflib, verbose=1) I will immodestly claim that if doctest is sufficient for your testing purposes, you're never going to find anything easier or faster or more natural to use (and, yes, if an unexpected exception is raised, it doesn't stop the rest of the tests from running -- it's in the very nature of "unit tests" that an error in one unit should not prevent other unit tests from running). practicing-for-a-marketing-career-ly y'rs - tim From Jason.Tishler at dothill.com Tue Feb 13 04:36:38 2001 From: Jason.Tishler at dothill.com (Jason Tishler) Date: Mon, 12 Feb 2001 22:36:38 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: <200102122223.RAA11224@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Feb 12, 2001 at 05:23:21PM -0500 References: <20010205122721.J812@dothill.com> <20010212170839.F281@dothill.com> <200102122223.RAA11224@cj20424-a.reston1.va.home.com> Message-ID: <20010212223638.A228@dothill.com> Tim, On Mon, Feb 12, 2001 at 05:23:21PM -0500, Guido van Rossum wrote: > Tim is in charge of cleaning up the code, but he'll need help for the > Cygwin and MacOSX parts. I'm willing to help develop, test, etc. the Cygwin stuff. Just let me know how I can assist you. Thanks, Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler at dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From akuchlin at cnri.reston.va.us Tue Feb 13 04:52:23 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Mon, 12 Feb 2001 22:52:23 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <14984.28355.75830.330790@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Mon, Feb 12, 2001 at 06:16:19PM -0500 References: <200102122137.QAA09818@cj20424-a.reston1.va.home.com> <20010212171800.D3900@thrak.cnri.reston.va.us> <200102122221.RAA11205@cj20424-a.reston1.va.home.com> <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net> <20010212181010.A4267@thrak.cnri.reston.va.us> <14984.28355.75830.330790@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010212225223.B21640@newcnri.cnri.reston.va.us> On Mon, Feb 12, 2001 at 06:16:19PM -0500, Jeremy Hylton wrote: >We can write a collection of helper functions for this, right? > self.verify(sequenceElementsThatSame(l1, l2)) Pretty much; nothing too difficult. >Maybe I'd be less opposed if I could understand why it's desirable to >continue executing a method where something has already failed >unexpectedly. After the first exception, something is broken and In this style of unit test, you have setup() and shutdown() methods that create and destroy the test objects afresh for each test case, so cases aren't big long skeins of assertions that will all break given a single failure. Instead they're more like 1) call a method that changes an object's state, 2) call accessors or get attributes to check invariants are what you expect. It can be useful to know that .get_parameter_value() raises an exception while .get_parameter_type() doesn't, or whatever. 
--amk From chrism at digicool.com Tue Feb 13 06:29:01 2001 From: chrism at digicool.com (Chris McDonough) Date: Tue, 13 Feb 2001 00:29:01 -0500 Subject: [Python-Dev] Unit testing (again) References: <200102122137.QAA09818@cj20424-a.reston1.va.home.com> <20010212171800.D3900@thrak.cnri.reston.va.us> <200102122221.RAA11205@cj20424-a.reston1.va.home.com> <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net> <20010212181010.A4267@thrak.cnri.reston.va.us> <14984.28355.75830.330790@w221.z064000254.bwi-md.dsl.cnc.net> <20010212225223.B21640@newcnri.cnri.reston.va.us> Message-ID: <025e01c0957d$e9c66d80$0e01a8c0@kurtz> Andrew, Here's a sample of PyUnit stuff that I think illustrates what you're asking for... from unittest import TestCase, makeSuite, TextTestRunner class Test(TestCase): def setUp(self): self.t = {2:2} def tearDown(self): del self.t def testGetItemFails(self): self.assertRaises(KeyError, self._getitemfail) def _getitemfail(self): return self.t[1] def testGetItemSucceeds(self): assert self.t[2] == 2 def main(): suite = makeSuite(Test, 'test') runner = TextTestRunner() runner.run(suite) if __name__ == '__main__': main() Execution happens like this: call setUp() call testGetItemFails() print test results call tearDown() call setUp() call testGetItemSucceeds() print test results call tearDown() end Isn't this almost exactly what you want? Or am I completely missing the point? ----- Original Message ----- From: "Andrew Kuchling" To: Sent: Monday, February 12, 2001 10:52 PM Subject: Re: [Python-Dev] Unit testing (again) > On Mon, Feb 12, 2001 at 06:16:19PM -0500, Jeremy Hylton wrote: > >We can write a collection of helper functions for this, right? > > self.verify(sequenceElementsThatSame(l1, l2)) > > Pretty much; nothing too difficult. > > >Maybe I'd be less opposed if I could understand why it's desirable to > >continue executing a method where something has already failed > >unexpectedly. After the first exception, something is broken and > > In this style of unit test, you have setup() and shutdown() methods that > create and destroy the test objects afresh for each test case, so cases > aren't big long skeins of assertions that will all break given a single > failure. Instead they're more like 1) call a method that changes an > object's state, 2) call accessors or get attributes to check invariants are > what you expect. It can be useful to know that .get_parameter_value() > raises an exception while .get_parameter_type() doesn't, or whatever. > > --amk > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > From tim.one at home.com Tue Feb 13 06:34:23 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 13 Feb 2001 00:34:23 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: <20010212223638.A228@dothill.com> Message-ID: [Jason Tishler] > I'm willing to help develop, test, etc. the Cygwin stuff. Just let me > know how I can assist you. Jason, doesn't the current CVS Python already do what you want? I thought that was the case, due to the HAVE_DIRENT_H #ifdef'ery Steven introduced. If not, scream at me. My intent is to get rid of the HAVE_DIRENT_H #ifdef *test*, but not the code therein, and add new versions of MatchFilename that work for systems (like regular old Windows) that don't support opendir() natively. I didn't think Cygwin needed that -- scream if that's wrong. 
However, even if you are happy with that (& I won't hurt it), sooner or later you're going to try accessing a case-destroying network filesystem from Cygwin, so I believe you need more code to honor PYTHONCASEOK too (it's the only chance anyone has in the face of a case-destroying system). Luckily, with a new child in the house, you have plenty of time to think about this, since you won't be sleeping again for about 3 years anyway . From pf at artcom-gmbh.de Tue Feb 13 08:17:03 2001 From: pf at artcom-gmbh.de (Peter Funk) Date: Tue, 13 Feb 2001 08:17:03 +0100 (MET) Subject: doctest and Python 2.1 (was RE: [Python-Dev] Unit testing (again)) In-Reply-To: from Tim Peters at "Feb 12, 2001 9: 5:51 pm" Message-ID: Hi, Tim Peters: > Note that doctest.py is part of the 2.1 std library. If you've never used [...] > I will immodestly claim that if doctest is sufficient for your testing > purposes, you're never going to find anything easier or faster or more > natural to use (and, yes, if an unexpected exception is raised, it doesn't > stop the rest of the tests from running -- it's in the very nature of "unit > tests" that an error in one unit should not prevent other unit tests from > running). > > practicing-for-a-marketing-career-ly y'rs - tim [a satisfied customer reports:] I like doctest very much. I'm using it for our company projects a lot. This is a very valuable tool. However Pings latest changes, which turned 'foobar\012' into 'foobar\n' and '\377\376\345' into '\xff\xfe\xe5' has broken some of the doctests in our software. Since we have to keep our code compatible with Python 1.5.2 for at least one, two or may be three more years, it isn't obvious to me how to fix this. I've spend some thoughts about a patch to doctest fooling the string printing output back to the 1.5.2 behaviour, but didn't get around to it until now. :-( Regards, Peter -- Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260 office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen) From fredrik at effbot.org Tue Feb 13 09:17:58 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Tue, 13 Feb 2001 09:17:58 +0100 Subject: [Python-Dev] Unit testing (again) References: <200102122137.QAA09818@cj20424-a.reston1.va.home.com><20010212171800.D3900@thrak.cnri.reston.va.us><200102122221.RAA11205@cj20424-a.reston1.va.home.com><14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net><20010212181010.A4267@thrak.cnri.reston.va.us> <14984.28355.75830.330790@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <01c201c09595$7bc09be0$e46940d5@hagrid> Jeremy wrote: > I know that Quixote uses test cases in strings, but it's the thing I > like the least about Quixote unittest like whitespace indentation, it's done that way for a reason. > I'm not sure how to achieve this or why you would want the test to > continue. same reason you want your compiler to report more than just the first error -- so you can see patterns in the test script's behaviour, so you can fix more than one bug at a time, or fix the bugs in an order that suits you and not the framework, etc. (for some of our components, we're using a framework that can continue to run the test even if the tested program dumps core. trust me, that has saved us a lot of time...) > After the first exception, something is broken and needs to be > fixed, regardless of whether subsequent lines of code work. jeremy, that's the kind of comment I would have expected from a manager, not from a programmer who has done lots of testing. 
Cheers /F From stephen_purcell at yahoo.com Tue Feb 13 09:26:17 2001 From: stephen_purcell at yahoo.com (Steve Purcell) Date: Tue, 13 Feb 2001 09:26:17 +0100 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Mon, Feb 12, 2001 at 04:59:06PM -0500 References: <200102122137.QAA09818@cj20424-a.reston1.va.home.com> <20010212171800.D3900@thrak.cnri.reston.va.us> <200102122221.RAA11205@cj20424-a.reston1.va.home.com> <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010213092617.B5558@freedom.puma-ag.com> Jeremy Hylton wrote: > >>>>> "GvR" == Guido van Rossum writes: > > [Andrew writes:] > >> I'd really like to have unit testing in 2.1 that I can actually > >> use. PyUnit as it stands is clunky enough that I'd still use the > >> Quixote framework in my code; the advantage of being included > >> with Python would not overcome its disadvantages for me. Have > >> you got a list of desired changes? And should the changes be > >> discussed on python-dev or the PyUnit list? > > GvR> I'm just reporting what I've heard on our group meetings. Fred > GvR> Drake and Jeremy Hylton are in charge of getting this done. > GvR> You can catch their ear on python-dev; I'm not sure about the > GvR> PyUnit list. > > I'm happy to discuss on either venue, or to hash it in private email. > What specific features do you need? Perhaps Steve will be interested > in including them in PyUnit. Fine by private e-mail, though it would be nice if some of the discussions are seen by the PyUnit list because it's a representative community of regular users who probably have a good idea of what makes sense for them. If somebody would like to suggest changes, I can look into how they might get done. Also, I'd love to see what I can do to allay AMK's 'clunkiness' complaints! :-) Best wishes, -Steve -- Steve Purcell, Pythangelist "Life must be simple if *I* can do it" -- me From fredrik at effbot.org Tue Feb 13 10:35:30 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Tue, 13 Feb 2001 10:35:30 +0100 Subject: [Python-Dev] Unit testing (again) References: Message-ID: <002301c095a0$4fe5cc60$e46940d5@hagrid> tim wrote: > I will immodestly claim that if doctest is sufficient for your testing > purposes, you're never going to find anything easier or faster or more > natural to use you know, just having taken another look at doctest and the unit test options, I'm tempted to agree. except for the "if sufficient" part, that is -- given that you can easily run doctest on a test harness instead of the original module, it's *always* sufficient. (cannot allow tim to be 100% correct every time ;-) Cheers /F From guido at digicool.com Tue Feb 13 14:55:29 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 13 Feb 2001 08:55:29 -0500 Subject: doctest and Python 2.1 (was RE: [Python-Dev] Unit testing (again)) In-Reply-To: Your message of "Tue, 13 Feb 2001 08:17:03 +0100." References: Message-ID: <200102131355.IAA14403@cj20424-a.reston1.va.home.com> > [a satisfied customer reports:] > I like doctest very much. I'm using it for our company projects a lot. > This is a very valuable tool. > > However Pings latest changes, which turned 'foobar\012' into 'foobar\n' > and '\377\376\345' into '\xff\xfe\xe5' has broken some of the doctests > in our software. > > Since we have to keep our code compatible with Python 1.5.2 for at > least one, two or may be three more years, it isn't obvious to me > how to fix this. 
This is a general problem with doctest, and a general solution exists. It's the same when you have a function that returns a dictionary: you can't include the dictionary in the output, because the key order isn't guaranteed. So, instead of writing your example like this: >>> foo() {"Hermione": "hippogryph", "Harry": "broomstick"} >>> you write it like this: >>> foo() == {"Hermione": "hippogryph", "Harry": "broomstick"} 1 >>> I'll leave it as an exercise to the reader to apply this to string literals. --Guido van Rossum (home page: http://www.python.org/~guido/) From jeremy at alum.mit.edu Tue Feb 13 04:15:30 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Mon, 12 Feb 2001 22:15:30 -0500 (EST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <01c201c09595$7bc09be0$e46940d5@hagrid> References: <200102122137.QAA09818@cj20424-a.reston1.va.home.com> <20010212171800.D3900@thrak.cnri.reston.va.us> <200102122221.RAA11205@cj20424-a.reston1.va.home.com> <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net> <20010212181010.A4267@thrak.cnri.reston.va.us> <14984.28355.75830.330790@w221.z064000254.bwi-md.dsl.cnc.net> <01c201c09595$7bc09be0$e46940d5@hagrid> Message-ID: <14984.42706.688272.22773@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "FL" == Fredrik Lundh writes: FL> Jeremy wrote: >> I know that Quixote uses test cases in strings, but it's the >> thing I like the least about Quixote unittest FL> like whitespace indentation, it's done that way for a reason. Whitespace indentation is natural and makes code easier to read. Putting little snippets of Python code in string literals passed to exec has the opposite effect. doctest is a nice middle ground, because the code snippets are in a natural setting -- an interactive interpreter setting. >> I'm not sure how to achieve this or why you would want the test >> to continue. FL> same reason you want your compiler to report more than just the FL> first error -- so you can see patterns in the test script's FL> behaviour, so you can fix more than one bug at a time, or fix FL> the bugs in an order that suits you and not the framework, etc. Python's compiler only reports one syntax error for a source file, regardless of how many it finds <0.5 wink>. >> After the first exception, something is broken and needs to be >> fixed, regardless of whether subsequent lines of code work. FL> jeremy, that's the kind of comment I would have expected from a FL> manager, not from a programmer who has done lots of testing. I don't think there's any reason to be snide. The question is one of granularity: At what level of granularity should the test framework catch exceptions and continue? I'm satisfied with the unit of testing being a method. Jeremy From Jason.Tishler at dothill.com Tue Feb 13 15:51:40 2001 From: Jason.Tishler at dothill.com (Jason Tishler) Date: Tue, 13 Feb 2001 09:51:40 -0500 Subject: [Python-Dev] Case sensitive import. In-Reply-To: ; from tim.one@home.com on Tue, Feb 13, 2001 at 12:34:23AM -0500 References: <20010212223638.A228@dothill.com> Message-ID: <20010213095140.A306@dothill.com> Tim, On Tue, Feb 13, 2001 at 12:34:23AM -0500, Tim Peters wrote: > [Jason Tishler] > > I'm willing to help develop, test, etc. the Cygwin stuff. Just let me > > know how I can assist you. Guido said that you needed help with Cygwin and MacOSX, so I was just offering my help. I know that you have the "development" in good shape so let me know if I can help with testing Cygwin or other platforms. 
> Jason, doesn't the current CVS Python already do what you want? Yes. > I thought > that was the case, due to the HAVE_DIRENT_H #ifdef'ery Steven introduced. > If not, scream at me. My intent is to get rid of the HAVE_DIRENT_H #ifdef > *test*, but not the code therein, and add new versions of MatchFilename that > work for systems (like regular old Windows) that don't support opendir() > natively. I didn't think Cygwin needed that -- scream if that's wrong. You are correct -- Cygwin supports opendir() et al. > However, even if you are happy with that (& I won't hurt it), I am (and thanks). > sooner or > later you're going to try accessing a case-destroying network filesystem > from Cygwin, so I believe you need more code to honor PYTHONCASEOK too (it's > the only chance anyone has in the face of a case-destroying system). Is it possible to make the PYTHONCASEOK caveat orthogonal to the platform so it can be enabled to combat case-destroying filesystems for any platform? > Luckily, with a new child in the house, you have plenty of time to think > about this, since you won't be sleeping again for about 3 years anyway > . Agreed -- this is not our first so we "know." I *have* been thinking about this issue and others 24 hours a day for the last two weeks. I'm just finding it difficult to actually *do* anything with one hand and no sleep... :,) Thanks, Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler at dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From barry at digicool.com Tue Feb 13 16:00:19 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Tue, 13 Feb 2001 10:00:19 -0500 Subject: [Python-Dev] Unit testing (again) References: <200102122137.QAA09818@cj20424-a.reston1.va.home.com> <20010212171800.D3900@thrak.cnri.reston.va.us> <200102122221.RAA11205@cj20424-a.reston1.va.home.com> <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net> <20010212181010.A4267@thrak.cnri.reston.va.us> <14984.28355.75830.330790@w221.z064000254.bwi-md.dsl.cnc.net> <01c201c09595$7bc09be0$e46940d5@hagrid> <14984.42706.688272.22773@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <14985.19459.571737.979488@anthem.wooz.org> >>>>> "JH" == Jeremy Hylton writes: JH> Whitespace indentation is natural and makes code easier to JH> read. Putting little snippets of Python code in string JH> literals passed to exec has the opposite effect. Especially because requiring the snippets to be in strings means editing them with a Python-aware editor becomes harder. JH> doctest is a nice middle ground, because the code snippets are JH> in a natural setting -- an interactive interpreter setting. And at least there, I can for the most part just cut-and-paste the output of my interpreter session into the docstrings. 
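[Illustrative aside, not part of the archived messages: a minimal sketch pulling together the two doctest idioms discussed above -- the cut-and-paste workflow Barry describes and the compare-against-the-expected-value trick Guido left as an exercise for string literals. Module and function names are invented for the sketch; syntax is Python 2.x-era.]

    # doctest_demo.py -- illustration only; names are made up for this sketch.

    def greet(name):
        r"""Return a one-line greeting ending in a newline.

        A pasted interpreter session doubles as documentation and test:

        >>> greet('Fred')
        'Hello, Fred!\n'

        The echoed repr above is what Python 2.x prints; 1.5.2 would print
        'Hello, Fred!\012' instead.  An example that must pass on both
        versions can compare against the expected value rather than echoing
        the repr, in the same spirit as the dictionary workaround:

        >>> greet('Fred') == 'Hello, Fred!\n'
        1
        """
        return 'Hello, %s!\n' % name

    def _test():
        import doctest, doctest_demo
        return doctest.testmod(doctest_demo)

    if __name__ == '__main__':
        _test()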
-Barry From fredrik at pythonware.com Tue Feb 13 17:32:00 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Tue, 13 Feb 2001 17:32:00 +0100 Subject: [Python-Dev] Unit testing (again) References: <200102122137.QAA09818@cj20424-a.reston1.va.home.com><20010212171800.D3900@thrak.cnri.reston.va.us><200102122221.RAA11205@cj20424-a.reston1.va.home.com><14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net><20010212181010.A4267@thrak.cnri.reston.va.us><14984.28355.75830.330790@w221.z064000254.bwi-md.dsl.cnc.net><01c201c09595$7bc09be0$e46940d5@hagrid><14984.42706.688272.22773@w221.z064000254.bwi-md.dsl.cnc.net> <14985.19459.571737.979488@anthem.wooz.org> Message-ID: <014801c095da$80577bc0$e46940d5@hagrid> barry wrote: > Especially because requiring the snippets to be in strings means > editing them with a Python-aware editor becomes harder. well, you don't have to put *all* your test code inside the test calls... try using them as asserts instead: do something do some calculations do some more calculations self.test_bool("result == 10") > And at least there, I can for the most part just cut-and-paste the > output of my interpreter session into the docstrings. cutting and pasting from the interpreter into the test assertion works just fine... Cheers /F From fredrik at pythonware.com Tue Feb 13 17:58:14 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Tue, 13 Feb 2001 17:58:14 +0100 Subject: [Python-Dev] Unit testing (again) References: <200102122137.QAA09818@cj20424-a.reston1.va.home.com><20010212171800.D3900@thrak.cnri.reston.va.us><200102122221.RAA11205@cj20424-a.reston1.va.home.com><14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net><20010212181010.A4267@thrak.cnri.reston.va.us><14984.28355.75830.330790@w221.z064000254.bwi-md.dsl.cnc.net><01c201c09595$7bc09be0$e46940d5@hagrid> <14984.42706.688272.22773@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <016401c095de$28dca100$e46940d5@hagrid> jeremy wrote: > FL> like whitespace indentation, it's done that way for a reason. > > Whitespace indentation is natural and makes code easier to read. > Putting little snippets of Python code in string literals passed to > exec has the opposite effect. Only if you're using large snippets. ...just like whitespace indentation makes things harder it you're mixing tabs and spaces, or prints a file with the wrong tab setting, or cuts and pastes code between editors with different tab settings. In both cases, the solution is simply "don't do that" > doctest is a nice middle ground, because the code snippets are in a > natural setting -- an interactive interpreter setting. They're still inside a string... > Python's compiler only reports one syntax error for a source file, > regardless of how many it finds <0.5 wink>. Sure, but is that because user testing has shown that Python programmers (unlike e.g. C programmers) prefer to see only one bug at a time, or because it's convenient to use exceptions also for syntax errors? Would a syntax-checking editor be better if it only showed one syntax error, even if it found them all? > > After the first exception, something is broken and needs to be > > fixed, regardless of whether subsequent lines of code work. > > FL> jeremy, that's the kind of comment I would have expected from a > FL> manager, not from a programmer who has done lots of testing. > > I don't think there's any reason to be snide. Well, I first wrote "taken out of context, that's the kind of comment" but then I noticed that it wasn't really taken out of context. 
And in full context, it still looks a bit arrogant: why would Andrew raise this issue if *he* didn't want more granularity? ::: But having looked everything over one more time, and having ported a small test suite to doctest.py, I'm now -0 on adding more test frame- works to 2.1. If it's good enough for tim... (and -1 if adding more frameworks means that I have to use them ;-). Cheers /F From jeremy at alum.mit.edu Tue Feb 13 06:29:35 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 13 Feb 2001 00:29:35 -0500 (EST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <016401c095de$28dca100$e46940d5@hagrid> References: <200102122137.QAA09818@cj20424-a.reston1.va.home.com> <20010212171800.D3900@thrak.cnri.reston.va.us> <200102122221.RAA11205@cj20424-a.reston1.va.home.com> <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net> <20010212181010.A4267@thrak.cnri.reston.va.us> <14984.28355.75830.330790@w221.z064000254.bwi-md.dsl.cnc.net> <01c201c09595$7bc09be0$e46940d5@hagrid> <14984.42706.688272.22773@w221.z064000254.bwi-md.dsl.cnc.net> <016401c095de$28dca100$e46940d5@hagrid> Message-ID: <14984.50751.27663.64349@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "FL" == Fredrik Lundh writes: >> > After the first exception, something is broken and needs to be >> > fixed, regardless of whether subsequent lines of code work. >> FL> jeremy, that's the kind of comment I would have expected from a FL> manager, not from a programmer who has done lots of testing. >> >> I don't think there's any reason to be snide. FL> Well, I first wrote "taken out of context, that's the kind of FL> comment" but then I noticed that it wasn't really taken out of FL> context. FL> And in full context, it still looks a bit arrogant: why would FL> Andrew raise this issue if *he* didn't want more granularity? I hope it's simple disagreement and not arrogance. I do not agree with him (or you) on a particular technical issue -- whether particular expressions should be stuffed into string literals in order to recover from exceptions they raise. There's a tradeoff between readability and granularity and I prefer readability. I am surprised that you are impugning my technical abilities (manager, not programmer) or calling me arrogant because I don't agree. I think I am should be entitled to my wrong opinion. Jeremy From akuchlin at cnri.reston.va.us Tue Feb 13 18:29:35 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Tue, 13 Feb 2001 12:29:35 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <14984.50751.27663.64349@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Tue, Feb 13, 2001 at 12:29:35AM -0500 References: <200102122137.QAA09818@cj20424-a.reston1.va.home.com> <20010212171800.D3900@thrak.cnri.reston.va.us> <200102122221.RAA11205@cj20424-a.reston1.va.home.com> <14984.23722.944808.609780@w221.z064000254.bwi-md.dsl.cnc.net> <20010212181010.A4267@thrak.cnri.reston.va.us> <14984.28355.75830.330790@w221.z064000254.bwi-md.dsl.cnc.net> <01c201c09595$7bc09be0$e46940d5@hagrid> <14984.42706.688272.22773@w221.z064000254.bwi-md.dsl.cnc.net> <016401c095de$28dca100$e46940d5@hagrid> <14984.50751.27663.64349@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010213122935.G4334@thrak.cnri.reston.va.us> On Tue, Feb 13, 2001 at 12:29:35AM -0500, Jeremy Hylton wrote: >I hope it's simple disagreement and not arrogance. I do not agree I trust not. :) My primary concern is that the tests are quickly readable, because they're also a form of documentation (hopefully not the only one though). 
I have enough problems debugging the actual code without having to debug a test suite. Consider the example Chris posted, which features the snippet: def testGetItemFails(self): self.assertRaises(KeyError, self._getitemfail) def _getitemfail(self): return self.t[1] I don't think this form, requiring an additional small helper method, is any clearer than self.test_exc('self.t[1]', KeyError); two extra lines and the loss of locality. Put tests for 3 or 4 different exceptions into testGetItemFails and you'd have several helper functions to trace through. For simple value tests, this is less important; the difference between test_val( 'self.db.get_user("FOO")', None ) and test_val(None, self.db.get_user, "foo") is less. /F's observation that doctest seems suitable for his work is interesting and surprising; I'll spend some more time looking at it. --amk From tommy at ilm.com Tue Feb 13 18:59:32 2001 From: tommy at ilm.com (Flying Cougar Burnette) Date: Tue, 13 Feb 2001 09:59:32 -0800 (PST) Subject: [Python-Dev] troubling math bug under IRIX 6.5 Message-ID: <14985.29880.710719.533126@mace.lucasdigital.com> Hey Folks, One of these days I'll figure that SOurceForge stuff out so I can submit a real bug report, but this one is freaky enough that I thought I'd just send it right out... from the latest CVS (as of 9:30am pacific) I run 'make test' and get: ... PYTHONPATH= ./python -tt ./Lib/test/regrtest.py -l make: *** [test] Bus error (core dumped) a quick search around shows that just importing regrtest.py seg faults, and further simply importing random.py seg faults (which regrtest.py does). it all boils down to this line in random.py NV_MAGICCONST = 4 * _exp(-0.5)/_sqrt(2.0) and the problem can be further reduced thusly: >>> import math >>> 4 * math.exp(-0.5) Bus error (core dumped) but it isn't the math.exp that's the problem, its multiplying the result times 4! (tommy at mace)/u0/tommy/pycvs/python/dist/src$ ./python Python 2.1a2 (#2, Feb 13 2001, 09:49:17) [C] on irix6 Type "copyright", "credits" or "license" for more information. >>> import math >>> math.exp(1) 2.7182818284590451 >>> math.exp(2) 7.3890560989306504 >>> math.exp(-1) 0.36787944117144233 >>> math.exp(-.5) 0.60653065971263342 >>> math.exp(-0.5) 0.60653065971263342 >>> 4 * math.exp(-0.5) Bus error (core dumped) is it just me? any guesses what might be the cause of this? From tim.one at home.com Tue Feb 13 20:47:54 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 13 Feb 2001 14:47:54 -0500 Subject: [Python-Dev] troubling math bug under IRIX 6.5 In-Reply-To: <14985.29880.710719.533126@mace.lucasdigital.com> Message-ID: [Flying Cougar Burnette] > ... > >>> 4 * math.exp(-0.5) > Bus error (core dumped) Now let's look at the important part: > (tommy at mace)/u0/tommy/pycvs/python/dist/src$ ./python > Python 2.1a2 (#2, Feb 13 2001, 09:49:17) [C] on irix6 ^^^^^ The first thing to try on any SGI box is to recompile Python with optimization turned off. After that confirms it's the compiler's fault, we can try to figure out where it's screwing up. Do either of these blow up too? >>> 4 * 0.60653065971263342 >>> 4.0 * math.exp(-0.5) Reason for asking: "NV_MAGICCONST = 4 * _exp(-0.5)/_sqrt(2.0)" is the first time random.py does either of a floating multiply or an int-to-float conversion. > is it just me? Yup. So long as you use SGI software, it always will be . 
and-i-say-that-as-an-sgi-shareholder-ly y'rs - tim From tommy at ilm.com Tue Feb 13 21:04:28 2001 From: tommy at ilm.com (Flying Cougar Burnette) Date: Tue, 13 Feb 2001 12:04:28 -0800 (PST) Subject: [Python-Dev] troubling math bug under IRIX 6.5 In-Reply-To: References: <14985.29880.710719.533126@mace.lucasdigital.com> Message-ID: <14985.37461.962243.777743@mace.lucasdigital.com> Tim Peters writes: | [Flying Cougar Burnette] | > ... | > >>> 4 * math.exp(-0.5) | > Bus error (core dumped) | | Now let's look at the important part: | | > (tommy at mace)/u0/tommy/pycvs/python/dist/src$ ./python | > Python 2.1a2 (#2, Feb 13 2001, 09:49:17) [C] on irix6 | ^^^^^ figgered as much... | | The first thing to try on any SGI box is to recompile Python with | optimization turned off. After that confirms it's the compiler's fault, we | can try to figure out where it's screwing up. Do either of these blow up | too? | | >>> 4 * 0.60653065971263342 | >>> 4.0 * math.exp(-0.5) yup. | | > is it just me? | | Yup. So long as you use SGI software, it always will be . | | and-i-say-that-as-an-sgi-shareholder-ly y'rs - tim one these days sgi... Pow! Right to the Moon! ;) Okay, I recompiled after blanking OPT= in Makefile and things now work. This is where I start swearing "But, this has never happened to me before!" and the kind, gentle response is "Don't worry, it happens to lots of guys..." ;) And the next step is... ? From tim.one at home.com Tue Feb 13 21:51:35 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 13 Feb 2001 15:51:35 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <016401c095de$28dca100$e46940d5@hagrid> Message-ID: [/F] > But having looked everything over one more time, and having ported > a small test suite to doctest.py, I'm now -0 on adding more test > frameworks to 2.1. If it's good enough for tim... I'm not sure that it is, but I have yet to make time to look at the others. It's no secret that I love doctest, and, indeed, in 20+ years of industry pain, it's the only testing approach I didn't drop ASAP. I still use it for all my stuff, and very happily. But! I don't do anything with the web or GUIs etc -- I'm an algorithms guy. Most of the stuff I work with has clearly defined input->output relationships, and capturing an interactive session is simply perfect both for documenting and testing such stuff. It's also the case that I weight the "doc" part of "doctest" more heavily than the "test" part, and when Peter or Guido say that, e.g., the reliance on exact output match is "a problem", I couldn't disagree more strongly. It's obvious to Guido that dict output may come in any order, but a doc *reader* in a hurry is at best uneasy when documented output doesn't match actual output exactly. That's not something I'll yield on. [Andrew] > def testGetItemFails(self): > self.assertRaises(KeyError, self._getitemfail) > > def _getitemfail(self): > return self.t[1] > > [vs] > > self.test_exc('self.t[1]', KeyError) My brain doesn't grasp either of those at first glance. But everyone who has used Python a week grasps this: class C: def __getitem__(self, i): """Return the i'th item. i==1 raises KeyError. For example, >>> c = C() >>> c[0] 0 >>> c[1] Traceback (most recent call last): File "x.py", line 20, in ? 
c[1] File "x.py", line 14, in __getitem__ raise KeyError("bad i: " + `i`) KeyError: bad i: 1 >>> c[-1] -1 """ if i != 1: return i else: raise KeyError("bad i: " + `i`) Cute: Python changed the first line of its traceback output (used to say "Traceback (innermost last):"), and current doctest wasn't expecting that. For *doc* purposes, it's important that the examples match what Python actually does, so that's a bug in doctest. From tim.one at home.com Tue Feb 13 22:04:29 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 13 Feb 2001 16:04:29 -0500 Subject: [Python-Dev] troubling math bug under IRIX 6.5 In-Reply-To: <14985.37461.962243.777743@mace.lucasdigital.com> Message-ID: [Tommy turns off optimization, and all is well] >> Do either of these blow up too? >> >> >>> 4 * 0.60653065971263342 >> >>> 4.0 * math.exp(-0.5) > yup. OK. Does the first one blow up? Does the second one blow up? Or do both blow up? Fourth question: does >> 4.0 * 0.60653065971263342 blow up? > ... > And the next step is... ? Stop making me pull your teeth . I'm trying to narrow down where it's screwing up. At worst, then, you can disable optimization only for that particular file, and create a tiny bug case to send off to SGI World Headquarters so they fix this someday. At best, perhaps a tiny bit of code rearrangement will unstick your compiler (I'm good at guessing what might work in that respect, but need to narrow it down to a single function within Python first), and I can check that in for 2.1. From fredrik at effbot.org Tue Feb 13 22:33:20 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Tue, 13 Feb 2001 22:33:20 +0100 Subject: [Python-Dev] Unit testing (again) References: Message-ID: <003d01c09604$a0f15520$e46940d5@hagrid> > Cute: Python changed the first line of its traceback output (used to say > "Traceback (innermost last):"), and current doctest wasn't expecting that. which reminds me... are there any chance of getting a doctest that can survives its own test suite under 1.5.2, 2.0, and 2.1? the latest version blows up under 1.5.2 and 2.0: ***************************************************************** Failure in example: 1/0 from line #155 of doctest Expected: ZeroDivisionError: integer division or modulo by zero Got: ZeroDivisionError: integer division or modulo 1 items had failures: 1 of 8 in doctest ***Test Failed*** 1 failures. Cheers /F From mal at lemburg.com Tue Feb 13 22:33:21 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Tue, 13 Feb 2001 22:33:21 +0100 Subject: [Python-Dev] Unit testing (again) References: <003d01c09604$a0f15520$e46940d5@hagrid> Message-ID: <3A89A821.6EFC6AC9@lemburg.com> Fredrik Lundh wrote: > > > Cute: Python changed the first line of its traceback output (used to say > > "Traceback (innermost last):"), and current doctest wasn't expecting that. > > which reminds me... are there any chance of getting a doctest > that can survives its own test suite under 1.5.2, 2.0, and 2.1? > > the latest version blows up under 1.5.2 and 2.0: > > ***************************************************************** > Failure in example: 1/0 > from line #155 of doctest > Expected: ZeroDivisionError: integer division or modulo by zero > Got: ZeroDivisionError: integer division or modulo > 1 items had failures: > 1 of 8 in doctest > ***Test Failed*** 1 failures. Since exception message are not defined anywhere I'd suggest to simply ignore them in the output. About the traceback output format: how about adding some re support instead of using string.find() ?! 
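[Illustrative aside, not code from the thread: one pattern that addresses both complaints above -- Fredrik's 1.5.2/2.0 exception-message mismatch and MAL's suggestion to key on the exception class -- without loosening doctest's exact matching is to catch the exception inside the example, so only the class name ever appears in the expected output.]

    >>> try:
    ...     1/0
    ... except ZeroDivisionError:
    ...     print 'ZeroDivisionError'
    ZeroDivisionError

The example then documents only what is stable across releases (the exception type), and doctest's exact-match check still holds on 1.5.2, 2.0 and 2.1.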
-- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From michel at digicool.com Tue Feb 13 23:39:52 2001 From: michel at digicool.com (Michel Pelletier) Date: Tue, 13 Feb 2001 14:39:52 -0800 (PST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <20010213122935.G4334@thrak.cnri.reston.va.us> Message-ID: On Tue, 13 Feb 2001, Andrew Kuchling wrote: > Consider the example Chris posted, which features the snippet: > > def testGetItemFails(self): > self.assertRaises(KeyError, self._getitemfail) > > def _getitemfail(self): > return self.t[1] > > I don't think this form, requiring an additional small helper method, > is any clearer than self.test_exc('self.t[1]', KeyError); two extra > lines and the loss of locality. Put tests for 3 or 4 different > exceptions into testGetItemFails and you'd have several helper > functions to trace through. I'm not sure what the purpose of using a unit test to test a different unit in the same suite is. I've never used assertRaises in this way, and the small helper method seems just to illustrate your point, not an often used feature of asserting an Exception condition. More often the method you are checking for an exception comes from the thing you are testing, not the test. Maybe you have different usage patterns than I... -Michel From tim.one at home.com Tue Feb 13 22:39:08 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 13 Feb 2001 16:39:08 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <003d01c09604$a0f15520$e46940d5@hagrid> Message-ID: [/F] > which reminds me... are there any chance of getting a doctest > that can survives its own test suite under 1.5.2, 2.0, and 2.1? > > the latest version blows up under 1.5.2 and 2.0: > > ***************************************************************** > Failure in example: 1/0 > from line #155 of doctest > Expected: ZeroDivisionError: integer division or modulo by zero > Got: ZeroDivisionError: integer division or modulo > 1 items had failures: > 1 of 8 in doctest > ***Test Failed*** 1 failures. Not to my mind. doctest is intentionally picky about exact matches, for reasons explained earlier. If the docs for a thing say "integer division or modulo by zero" is expected, but running it says something else, the docs are wrong and doctest's primary *purpose* is to point that out loudly. I could change the exception example to something where Python didn't gratuitously change what it prints, though . OK, I'll do that. From tim.one at home.com Tue Feb 13 22:42:19 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 13 Feb 2001 16:42:19 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <3A89A821.6EFC6AC9@lemburg.com> Message-ID: [MAL] > Since exception message are not defined anywhere I'd suggest > to simply ignore them in the output. Virtually nothing about Python's output is clearly defined, and for doc purposes I want to capture what Python actually does. > About the traceback output format: how about adding some > re support instead of using string.find() ?! Why? I never use regexps where simple string matches work, and neither should you . From guido at digicool.com Tue Feb 13 22:45:56 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 13 Feb 2001 16:45:56 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: Your message of "Tue, 13 Feb 2001 16:39:08 EST." 
References: Message-ID: <200102132145.QAA18076@cj20424-a.reston1.va.home.com> > Not to my mind. doctest is intentionally picky about exact matches, for > reasons explained earlier. If the docs for a thing say "integer division or > modulo by zero" is expected, but running it says something else, the docs > are wrong and doctest's primary *purpose* is to point that out loudly. Of course, this is means that *if* you use doctest, all authoritative docs should be in the docstring, and not elsewhere. Which brings us back to the eternal question of how to indicate mark-up in docstrings. Is everything connected to everything? --Guido van Rossum (home page: http://www.python.org/~guido/) From michel at digicool.com Tue Feb 13 23:54:58 2001 From: michel at digicool.com (Michel Pelletier) Date: Tue, 13 Feb 2001 14:54:58 -0800 (PST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <002301c095a0$4fe5cc60$e46940d5@hagrid> Message-ID: On Tue, 13 Feb 2001, Fredrik Lundh wrote: > tim wrote: > > I will immodestly claim that if doctest is sufficient for your testing > > purposes, you're never going to find anything easier or faster or more > > natural to use > > you know, just having taken another look at doctest > and the unit test options, I'm tempted to agree. I also agree that doctest is the bee's knees, but I don't think it is quite as useful for us as PyUnit (for other people, I'm sure it's very useful). One of the goals of our interface work is to associate unit tests with interfaces. I don't see how doctest can work well with that. I probably need to look at it more, but one of our end goals is to walk up to a component, push a button, and have that components interfaces test the component while the system is live. I immagine this involving a bit of external framework at the interface level that would be pretty easy with PyUnit, I've only seen one example of doctest and it looks like you run it against an imported module. I don't see how this helps us with our (DC's) definition of components. A personal issue for me is that it overloads the docstring, no biggy, but it's just a personal nit I don't particularly like about doctest. Another issue is documentation. I don't know how much documentation doctest has, but PyUnit's documentation is *superb* and there are no suprises, which is absolutely +1. Quixote's documentation seems very thin (please correct me if I'm wrong). PyUnit's documentation goes beyond just explaning the software into explaining common patterns and unit testing philosophies. -Michel From tim.one at home.com Tue Feb 13 23:13:24 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 13 Feb 2001 17:13:24 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: Message-ID: [Michel Pelletier] > ... > A personal issue for me is that it overloads the docstring, no > biggy, but it's just a personal nit I don't particularly like about > doctest. No. The docstring remains documentation. But documentation without examples generally sucks, due to the limitations of English in being precise. A sharp example can be worth 1,000 words. doctest is being used as *intended* to the extent that the embedded examples are helpful for documentation purposes. doctest then guarantees the examples continue to work exactly as advertised over time (and they don't! examples *always* get out of date, but without (something like) doctest they never get repaired). 
As I suggested at the start, read the docstrings for difflib.py: the examples are an integral part of the docs, and you shouldn't get any sense that they're there "just for testing" (if you do, the examples are poorly chosen, or poorly motivated in the surrounding commentary). Beyond that, doctest will also execute any code it finds in the module.__test__ dict, which maps arbitrary names to arbitrary strings. Anyone using doctest primarily as a testing framework should stuff their test strings into __test__ and leave the docstrings alone. > Another issue is documentation. I don't know how much documentation > doctest has, Look at its docstrings -- they not only explain it in detail, but contain examples of use that doctest can check . From fredrik at effbot.org Tue Feb 13 23:22:50 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Tue, 13 Feb 2001 23:22:50 +0100 Subject: [Python-Dev] Unit testing (again) References: Message-ID: <008101c0960b$818e09b0$e46940d5@hagrid> michel wrote: > One of the goals of our interface work is to associate unit tests with > interfaces. I don't see how doctest can work well with that. I probably > need to look at it more, but one of our end goals is to walk up to a > component, push a button, and have that components interfaces test the > component while the system is live. My revised approach to unit testing is to use doctest to test the test harness, not the module itself. To handle your case, design the test to access the component via a module global, let the "onclick" code set up that global, and run the test script under doctest. (I did that earlier today, and it sure worked just fine) > Another issue is documentation. I don't know how much documentation > doctest has, but PyUnit's documentation is *superb* and there are no > suprises, which is absolutely +1. No surprises? I don't know -- my brain kind of switched off when I came to the "passing method names as strings to the constructor" part. Now, how Pythonic is that on a scale? On the other hand, I also suffer massive confusion whenever I try to read Zope docs, so it's probably just different mind- sets ;-) Cheers /F From tommy at ilm.com Tue Feb 13 23:25:13 2001 From: tommy at ilm.com (Flying Cougar Burnette) Date: Tue, 13 Feb 2001 14:25:13 -0800 (PST) Subject: [Python-Dev] troubling math bug under IRIX 6.5 In-Reply-To: References: <14985.37461.962243.777743@mace.lucasdigital.com> Message-ID: <14985.46047.226447.573927@mace.lucasdigital.com> sorry- BOTH blew up until I turned off optimization. now neither does. shall I turn opts back on and try a few more cases? Tim Peters writes: | [Tommy turns off optimization, and all is well] | | >> Do either of these blow up too? | >> | >> >>> 4 * 0.60653065971263342 | >> >>> 4.0 * math.exp(-0.5) | | > yup. | | OK. Does the first one blow up? Does the second one blow up? Or do both | blow up? | | Fourth question: does | | >> 4.0 * 0.60653065971263342 | | blow up? | | > ... | > And the next step is... ? | | Stop making me pull your teeth . I'm trying to narrow down where it's | screwing up. At worst, then, you can disable optimization only for that | particular file, and create a tiny bug case to send off to SGI World | Headquarters so they fix this someday. At best, perhaps a tiny bit of code | rearrangement will unstick your compiler (I'm good at guessing what might | work in that respect, but need to narrow it down to a single function within | Python first), and I can check that in for 2.1. 
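[Illustrative aside: Tim's bisection of the IRIX failure -- checking which of several float expressions crash the optimized build -- can be automated with a throwaway probe that evaluates each candidate in a child interpreter, so a bus error in one case does not kill the probe itself. A sketch only, assuming a Unix shell and Python 2.x-era syntax; the expression list simply mirrors the cases discussed above.]

    # probe.py -- run with the suspect (optimized) build.
    import os, sys

    candidates = [
        '4.0 * 3.1',                         # pure float multiply
        '4.0 * 0.60653065971263342',
        '4 * 0.60653065971263342',           # int * float, forces int->float conversion
        "4.0 * __import__('math').exp(-0.5)",
        "4 * __import__('math').exp(-0.5)",
    ]

    for expr in candidates:
        # Evaluate the expression in a separate interpreter process.
        cmd = '%s -c "%s" >/dev/null 2>&1' % (sys.executable, expr)
        status = os.system(cmd)
        if status:
            print expr, '-> nonzero exit status', status
        else:
            print expr, '-> ok'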
From sdm7g at virginia.edu Tue Feb 13 23:35:24 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Tue, 13 Feb 2001 17:35:24 -0500 (EST) Subject: [Python-Dev] Case sensitive import. In-Reply-To: <200102122223.RAA11224@cj20424-a.reston1.va.home.com> Message-ID: On Mon, 12 Feb 2001, Guido van Rossum wrote: > Tim has convinced me that the proper rules are simple: > > - If PYTHONCASEOK is set, use the first file found with a > case-insensitive match. > > - If PYTHONCASEOK is not set, and the file system is case-preserving, > ignore (rather than bail out at) files that don't have the proper > case. > > Tim is in charge of cleaning up the code, but he'll need help for the > Cygwin and MacOSX parts. > Thanks Tim (for convincing Guido). Now I don't have to post (and you don't have to read!) my 2K word essay on why Guido's old rules were inconsistent and might have caused TEOTWAWKI if fed into the master computer. Just let me know if you need me to test anything on macosx. -- Steve M. From mal at lemburg.com Tue Feb 13 23:37:13 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Tue, 13 Feb 2001 23:37:13 +0100 Subject: [Python-Dev] Unit testing (again) References: Message-ID: <3A89B719.9CDB68B@lemburg.com> Tim Peters wrote: > > [MAL] > > Since exception message are not defined anywhere I'd suggest > > to simply ignore them in the output. > > Virtually nothing about Python's output is clearly defined, and for doc > purposes I want to capture what Python actually does. But what it does write to the console changes with every release (e.g. just take the repr() changes for strings with non-ASCII data)... this simply breaks you test suite every time Writing Python programs which work on Python 1.5-2.1 which at the same time pass the doctest unit tests becomes impossible. The regression suite (and most other Python software) catches exceptions based on the exception class -- why isn't this enough for your doctest.py checks ? nit-pickling-ly, -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From jeremy at alum.mit.edu Tue Feb 13 11:47:01 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 13 Feb 2001 05:47:01 -0500 (EST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <008101c0960b$818e09b0$e46940d5@hagrid> References: <008101c0960b$818e09b0$e46940d5@hagrid> Message-ID: <14985.4261.562851.935532@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "FL" == Fredrik Lundh writes: >> Another issue is documentation. I don't know how much >> documentation doctest has, but PyUnit's documentation is *superb* >> and there are no suprises, which is absolutely +1. FL> No surprises? I don't know -- my brain kind of switched off FL> when I came to the "passing method names as strings to the FL> constructor" part. Now, how Pythonic is that on a scale? I think this is one of the issues where there is widespread argeement that a feature is needed. The constructor should assume, in the absence of some other instruction, that any method name that starts with 'test' should be considered a test method. That's about as Pythonic as it gets. Jeremy From guido at digicool.com Wed Feb 14 00:13:48 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 13 Feb 2001 18:13:48 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: Your message of "Tue, 13 Feb 2001 17:13:24 EST." 
References: Message-ID: <200102132313.SAA18504@cj20424-a.reston1.va.home.com> > No. The docstring remains documentation. But documentation without > examples generally sucks, due to the limitations of English in being > precise. A sharp example can be worth 1,000 words. doctest is being used > as *intended* to the extent that the embedded examples are helpful for > documentation purposes. doctest then guarantees the examples continue to > work exactly as advertised over time (and they don't! examples *always* get > out of date, but without (something like) doctest they never get repaired). You're lucky that doctest doesn't return dictionaries! For functions that return dictionaries, it's much more natural *for documentation purposes* to write >>> book() {'Fred': 'mom', 'Ron': 'Snape'} than the necessary work-around. You may deny that's a problem, but once we've explained dictionaries to our users, we can expect them to understand that if they see instead >>> book() {'Ron': 'Snape', 'Fred': 'mom'} they will understand that that's the same thing. Writing it this way is easier to read than >>> book() == {'Ron': 'Snape', 'Fred': 'mom'} 1 I always have to look twice when I see something like that. > As I suggested at the start, read the docstrings for difflib.py: the > examples are an integral part of the docs, and you shouldn't get any sense > that they're there "just for testing" (if you do, the examples are poorly > chosen, or poorly motivated in the surrounding commentary). They are also more voluminous than I'd like the docs for difflib to be... --Guido van Rossum (home page: http://www.python.org/~guido/) From ping at lfw.org Wed Feb 14 00:11:10 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Tue, 13 Feb 2001 15:11:10 -0800 (PST) Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: Message-ID: In support of the argument that bad literals should raise ValueError (or a derived exception) rather than SyntaxError, Guido once said: > "Problems with literal interpretations > traditionally raise 'runtime' exceptions rather than syntax errors." This is currently true of overflowing integers and string literals, and hence it has also been so implemented for Unicode literals. But i want to propose a break with tradition, because some more recent thinking on this has led me to become firmly convinced that SyntaxError is really the right thing to do in all of these cases. The strongest reason is that a long file with a typo in a string literal somewhere in hundreds of lines of code generates only ValueError: invalid \x escape with no indication to where the error is -- not even which file! I realize this could be hacked upon and fixed, but i think it points to a general inconsistency that ought to be considered and addressed. 1. SyntaxErrors are for compile-time errors. A problem with a literal happens before the program starts running, and it is useful for me, as the programmer, to know whether the error occurred because of some computational process, possibly depending on inputs, or whether it's a permanent mistake that's literally in my source code. In other words, will a debugger do me any good? 2. SyntaxErrors pinpoint the exact location of the problem. In principle, an error is a SyntaxError if and only if you can point to an exact character position as being the cause of the problem. 3. A ValueError means "i got a value that wasn't allowed or expected here". That is not at all what is happening. There is *no* defined value at all. 
It's not that there was a value and it was wrong -- the value was never even brought into existence. 4. The current implementation of ValueErrors is very unhelpful about what to do about an invalid literal, as explained in the example above. A SyntaxError would be much more useful. I hope you will agree with me that solving only #4 by changing ValueErrors so they behave a little more like SyntaxErrors in certain particular situations isn't the best solution. Also, switching to SyntaxError is likely to break very few things. You can't depend on catching a SyntaxError, precisely because it's a compile-time error. No one could possibly be using "except ValueError" to try to catch invalid literals in their code; that usage, just like "except SyntaxError:", makes sense only when someone is using "eval" or "exec" to interpret code that was generated or read from input. In fact, i bet switching to SyntaxError would actually make some code of the form "try: eval ... except SyntaxError" work better, since the single except clause would catch all possible compilation problems with the input to eval. -- ?!ng Happiness comes more from loving than being loved; and often when our affection seems wounded it is is only our vanity bleeding. To love, and to be hurt often, and to love again--this is the brave and happy life. -- J. E. Buchrose From guido at digicool.com Wed Feb 14 00:32:15 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 13 Feb 2001 18:32:15 -0500 Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: Your message of "Tue, 13 Feb 2001 15:11:10 PST." References: Message-ID: <200102132332.SAA18696@cj20424-a.reston1.va.home.com> > In support of the argument that bad literals should raise ValueError > (or a derived exception) rather than SyntaxError, Guido once said: > > > "Problems with literal interpretations > > traditionally raise 'runtime' exceptions rather than syntax errors." > > This is currently true of overflowing integers and string literals, > and hence it has also been so implemented for Unicode literals. > > But i want to propose a break with tradition, because some more recent > thinking on this has led me to become firmly convinced that SyntaxError > is really the right thing to do in all of these cases. > > The strongest reason is that a long file with a typo in a string > literal somewhere in hundreds of lines of code generates only > > ValueError: invalid \x escape > > with no indication to where the error is -- not even which file! > I realize this could be hacked upon and fixed, but i think it points > to a general inconsistency that ought to be considered and addressed. > > 1. SyntaxErrors are for compile-time errors. A problem with > a literal happens before the program starts running, and > it is useful for me, as the programmer, to know whether > the error occurred because of some computational process, > possibly depending on inputs, or whether it's a permanent > mistake that's literally in my source code. In other words, > will a debugger do me any good? > > 2. SyntaxErrors pinpoint the exact location of the problem. > In principle, an error is a SyntaxError if and only if you > can point to an exact character position as being the cause > of the problem. > > 3. A ValueError means "i got a value that wasn't allowed or > expected here". That is not at all what is happening. > There is *no* defined value at all. It's not that there > was a value and it was wrong -- the value was never even > brought into existence. > > 4. 
The current implementation of ValueErrors is very unhelpful > about what to do about an invalid literal, as explained > in the example above. A SyntaxError would be much more useful. > > I hope you will agree with me that solving only #4 by changing > ValueErrors so they behave a little more like SyntaxErrors in > certain particular situations isn't the best solution. > > Also, switching to SyntaxError is likely to break very few things. > You can't depend on catching a SyntaxError, precisely because it's > a compile-time error. No one could possibly be using "except ValueError" > to try to catch invalid literals in their code; that usage, just like > "except SyntaxError:", makes sense only when someone is using "eval" or > "exec" to interpret code that was generated or read from input. > > In fact, i bet switching to SyntaxError would actually make some code > of the form "try: eval ... except SyntaxError" work better, since the > single except clause would catch all possible compilation problems > with the input to eval. All good points, except that I still find it hard to flag overflow errors as syntax errors, especially since overflow is platform defined. On one platform, 1000000000000 is fine; on another it's a SyntaxError. That could be confusing. But you're absolutely right about string literals, and maybe it's OK if 1000000000000000000000000000000000000000000000000000000000000000000 is flagged as a syntax error. (After all it's missing a trailing 'L'.) Another solution (borrowing from C): automatically promote int literals to long if they can't be evaluated as ints. --Guido van Rossum (home page: http://www.python.org/~guido/) From greg at cosc.canterbury.ac.nz Wed Feb 14 00:43:16 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Wed, 14 Feb 2001 12:43:16 +1300 (NZDT) Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: <200102132332.SAA18696@cj20424-a.reston1.va.home.com> Message-ID: <200102132343.MAA05559@s454.cosc.canterbury.ac.nz> Guido: > I still find it hard to flag overflow > errors as syntax errors, especially since overflow is platform > defined. How about introducing the following hierarchy: CompileTimeError SyntaxError LiteralRangeError LiteralRangeError could inherit from ValueError as well if you want. Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg at cosc.canterbury.ac.nz +--------------------------------------+ From tim.one at home.com Wed Feb 14 00:54:43 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 13 Feb 2001 18:54:43 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <3A89B719.9CDB68B@lemburg.com> Message-ID: [MAL] > Since exception message are not defined anywhere I'd suggest > to simply ignore them in the output. [Tim] > Virtually nothing about Python's output is clearly defined, and for doc > purposes I want to capture what Python actually does. [MAL] > But what it does write to the console changes with every > release (e.g. just take the repr() changes for strings with > non-ASCII data)... So now you don't want to test exception messages *or* non-exceptional output either. That's fine, but you're never going to like doctest if so. > this simply breaks you test suite every time I think you're missing the point: it breaks your *docs*, if they contain any examples that rely on such stuff. 
doctest then very helpfully points out that your docs-- no matter how good they were before --now suck, because they're now *wrong*. It's not interested in assigning blame for that, it's enough to point out that they're now broken (else they'll never get fixed!). > Writing Python programs which work on Python 1.5-2.1 which at > the same time pass the doctest unit tests becomes impossible. Not true. You may need to rewrite your examples, though, so that your *docs* are accurate under all the releases you care about. I don't care if that drives you mad, so long as it prevents you from screwing your users with inaccurate docs. > The regression suite (and most other Python software) catches > exceptions based on the exception class -- why isn't this enough > for your doctest.py checks ? Because doctest is primarily interested in ensuring that non-exceptional cases continue to work exactly as advertised. Checking that, e.g., >>> fac(5) 120 still works is at least 10x easier to live with than writing crap like if fac(5) != 120: raise TestFailed("Something about fac failed but it's too " "painful to build up some string here " "explaining exactly what -- try single-" "stepping through the test by hand until " "this raise triggers.") That's regrtest.py-style testing, and if you think that's pleasant to work with you must never have seen a std test break <0.7 wink>. When a doctest'ed module breaks, the doctest output pinpoints the failure precisely, without any work on your part beyond simply capturing an interactive session that shows the results you expected. > nit-pickling-ly, Na, you're trying to force doctest into a mold it was designed to get as far away from as possible. Use it for its intended purpose, then gripe. Right now you're complaining that the elephant's eyes are the wrong color while missing that it's actually a leopard . From thomas at xs4all.net Wed Feb 14 00:57:16 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Wed, 14 Feb 2001 00:57:16 +0100 Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: ; from ping@lfw.org on Tue, Feb 13, 2001 at 03:11:10PM -0800 References: Message-ID: <20010214005716.D4924@xs4all.nl> On Tue, Feb 13, 2001 at 03:11:10PM -0800, Ka-Ping Yee wrote: > The strongest reason is that a long file with a typo in a string > literal somewhere in hundreds of lines of code generates only > ValueError: invalid \x escape > with no indication to where the error is -- not even which file! > I realize this could be hacked upon and fixed, but i think it points > to a general inconsistency that ought to be considered and addressed. This has nothing to do with the error being a ValueError, but with some (compile-time) errors not being promoted to 'full' errors. See https://sourceforge.net/patch/?func=detailpatch&patch_id=101782&group_id=5470 The same issue came up when importing modules that did 'from foo import *' in a function scope. > 1. SyntaxErrors are for compile-time errors. A problem with > a literal happens before the program starts running, and > it is useful for me, as the programmer, to know whether > the error occurred because of some computational process, > possibly depending on inputs, or whether it's a permanent > mistake that's literally in my source code. In other words, > will a debugger do me any good? Agreed. That could possibly be solved by a better description of the valueerrors in question, though. (The 'invalid \x escape' message seems pretty obvious a compiletime-error to me, but others might not.) > 2. 
SyntaxErrors pinpoint the exact location of the problem. > In principle, an error is a SyntaxError if and only if you > can point to an exact character position as being the cause > of the problem. See above. > 3. A ValueError means "i got a value that wasn't allowed or > expected here". That is not at all what is happening. > There is *no* defined value at all. It's not that there > was a value and it was wrong -- the value was never even > brought into existence. Not quite true. It wasn't *compiled*, but it's a literal, so it does exist. The problem is not the value of a compiled \x escape, but the value after the \x. > 4. The current implementation of ValueErrors is very unhelpful > about what to do about an invalid literal, as explained > in the example above. A SyntaxError would be much more useful. See #1 :) > I hope you will agree with me that solving only #4 by changing > ValueErrors so they behave a little more like SyntaxErrors in > certain particular situations isn't the best solution. I don't, really. The name 'ValueError' is exactly right: what is wrong (in the \x escape example) is the *value* of something (of the \x escape in question.) If a syntax error was raised, I would think something was wrong with the syntax. But the \x is placed in the right spot, inside a string literal. The string literal itself is placed right. Why would it be a syntax error ? > In fact, i bet switching to SyntaxError would actually make some code > of the form "try: eval ... except SyntaxError" work better, since the > single except clause would catch all possible compilation problems > with the input to eval. I'd say you want a 'CompilerError' superclass instead. -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From tim.one at home.com Wed Feb 14 01:13:47 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 13 Feb 2001 19:13:47 -0500 Subject: [Python-Dev] troubling math bug under IRIX 6.5 In-Reply-To: <14985.46047.226447.573927@mace.lucasdigital.com> Message-ID: [Tommy] > sorry- BOTH blew up until I turned off optimization. OK, that rules out int->float conversion as the cause (one of the examples didn't do any conversions). That multiplication by 4 triggered it rules out that any IEEE exceptions are to blame either (mult by 4 doesn't even trigger the IEEE "inexact" exception). > now neither does. shall I turn opts back on and try a few more > cases? Yes, please, one more: 4.0 * 3.1 Or, if that works, go back to the failing 4.0 * math.exp(-0.5) In any failing case, can you jump into a debubber and get a stack trace? Do you happen to have WANT_SIGFPE_HANDLER #define'd when you compile Python on this platform? If so, it complicates the code a lot. I wonder about that because you got a "bus error", and when WANT_SIGFPE_HANDLER is #defined we get a whole pile of ugly setjmp/longjmp code that doesn't show up on my box. Another tack, as a temporary workaround: try disabling optimization only for Objects/floatobject.c. That will probably fix the problem, and if so that's enough of a workaround to get you unstuck while pursuing these other irritations. From cgw at alum.mit.edu Wed Feb 14 01:34:11 2001 From: cgw at alum.mit.edu (Charles G Waldman) Date: Tue, 13 Feb 2001 18:34:11 -0600 (CST) Subject: [Python-Dev] failure: 2.1a2 on HP-UX with native compiler Message-ID: <14985.53891.987696.686572@sirius.net.home> Allow me to start off with a personal note. 
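[Illustrative aside on the jumpahead arithmetic in Skip's quote: 6953607871644 is the period of the Wichmann-Hill generator random.py used at the time, so advancing the state by (period - n) steps is the same as stepping back n -- which is all a jumpback() wrapper would amount to. A sketch, relying on the 2.1-era generator; later Pythons replaced both the generator and the meaning of jumpahead, and the helper name below is hypothetical.]

    import random

    PERIOD = 6953607871644L        # Wichmann-Hill period, per the docstring example

    g = random.Random(42)
    x = g.random()                 # state has now advanced one step
    g.jumpahead(PERIOD - 1)        # forward period-1 steps == back one step
    assert g.random() == x         # the same number comes out again

    def jumpback(gen, n):
        # hypothetical helper, equivalent to Skip's proposed method
        gen.jumpahead(PERIOD - n)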
I am no longer @fnal.gov, I have a new job which is very interesting and challenging but not particularly Python-related - [my new employer is geodesic.com] I will have much less time to devote to Python from now on, but I'm still interested, and since I have access to a lot of unusual hardware at my new job (Linux/360 anybody?) I am going to try to download and test alphas and betas as much as time permits. Along these lines, I tried building the 2.1a2 version on an SMP HP box: otto:Python-2.1a2$ uname -a HP-UX otto B.11.00 U 9000/800 137901547 unlimited-user license this box has both gcc and the native compiler installed, but not g++. I tried to configure with the command: otto:Python-2.1a2$ ./configure --without-gcc creating cache ./config.cache checking MACHDEP... hp-uxB checking for --without-gcc... yes checking for --with-cxx= ... no checking for c++... no checking for g++... no checking for gcc... gcc checking whether the C++ compiler (gcc ) works... no configure: error: installation or configuration problem: C++ compiler cannot create executables. Seems like the "--without-gcc" flag is being completely ignored! I'll try to track this down as time permits. From tim.one at home.com Wed Feb 14 02:24:00 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 13 Feb 2001 20:24:00 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <200102132313.SAA18504@cj20424-a.reston1.va.home.com> Message-ID: [Guido] > You're lucky that doctest doesn't return dictionaries! For functions > that return dictionaries, it's much more natural *for documentation > purposes* to write > > >>> book() > {'Fred': 'mom', 'Ron': 'Snape'} > > than the necessary work-around. You may deny that's a problem, but > once we've explained dictionaries to our users, we can expect them to > understand that if they see instead > > >>> book() > {'Ron': 'Snape', 'Fred': 'mom'} > > they will understand that that's the same thing. Writing it this way > is easier to read than > > >>> book() == {'Ron': 'Snape', 'Fred': 'mom'} > 1 > > I always have to look twice when I see something like that. >>> sortdict(book()) {'Fred': 'mom', 'Ron': 'Snape'} Explicit is better etc. If I have a module that's going to be showing a lot of dict output, I'll write a little "sortdict" function at the top of the docs and explain why it's there. It's clear from c.l.py postings over the years that lots of people *don't* grasp that dicts are "unordered". Introducing a sortdict() function serves a useful pedagogical purpose for them too. More subtle than dicts for most people is examples showing floating-point output. This isn't reliable across platforms (and, e.g., it's no coincidence that most of the .ratio() etc examples in difflib.py are contrived to return values exactly representable in binary floating-point; but simple fractions like 3/4 are also easiest for people to visualize, so that also makes for good examples). > They [difflib.py docstring docs] are also more voluminous than I'd > like the docs for difflib to be... Not me -- there's nothing in them that I as a potential user don't need to know. But then I think the Library docs are too terse in general. Indeed, Fredrick makes part of his living selling a 300-page book supplying desperately needed Library examples <0.5 wink>. WRT difflib.py, it's OK by me if Fred throws out the examples when LaTeXing the module docstring, because a user can still get the info *from* the docstrings. 
For that matter, he may as well throw out everything except the first line or two of each method description, if you want bare-bones minimal docs for the manual. no-denying-that-examples-take-space-but-what's-painful-to-include- in-the-latex-docs-is-trivial-to-maintain-in-the-code-ly y'rs - tim From tommy at ilm.com Wed Feb 14 02:57:03 2001 From: tommy at ilm.com (Flying Cougar Burnette) Date: Tue, 13 Feb 2001 17:57:03 -0800 (PST) Subject: [Python-Dev] troubling math bug under IRIX 6.5 In-Reply-To: References: <14985.46047.226447.573927@mace.lucasdigital.com> Message-ID: <14985.58539.114838.36680@mace.lucasdigital.com> Tim Peters writes: | | > now neither does. shall I turn opts back on and try a few more | > cases? | | Yes, please, one more: | | 4.0 * 3.1 | | Or, if that works, go back to the failing | | 4.0 * math.exp(-0.5) both of these work, but changing the 4.0 to an integer 4 produces the bus error. so it is definitely a conversion to double/float thats the problem. | | In any failing case, can you jump into a debubber and get a stack trace? sure. I've included an entire dbx session at the end of this mail. | | Do you happen to have | | WANT_SIGFPE_HANDLER | | #define'd when you compile Python on this platform? If so, it complicates | the code a lot. I wonder about that because you got a "bus error", and when | WANT_SIGFPE_HANDLER is #defined we get a whole pile of ugly setjmp/longjmp | code that doesn't show up on my box. a peek at config.h shows the WANT_SIGFPE_HANDLER define commented out. should I turn it on and see what happens? | | Another tack, as a temporary workaround: try disabling optimization only | for Objects/floatobject.c. That will probably fix the problem, and if so | that's enough of a workaround to get you unstuck while pursuing these other | irritations. this one works just fine. workarounds aren't a problem for me right now since I'm in no hurry to get this version in use here. I'm just trying to help debug this version for irix users in general. ------------%< snip %<----------------------%< snip %<------------ (tommy at mace)/u0/tommy/pycvs/python/dist/src$ dbx python dbx version 7.3 65959_Jul11 patchSG0003841 Jul 11 2000 02:29:30 Executable /usr/u0/tommy/pycvs/python/dist/src/python (dbx) run Process 563746 (python) started Python 2.1a2 (#6, Feb 13 2001, 17:43:32) [C] on irix6 Type "copyright", "credits" or "license" for more information. 
>>> 3 * 4.0 12.0 >>> import math >>> 4 * math.exp(-.5) Process 563746 (python) stopped on signal SIGBUS: Bus error (default) at [float_mul:383 +0x4,0x1004c158] 383 CONVERT_TO_DOUBLE(v, a); (dbx) l >* 383 CONVERT_TO_DOUBLE(v, a); 384 CONVERT_TO_DOUBLE(w, b); 385 PyFPE_START_PROTECT("multiply", return 0) 386 a = a * b; 387 PyFPE_END_PROTECT(a) 388 return PyFloat_FromDouble(a); 389 } 390 391 static PyObject * 392 float_div(PyObject *v, PyObject *w) (dbx) t > 0 float_mul(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/floatobject.c":383, 0x1004c158] 1 binary_op1(0x100b69fc, 0x10116788, 0x0, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/abstract.c":337, 0x1003ac5c] 2 binary_op(0x100b69fc, 0x10116788, 0x8, 0x0, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/abstract.c":373, 0x1003ae70] 3 PyNumber_Multiply(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/abstract.c":544, 0x1003b5a4] 4 eval_code2(0x1012c688, 0x0, 0xffffffec, 0x0, 0x0, 0x0, 0x0, 0x0) ["/usr/u0/tommy/pycvs/python/dist/src/Python/ceval.c":896, 0x10034a54] 5 PyEval_EvalCode(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/ceval.c":336, 0x10031768] 6 run_node(0x100f88c0, 0x10116788, 0x0, 0x0, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":931, 0x10040444] 7 PyRun_InteractiveOne(0x0, 0x100b1878, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":540, 0x1003f1f0] 8 PyRun_InteractiveLoop(0xfb4a398, 0x100b1878, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":486, 0x1003ef84] 9 PyRun_AnyFileEx(0xfb4a398, 0x100b1878, 0x0, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":461, 0x1003eeac] 10 Py_Main(0x1, 0x0, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Modules/main.c":292, 0x1000bba4] 11 main(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Modules/python.c":10, 0x1000b7bc] More (n if no)?y 12 __start() ["/xlv55/kudzu-apr12/work/irix/lib/libc/libc_n32_M4/csu/crt1text.s":177, 0x1000b558] (dbx) From fdrake at acm.org Wed Feb 14 04:10:20 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Tue, 13 Feb 2001 22:10:20 -0500 (EST) Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: <200102132343.MAA05559@s454.cosc.canterbury.ac.nz> References: <200102132332.SAA18696@cj20424-a.reston1.va.home.com> <200102132343.MAA05559@s454.cosc.canterbury.ac.nz> Message-ID: <14985.63260.81788.746125@cj42289-a.reston1.va.home.com> Greg Ewing writes: > How about introducing the following hierarchy: > > CompileTimeError > SyntaxError > LiteralRangeError > > LiteralRangeError could inherit from ValueError as well > if you want. I like this! -Fred -- Fred L. Drake, Jr. 
PythonLabs at Digital Creations From tim.one at home.com Wed Feb 14 05:13:00 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 13 Feb 2001 23:13:00 -0500 Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: Message-ID: [Thomas Wouters] > ... what is wrong (in the \x escape example) is the *value* of > something (of the \x escape in question.) If a syntax error was > raised, I would think something was wrong with the syntax. But > the \x is placed in the right spot, inside a string literal. The > string literal itself is placed right. Why would it be a syntax > error ? Oh, why not . The syntax of an \x escape is "\\" "x" hexdigit hexdigit and to call something that doesn't match that syntax a SyntaxError isn't much of a stretch. Neither is calling it a ValueError. [Guido] > Another solution (borrowing from C): automatically promote int > literals to long if they can't be evaluated as ints. Yes! The user-visible distinction between ints and longs causes more problems than it solves. Would also get us one step closer to punting the incomprehensible "because the grammar implies it" answer to the FAQlet: Yo, Phyton d00dz! What's up with this? >>> x = "-2147483648" >>> int(x) -2147483648 >>> eval(x) Traceback (most recent call last): File " ", line 1, in ? OverflowError: integer literal too large >>> From skip at mojam.com Wed Feb 14 04:56:11 2001 From: skip at mojam.com (Skip Montanaro) Date: Tue, 13 Feb 2001 21:56:11 -0600 (CST) Subject: [Python-Dev] random.jumpback? Message-ID: <14986.475.685764.347334@beluga.mojam.com> I was adding __all__ to the random module and I noticed this very unpythonic example in the module docstring: >>> g = Random(42) # arbitrary >>> g.random() 0.25420336316883324 >>> g.jumpahead(6953607871644L - 1) # move *back* one >>> g.random() 0.25420336316883324 Presuming backing up the seed is a reasonable thing to do (I haven't got much experience with random numbers), why doesn't the Random class have a jumpback method instead of forcing the user to know the magic number to use with jumpahead? def jumpback(self, n): return self.jumpahead(6953607871644L - n) Skip From skip at mojam.com Wed Feb 14 03:43:21 2001 From: skip at mojam.com (Skip Montanaro) Date: Tue, 13 Feb 2001 20:43:21 -0600 (CST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: References: Message-ID: <14985.61641.213866.206076@beluga.mojam.com> I must admit to being unfamiliar with all the options available. How well does doctest work if the output of an example or test doesn't lend itself to execution at an interactive prompt? Skip From tim.one at home.com Wed Feb 14 06:34:35 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 14 Feb 2001 00:34:35 -0500 Subject: [Python-Dev] random.jumpback? In-Reply-To: <14986.475.685764.347334@beluga.mojam.com> Message-ID: [Skip Montanaro] > I was adding __all__ to the random module and I noticed this very > unpythonic example in the module docstring: > > >>> g = Random(42) # arbitrary > >>> g.random() > 0.25420336316883324 > >>> g.jumpahead(6953607871644L - 1) # move *back* one > >>> g.random() > 0.25420336316883324 Did you miss the sentence preceding the example, starting "Just for fun"? > Presuming backing up the seed is a reasonable thing to do > ... It isn't -- it's just for fun. > (I haven't got much experience with random numbers), If you did, you would have been howling with joy at how much fun you were having . 
From tim.one at home.com Wed Feb 14 07:45:15 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 14 Feb 2001 01:45:15 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <200102132145.QAA18076@cj20424-a.reston1.va.home.com> Message-ID: [Tim] > Not to my mind. doctest is intentionally picky about exact matches, > for reasons explained earlier. If the docs for a thing say "integer > division or modulo by zero" is expected, but running it says something > else, the docs are wrong and doctest's primary *purpose* is to point > that out loudly. [Guido] > Of course, this is means that *if* you use doctest, all authoritative > docs should be in the docstring, and not elsewhere. I don't know why you would reach that conclusion. My own Python work in years past had overwhelmingly little to do with anything in the Python distribution, and I surely did put all my docs in my modules. It was my only realistic choice, and doctest grew in part out of that "gotta put everything in one file, cuz one file is all I got" way of working. By allowing to put the docs for a thing right next to the tests for a thing right next to the code for a thing, doctest changed the *nature* of that compromise from a burden to a relative joy. Doesn't mean the docs couldn't or shouldn't be elsewhere, though, unless you assume that only the "authoritative docs" need to be accurate (I prefer that all docs tell the truth ). I know some people have adapted the guts of doctest to ensuring that their LaTeX and/or HTML Python examples work as advertised too. Cool! The Python Tutorial is eternally out of synch in little ways with what the matching release actually does. > Which brings us back to the eternal question of how to indicate > mark-up in docstrings. I announced a few years ago I was done waiting for mark-up to reach consensus, and was going to just go ahead and write useful docstrings regardless. Never had cause to regret that -- mark-up is the tail wagging the dog, and I don't know why people tolerate it (well, yes I do: "but there's no mark-up defined!" is an excuse to put off writing decent docs! but you really don't need six levels of nested lists-- or even one --to get 99% of the info across). > Is everything connected to everything? when-it's-convenient-to-believe-it-and-a-few-times-even-when-not-ly y'rs - tim From tim.one at home.com Wed Feb 14 07:52:37 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 14 Feb 2001 01:52:37 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <14985.61641.213866.206076@beluga.mojam.com> Message-ID: [Skip] > I must admit to being unfamiliar with all the options available. How > well does doctest work if the output of an example or test doesn't > lend itself to execution at an interactive prompt? If an indication of success/failure can't be produced on stdout, doctest is useless. OTOH, if you have any automatable way whatsoever to test a thing, I'm betting you could dream up a way to print yes or no to stdout accordingly. If not, you probably need to work on something other than your testing strategy first . From tim.one at home.com Wed Feb 14 10:14:11 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 14 Feb 2001 04:14:11 -0500 Subject: [Python-Dev] failure: 2.1a2 on HP-UX with native compiler In-Reply-To: <14985.53891.987696.686572@sirius.net.home> Message-ID: [Charles G Waldman] > Allow me to start off with a personal note. OK, but only once per decade (my turn: I found a mole with an unusual color ). 
> I am no longer @fnal.gov, I have a new job which is very interesting > and challenging but not particularly Python-related - [my new employer > is geodesic.com] Cool! So give us a copy of Great Circle for free, and in turn we'll let you upgrade their website to Zope for free <0.9 wink>. > ... > Along these lines, I tried building the 2.1a2 version on an SMP HP > box: You are toooo brave, Charles! If you ever manage to get Python to compile on that box, do Guido a huge favor and figure out the right way to close the unending stream of "threads don't work on HP-UX" bugs. Few HP-UX users appear to be systems software developers, and that means we never get a clear picture about what the thread story is there -- except that they don't work (== won't even compile) for many users, and no contributed patch ever applied has managed to stop the complaints. After that, Linux/360 should be a vacation. if-geodesic-can-speed-cold-fusion-by-1200%-just-imagine-what- they-could-for-python-ly y'rs - tim From thomas at xs4all.net Wed Feb 14 10:32:58 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Wed, 14 Feb 2001 10:32:58 +0100 Subject: [Python-Dev] failure: 2.1a2 on HP-UX with native compiler In-Reply-To: <14985.53891.987696.686572@sirius.net.home>; from cgw@alum.mit.edu on Tue, Feb 13, 2001 at 06:34:11PM -0600 References: <14985.53891.987696.686572@sirius.net.home> Message-ID: <20010214103257.F4924@xs4all.nl> On Tue, Feb 13, 2001 at 06:34:11PM -0600, Charles G Waldman wrote: > this box has both gcc and the native compiler installed, but not g++. > I tried to configure with the command: > otto:Python-2.1a2$ ./configure --without-gcc > configure: error: installation or configuration problem: C++ compiler cannot create executables. > Seems like the "--without-gcc" flag is being completely ignored! Yes. --without-gcc is only used for the C compiler, not the C++ one. For the C++ compiler, if you do not specify '--with-cxx=...', configure uses the first existing program out of this list: $CCC c++ g++ gcc CC cxx cc++ cl The check to determine whether the chosen compiler actually works is made later, and if it doesn't work, it won't try the next one in the list. The solution is thus to provide a working CXX compiler using --with-cxx= . Two questions for python-dev (in particular autoconf-god Eric -- time to earn your pay! ;-) Is there a reason '$CXX' is not in the list of tested C++ compilers, even before $CCC ? That would allow CXX=c++-compiler ./configure to work. As for the other question: The --without-gcc usage message seems wrong: AC_ARG_WITH(gcc, [ --without-gcc never use gcc], [ Asside from '--without-gcc', you can also use '--with-gcc' and '--with-gcc= '. Is there a specific reason not to document that ? -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From andy at reportlab.com Wed Feb 14 10:49:29 2001 From: andy at reportlab.com (Andy Robinson) Date: Wed, 14 Feb 2001 09:49:29 -0000 Subject: [Python-Dev] Unit Testing in San Diego Message-ID: The O'Reilly Conference Committee needs proposals about a week ago for the conference in San Diego on July 24th-27th. I think there should be a short talk on unit testing, showing how to use PyUnit, Doctest, and Quixote if they have not all merged into one glorious unified whole by then. I can do this - we've used PyUnit a lot lately - but have other talks I'd rather concentrate on. Is there anyone here who will be there and would like to give such a talk? I'm sure the committee would welcome a submission. 
Andy Robinson CEO and Chief Architect, ReportLab Inc. From mal at lemburg.com Wed Feb 14 11:19:48 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Wed, 14 Feb 2001 11:19:48 +0100 Subject: [Python-Dev] SyntaxError for illegal literals References: <20010214005716.D4924@xs4all.nl> Message-ID: <3A8A5BC4.64298EA6@lemburg.com> Thomas Wouters wrote: > > On Tue, Feb 13, 2001 at 03:11:10PM -0800, Ka-Ping Yee wrote: > > > The strongest reason is that a long file with a typo in a string > > literal somewhere in hundreds of lines of code generates only > > > ValueError: invalid \x escape > > > with no indication to where the error is -- not even which file! > > I realize this could be hacked upon and fixed, but i think it points > > to a general inconsistency that ought to be considered and addressed. > > This has nothing to do with the error being a ValueError, but with some > (compile-time) errors not being promoted to 'full' errors. See > > https://sourceforge.net/patch/?func=detailpatch&patch_id=101782&group_id=5470 > > The same issue came up when importing modules that did 'from foo import *' > in a function scope. Right and I think this touches the core of the problem. SyntaxErrors produce a proper traceback while ValueErrors (and others) just print a single line which doesn't even have the filename or line number. I wonder why the PyErr_PrintEx() (pythonrun.c) error handler only tries to parse SyntaxErrors for .filename and .lineno parameters. Looking at compile.c these should be settable on all exception object (since these are now proper instances). Perhaps lifting the restriction in PyErr_PrintEx() and making the parse_syntax_error() API a little more robust might do the trick. Then the various direct PyErr_SetString() calls in compile.c should be converted to use com_error() instead (if possible). -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From ping at lfw.org Wed Feb 14 12:08:29 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Wed, 14 Feb 2001 03:08:29 -0800 (PST) Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: <3A8A5BC4.64298EA6@lemburg.com> Message-ID: I wrote: > The strongest reason is that a long file with a typo in a string > literal somewhere in hundreds of lines of code generates only > > ValueError: invalid \x escape > > with no indication to where the error is -- not even which file! Thomas Wouters wrote: > This has nothing to do with the error being a ValueError, but with some > (compile-time) errors not being promoted to 'full' errors. See I think they are entirely related. All ValueErrors should be run-time errors; a ValueError should never occur during compilation. The key issue is communicating clearly with the user, and that's just not what ValueError *means*. M.-A. Lemburg wrote: > Right and I think this touches the core of the problem. SyntaxErrors > produce a proper traceback while ValueErrors (and others) just print > a single line which doesn't even have the filename or line number. This follows sensibly from the fact that SyntaxErrors are always compile-time errors (and therefore have no traceback or frame at the level where the error occurred). ValueErrors are usually run-time errors, so .filename and .lineno attributes would be redundant; this information is already available in the associated frame object. 
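[A hedged illustration of the attribute difference being discussed: a compile-time SyntaxError carries .filename and .lineno attributes for the error printer to read. The file name below is made up for the example:

    try:
        compile("def f(:\n    pass\n", "example.py", "exec")
    except SyntaxError, err:
        print err.filename, err.lineno   # -> example.py 1

As noted in this thread, PyErr_PrintEx() only looks for those attributes on SyntaxError, which is why a compile-time ValueError such as the invalid \x escape surfaces as a bare one-line message with no file or line information.]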
> Perhaps lifting the restriction in PyErr_PrintEx() and making the > parse_syntax_error() API a little more robust might do the trick. > Then the various direct PyErr_SetString() calls in compile.c > should be converted to use com_error() instead (if possible). That sounds like a significant amount of work, and i'm not sure it's the right answer. If we just clarify the boundary by making sure make sure that all, and only, compile-time errors are SyntaxErrors, everything would work properly and the meaning of the various exception classes would be clearer. The only exceptions that don't currently conform, as far as i know, have to do with invalid literals. -- ?!ng From ping at lfw.org Wed Feb 14 12:21:51 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Wed, 14 Feb 2001 03:21:51 -0800 (PST) Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: <20010214005716.D4924@xs4all.nl> Message-ID: On Wed, 14 Feb 2001, Thomas Wouters wrote: > > 3. A ValueError means "i got a value that wasn't allowed or > > expected here". That is not at all what is happening. > > There is *no* defined value at all. It's not that there > > was a value and it was wrong -- the value was never even > > brought into existence. > > Not quite true. It wasn't *compiled*, but it's a literal, so it does exist. > The problem is not the value of a compiled \x escape, but the value after > the \x. No, it doesn't exist -- not in the Python world, anyway. There is no Python object corresponding to the literal. That's what i meant by not existing. I think this is an okay choice of meaning for "exist", since, after all, the point of the language is to abstract away lower levels so programmers can think in that higher-level "Python world". > > I hope you will agree with me that solving only #4 by changing > > ValueErrors so they behave a little more like SyntaxErrors in > > certain particular situations isn't the best solution. > > I don't, really. The name 'ValueError' is exactly right: what is wrong (in > the \x escape example) is the *value* of something (of the \x escape in > question.) The previous paragraph pretty much answers this, but i'll clarify. My understanding of ValueError, as it holds in all other situations but this one, is that a Python value of the right type was supplied but it was otherwise wrong -- illegal, or unexpected, or something of that sort. The documentation on the exceptions module says: ValueError Raised when a built-in operation or function receives an argument that has the right type but an inappropriate value, and the situation is not described by a more precise exception such as IndexError. That doesn't apply to "\xgh" or 1982391879487124. > If a syntax error was raised, I would think something was wrong > with the syntax. But there is. "\x45" is syntax for the letter E. It generates the semantics "the character object with ordinal 69 (corresponding to the uppercase letter E in ASCII)". "\xgh" doesn't generate any semantics -- we stop before we get there, because the syntax is wrong. -- ?!ng From ping at lfw.org Wed Feb 14 12:31:34 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Wed, 14 Feb 2001 03:31:34 -0800 (PST) Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: <200102132332.SAA18696@cj20424-a.reston1.va.home.com> Message-ID: On Tue, 13 Feb 2001, Guido van Rossum wrote: > All good points, except that I still find it hard to flag overflow > errors as syntax errors, especially since overflow is platform > defined. I know it may seem weird. 
I tend to see it as a consequence of the language definition, though, not as the wrong choice of error. If you had to write a truly platform-independent Python language definition (a worthwhile endeavour, by the way, especially given that there are already at least CPython, JPython, and stackless), the decision about this would have to be made there. > On one platform, 1000000000000 is fine; on another it's a > SyntaxError. That could be confusing. So far, Python is effectively defined in such a way that 100000000000 has a meaning on one platform and has no meaning on another. So, yeah, that's the way it is. > Another solution (borrowing from C): automatically promote int > literals to long if they can't be evaluated as ints. Quite reasonable, yes. But i'd go further than that. I think everyone so far has been in agreement that the division between ints and long ints should eventually be abolished, and we're just waiting for someone brave enough to come along and make it happen. I know i've got my fingers crossed. :) (And maybe after we deprecate 'L', we can deprecate capital 'J' on numbers and 'R', 'U' on strings too...) toowtdi-ly yours, -- ?!ng From ping at lfw.org Wed Feb 14 12:36:54 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Wed, 14 Feb 2001 03:36:54 -0800 (PST) Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: <200102132343.MAA05559@s454.cosc.canterbury.ac.nz> Message-ID: On Wed, 14 Feb 2001, Greg Ewing wrote: > How about introducing the following hierarchy: > > CompileTimeError > SyntaxError > LiteralRangeError > > LiteralRangeError could inherit from ValueError as well > if you want. I suppose that's all right, and i wouldn't complain, but i don't think it's all that necessary either. Compile-time errors *are* syntax errors. What else could they be? (Aside from fatal errors or limitations of the compiler implementation, that is, but again that's outside of the abstraction we're presenting to the Python user.) Think of it this way: if there's a problem with your Python program, it's either a problem with *how* it expresses something (syntax), or with *what* it expresses (semantics). The syntactic errors occur at compile-time and the semantic errors occur at run-time. -- ?!ng From mal at lemburg.com Wed Feb 14 13:00:42 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Wed, 14 Feb 2001 13:00:42 +0100 Subject: [Python-Dev] SyntaxError for illegal literals References: Message-ID: <3A8A736A.917F7D41@lemburg.com> Ka-Ping Yee wrote: > > I wrote: > > The strongest reason is that a long file with a typo in a string > > literal somewhere in hundreds of lines of code generates only > > > > ValueError: invalid \x escape > > > > with no indication to where the error is -- not even which file! > > Thomas Wouters wrote: > > This has nothing to do with the error being a ValueError, but with some > > (compile-time) errors not being promoted to 'full' errors. See > > I think they are entirely related. All ValueErrors should be run-time > errors; a ValueError should never occur during compilation. The key > issue is communicating clearly with the user, and that's just not what > ValueError *means*. > > M.-A. Lemburg wrote: > > Right and I think this touches the core of the problem. SyntaxErrors > > produce a proper traceback while ValueErrors (and others) just print > > a single line which doesn't even have the filename or line number. 
> > This follows sensibly from the fact that SyntaxErrors are always > compile-time errors (and therefore have no traceback or frame at the > level where the error occurred). ValueErrors are usually run-time > errors, so .filename and .lineno attributes would be redundant; > this information is already available in the associated frame object. Those attributes are added to the error object by set_error_location() in compile.c. Since the error objects are Python instances, the function will set those attribute on any error which the compiler raises and IMHO, this would be a good thing. > > Perhaps lifting the restriction in PyErr_PrintEx() and making the > > parse_syntax_error() API a little more robust might do the trick. > > Then the various direct PyErr_SetString() calls in compile.c > > should be converted to use com_error() instead (if possible). > > That sounds like a significant amount of work, and i'm not sure it's > the right answer. Changing all compile time errors to SyntaxError requires much the same amount of work... you'd have to either modify the code to use com_error() or check for errors and then redirect them to com_error() (e.g. for codec errors). > If we just clarify the boundary by making sure > make sure that all, and only, compile-time errors are SyntaxErrors, > everything would work properly and the meaning of the various > exception classes would be clearer. The only exceptions that don't > currently conform, as far as i know, have to do with invalid literals. Well, there are also system and memory errors and the codecs are free to raise any other kind of error as well. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From guido at digicool.com Wed Feb 14 14:52:27 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 14 Feb 2001 08:52:27 -0500 Subject: [Python-Dev] random.jumpback? In-Reply-To: Your message of "Wed, 14 Feb 2001 00:34:35 EST." References: Message-ID: <200102141352.IAA22006@cj20424-a.reston1.va.home.com> > [Skip Montanaro] > > I was adding __all__ to the random module and I noticed this very > > unpythonic example in the module docstring: > > > > >>> g = Random(42) # arbitrary > > >>> g.random() > > 0.25420336316883324 > > >>> g.jumpahead(6953607871644L - 1) # move *back* one > > >>> g.random() > > 0.25420336316883324 [Tim] > Did you miss the sentence preceding the example, starting "Just for fun"? In that vein, the example isn't compatible with doctest, is it? --Guido van Rossum (home page: http://www.python.org/~guido/) From sjoerd at oratrix.nl Wed Feb 14 14:56:16 2001 From: sjoerd at oratrix.nl (Sjoerd Mullender) Date: Wed, 14 Feb 2001 14:56:16 +0100 Subject: [Python-Dev] troubling math bug under IRIX 6.5 In-Reply-To: Your message of Tue, 13 Feb 2001 17:57:03 -0800. <14985.58539.114838.36680@mace.lucasdigital.com> References: <14985.46047.226447.573927@mace.lucasdigital.com> <14985.58539.114838.36680@mace.lucasdigital.com> Message-ID: <20010214135617.A99853021C2@bireme.oratrix.nl> As an extra datapoint: I just tried this (4 * math.exp(-0.5)) on my SGI O2 and on our SGI file server with the current CVS version of Python, compiled with -O. I don't get a crash. I am running IRIX 6.5.10m on the O2 and 6.5.2m on the server. What version are you running? On Tue, Feb 13 2001 Flying Cougar Burnette wrote: > Tim Peters writes: > | > | > now neither does. 
shall I turn opts back on and try a few more > | > cases? > | > | Yes, please, one more: > | > | 4.0 * 3.1 > | > | Or, if that works, go back to the failing > | > | 4.0 * math.exp(-0.5) > > both of these work, but changing the 4.0 to an integer 4 produces the > bus error. so it is definitely a conversion to double/float thats > the problem. > > | > | In any failing case, can you jump into a debubber and get a stack trace? > > sure. I've included an entire dbx session at the end of this mail. > > | > | Do you happen to have > | > | WANT_SIGFPE_HANDLER > | > | #define'd when you compile Python on this platform? If so, it complicates > | the code a lot. I wonder about that because you got a "bus error", and when > | WANT_SIGFPE_HANDLER is #defined we get a whole pile of ugly setjmp/longjmp > | code that doesn't show up on my box. > > a peek at config.h shows the WANT_SIGFPE_HANDLER define commented > out. should I turn it on and see what happens? > > > | > | Another tack, as a temporary workaround: try disabling optimization only > | for Objects/floatobject.c. That will probably fix the problem, and if so > | that's enough of a workaround to get you unstuck while pursuing these other > | irritations. > > this one works just fine. workarounds aren't a problem for me right > now since I'm in no hurry to get this version in use here. I'm just > trying to help debug this version for irix users in general. > > > ------------%< snip %<----------------------%< snip %<------------ > > (tommy at mace)/u0/tommy/pycvs/python/dist/src$ dbx python > dbx version 7.3 65959_Jul11 patchSG0003841 Jul 11 2000 02:29:30 > Executable /usr/u0/tommy/pycvs/python/dist/src/python > (dbx) run > Process 563746 (python) started > Python 2.1a2 (#6, Feb 13 2001, 17:43:32) [C] on irix6 > Type "copyright", "credits" or "license" for more information. 
> >>> 3 * 4.0 > 12.0 > >>> import math > >>> 4 * math.exp(-.5) > Process 563746 (python) stopped on signal SIGBUS: Bus error (default) at [float_mul:383 +0x4,0x1004c158] > 383 CONVERT_TO_DOUBLE(v, a); > (dbx) l > >* 383 CONVERT_TO_DOUBLE(v, a); > 384 CONVERT_TO_DOUBLE(w, b); > 385 PyFPE_START_PROTECT("multiply", return 0) > 386 a = a * b; > 387 PyFPE_END_PROTECT(a) > 388 return PyFloat_FromDouble(a); > 389 } > 390 > 391 static PyObject * > 392 float_div(PyObject *v, PyObject *w) > (dbx) t > > 0 float_mul(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/floatobject.c":383, 0x1004c158] > 1 binary_op1(0x100b69fc, 0x10116788, 0x0, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/abstract.c":337, 0x1003ac5c] > 2 binary_op(0x100b69fc, 0x10116788, 0x8, 0x0, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/abstract.c":373, 0x1003ae70] > 3 PyNumber_Multiply(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/abstract.c":544, 0x1003b5a4] > 4 eval_code2(0x1012c688, 0x0, 0xffffffec, 0x0, 0x0, 0x0, 0x0, 0x0) ["/usr/u0/tommy/pycvs/python/dist/src/Python/ceval.c":896, 0x10034a54] > 5 PyEval_EvalCode(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/ceval.c":336, 0x10031768] > 6 run_node(0x100f88c0, 0x10116788, 0x0, 0x0, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":931, 0x10040444] > 7 PyRun_InteractiveOne(0x0, 0x100b1878, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":540, 0x1003f1f0] > 8 PyRun_InteractiveLoop(0xfb4a398, 0x100b1878, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":486, 0x1003ef84] > 9 PyRun_AnyFileEx(0xfb4a398, 0x100b1878, 0x0, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":461, 0x1003eeac] > 10 Py_Main(0x1, 0x0, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Modules/main.c":292, 0x1000bba4] > 11 main(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Modules/python.c":10, 0x1000b7bc] > More (n if no)?y > 12 __start() ["/xlv55/kudzu-apr12/work/irix/lib/libc/libc_n32_M4/csu/crt1text.s":177, 0x1000b558] > (dbx) > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > -- Sjoerd Mullender From moshez at zadka.site.co.il Wed Feb 14 17:47:17 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Wed, 14 Feb 2001 18:47:17 +0200 (IST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <200102132145.QAA18076@cj20424-a.reston1.va.home.com> References: <200102132145.QAA18076@cj20424-a.reston1.va.home.com>, Message-ID: <20010214164717.24AA1A840@darjeeling.zadka.site.co.il> On Tue, 13 Feb 2001 16:45:56 -0500, Guido van Rossum wrote: > Of course, this is means that *if* you use doctest, all authoritative > docs should be in the docstring, and not elsewhere. 
Which brings us > back to the eternal question of how to indicate mark-up in > docstrings. Is everything connected to everything? No, but a lot of things are connected to documentation. As someone who works primarily in Perl nowadays, and hates it, I must say that as horrible and unaesthetic pod is, having perldoc package::module Just work is worth everything -- I've marked everything I wrote that way, and I can't begin to explain how much it helps. I'm slowly starting to think that the big problem with Python documentation is that you didn't pronounce. So, if some publisher needs to work harder to make dead-trees copies, it's fine by me, and even if the output looks a bit less "professional" it's also fine by me, as long as documentation is always in the same format, and always accessible by the same command. Consider this an offer to help to port (manually, if needs be) Python's current documentation. We had a DevDay, we have a sig, we have a PEP. None of this seems to help -- what we need is a BDFL's pronouncement, even if it's on the worst solution possibly imaginable. -- For public key: finger moshez at debian.org | gpg --import "Debian -- What your mother would use if it was 20 times easier" LUKE: Is Perl better than Python? YODA: No... no... no. Quicker, easier, more seductive. From moshez at zadka.site.co.il Wed Feb 14 17:57:35 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Wed, 14 Feb 2001 18:57:35 +0200 (IST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: References: Message-ID: <20010214165735.C6AB4A840@darjeeling.zadka.site.co.il> On Tue, 13 Feb 2001 20:24:00 -0500, "Tim Peters" wrote: > Not me -- there's nothing in them that I as a potential user don't need to > know. But then I think the Library docs are too terse in general. Indeed, > Fredrick makes part of his living selling a 300-page book supplying > desperately needed Library examples <0.5 wink>. I'm sorry, Tim, that's just too true. I want to explain my view about how it happened (I wrote some of them, and if you find a particularily terse one, just assume it's me) -- I write tersely. My boss yelled at me when doing this at work, and I redid all my internal documentation -- doubled the line count, beefed up with examples, etc. He actually submitted a bug in the internal bug tracking system to get me to do that ;-) So, I suggest you do the same -- there's no excuse for terseness, other then not-having-time, so it's really important that bugs like that are files. Something like "documentation for xxxlib is too terse". I can't promise I'll fix all these bugs, but I can try ;-) -- For public key: finger moshez at debian.org | gpg --import "Debian -- What your mother would use if it was 20 times easier" LUKE: Is Perl better than Python? YODA: No... no... no. Quicker, easier, more seductive. From fdrake at acm.org Wed Feb 14 18:40:47 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Wed, 14 Feb 2001 12:40:47 -0500 (EST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <20010214165735.C6AB4A840@darjeeling.zadka.site.co.il> References: <20010214165735.C6AB4A840@darjeeling.zadka.site.co.il> Message-ID: <14986.49951.471539.196962@cj42289-a.reston1.va.home.com> Moshe Zadka writes: > so it's really important that bugs like that are files. Something like > "documentation for xxxlib is too terse". I can't promise I'll fix all these > bugs, but I can try ;-) It would also be useful to tell what additional information you were looking for. 
We can probably find additional stuff to write on a lot of these, but that doesn't mean we'll interpret "too terse" in the most useful way. ;-) -Fred -- Fred L. Drake, Jr. PythonLabs at Digital Creations From tommy at ilm.com Wed Feb 14 18:57:24 2001 From: tommy at ilm.com (Flying Cougar Burnette) Date: Wed, 14 Feb 2001 09:57:24 -0800 (PST) Subject: [Python-Dev] troubling math bug under IRIX 6.5 In-Reply-To: <20010214135617.A99853021C2@bireme.oratrix.nl> References: <14985.46047.226447.573927@mace.lucasdigital.com> <14985.58539.114838.36680@mace.lucasdigital.com> <20010214135617.A99853021C2@bireme.oratrix.nl> Message-ID: <14986.49383.668942.359843@mace.lucasdigital.com> 'uname -a' tells me I'm running plain old 6.5 on my R10k O2 with version 7.3.1.1m of the sgi compiler. Which version of the compiler do you have? That might be the real culprit here. in fact... I just hopped onto a co-worker's machine that has version 7.3.1.2m of the compiler, remade everything, and the problem is gone. I think we can chalk this up to a compiler bug and take no further action. Thanks for listening... Sjoerd Mullender writes: | As an extra datapoint: | | I just tried this (4 * math.exp(-0.5)) on my SGI O2 and on our SGI | file server with the current CVS version of Python, compiled with -O. | I don't get a crash. | | I am running IRIX 6.5.10m on the O2 and 6.5.2m on the server. What | version are you running? | | On Tue, Feb 13 2001 Flying Cougar Burnette wrote: | | > Tim Peters writes: | > | | > | > now neither does. shall I turn opts back on and try a few more | > | > cases? | > | | > | Yes, please, one more: | > | | > | 4.0 * 3.1 | > | | > | Or, if that works, go back to the failing | > | | > | 4.0 * math.exp(-0.5) | > | > both of these work, but changing the 4.0 to an integer 4 produces the | > bus error. so it is definitely a conversion to double/float thats | > the problem. | > | > | | > | In any failing case, can you jump into a debubber and get a stack trace? | > | > sure. I've included an entire dbx session at the end of this mail. | > | > | | > | Do you happen to have | > | | > | WANT_SIGFPE_HANDLER | > | | > | #define'd when you compile Python on this platform? If so, it complicates | > | the code a lot. I wonder about that because you got a "bus error", and when | > | WANT_SIGFPE_HANDLER is #defined we get a whole pile of ugly setjmp/longjmp | > | code that doesn't show up on my box. | > | > a peek at config.h shows the WANT_SIGFPE_HANDLER define commented | > out. should I turn it on and see what happens? | > | > | > | | > | Another tack, as a temporary workaround: try disabling optimization only | > | for Objects/floatobject.c. That will probably fix the problem, and if so | > | that's enough of a workaround to get you unstuck while pursuing these other | > | irritations. | > | > this one works just fine. workarounds aren't a problem for me right | > now since I'm in no hurry to get this version in use here. I'm just | > trying to help debug this version for irix users in general. | > | > | > ------------%< snip %<----------------------%< snip %<------------ | > | > (tommy at mace)/u0/tommy/pycvs/python/dist/src$ dbx python | > dbx version 7.3 65959_Jul11 patchSG0003841 Jul 11 2000 02:29:30 | > Executable /usr/u0/tommy/pycvs/python/dist/src/python | > (dbx) run | > Process 563746 (python) started | > Python 2.1a2 (#6, Feb 13 2001, 17:43:32) [C] on irix6 | > Type "copyright", "credits" or "license" for more information. 
| > >>> 3 * 4.0 | > 12.0 | > >>> import math | > >>> 4 * math.exp(-.5) | > Process 563746 (python) stopped on signal SIGBUS: Bus error (default) at [float_mul:383 +0x4,0x1004c158] | > 383 CONVERT_TO_DOUBLE(v, a); | > (dbx) l | > >* 383 CONVERT_TO_DOUBLE(v, a); | > 384 CONVERT_TO_DOUBLE(w, b); | > 385 PyFPE_START_PROTECT("multiply", return 0) | > 386 a = a * b; | > 387 PyFPE_END_PROTECT(a) | > 388 return PyFloat_FromDouble(a); | > 389 } | > 390 | > 391 static PyObject * | > 392 float_div(PyObject *v, PyObject *w) | > (dbx) t | > > 0 float_mul(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/floatobject.c":383, 0x1004c158] | > 1 binary_op1(0x100b69fc, 0x10116788, 0x0, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/abstract.c":337, 0x1003ac5c] | > 2 binary_op(0x100b69fc, 0x10116788, 0x8, 0x0, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/abstract.c":373, 0x1003ae70] | > 3 PyNumber_Multiply(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/abstract.c":544, 0x1003b5a4] | > 4 eval_code2(0x1012c688, 0x0, 0xffffffec, 0x0, 0x0, 0x0, 0x0, 0x0) ["/usr/u0/tommy/pycvs/python/dist/src/Python/ceval.c":896, 0x10034a54] | > 5 PyEval_EvalCode(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/ceval.c":336, 0x10031768] | > 6 run_node(0x100f88c0, 0x10116788, 0x0, 0x0, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":931, 0x10040444] | > 7 PyRun_InteractiveOne(0x0, 0x100b1878, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":540, 0x1003f1f0] | > 8 PyRun_InteractiveLoop(0xfb4a398, 0x100b1878, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":486, 0x1003ef84] | > 9 PyRun_AnyFileEx(0xfb4a398, 0x100b1878, 0x0, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":461, 0x1003eeac] | > 10 Py_Main(0x1, 0x0, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Modules/main.c":292, 0x1000bba4] | > 11 main(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Modules/python.c":10, 0x1000b7bc] | > More (n if no)?y | > 12 __start() ["/xlv55/kudzu-apr12/work/irix/lib/libc/libc_n32_M4/csu/crt1text.s":177, 0x1000b558] | > (dbx) | > | > _______________________________________________ | > Python-Dev mailing list | > Python-Dev at python.org | > http://mail.python.org/mailman/listinfo/python-dev | > | | -- Sjoerd Mullender From tim.one at home.com Wed Feb 14 21:02:44 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 14 Feb 2001 15:02:44 -0500 Subject: [Python-Dev] random.jumpback? 
In-Reply-To: <200102141352.IAA22006@cj20424-a.reston1.va.home.com> Message-ID: [Skip Montanaro] >>> I was adding __all__ to the random module and I noticed this very >>> unpythonic example in the module docstring: >>> >>> >>> g = Random(42) # arbitrary >>> >>> g.random() >>> 0.25420336316883324 >>> >>> g.jumpahead(6953607871644L - 1) # move *back* one >>> >>> g.random() >>> 0.25420336316883324 [Tim] >> Did you miss the sentence preceding the example, starting "Just >> for fun"? [Guido] > In that vein, the example isn't compatible with doctest, is it? I'm not sure what you're asking. The example *works* under doctest, although random.py is not a doctest'ed module (it has an "eyeball test" at the end, and you have to be an expert to guess whether or not "it worked" from staring at the output -- not my doing, and way non-trivial to automate). So it's compatible in the "it works" sense, although it's vulnerable to x-platform fp output vagaries in the last few bits. If random.py ever gets doctest'ed, I'll fix that. Or maybe you're saying that a "just for fun" example doesn't need to be accurate? I'd disagree with that, but am not sure that's what you're saying, so won't disagree just yet . From fdrake at users.sourceforge.net Wed Feb 14 22:04:29 2001 From: fdrake at users.sourceforge.net (Fred L. Drake) Date: Wed, 14 Feb 2001 13:04:29 -0800 Subject: [Python-Dev] [development doc updates] Message-ID: The development version of the documentation has been updated: http://python.sourceforge.net/devel-docs/ From fredrik at effbot.org Wed Feb 14 22:14:27 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Wed, 14 Feb 2001 22:14:27 +0100 Subject: [Python-Dev] threads and gethostbyname Message-ID: <041201c096cb$1f46e040$e46940d5@hagrid> We have a Tkinter-based application that does DNS lookups (using socket.gethostbyname) in a background thread. Under 1.5.2, this worked without a hitch. However, under 2.0, the same program tends to lock up on some computers. I'm not 100% sure (it's a bit hard to debug), but it looks like a global lock problem... Any ideas? Is this supposed to work at all? Cheers /F From skip at mojam.com Wed Feb 14 22:24:50 2001 From: skip at mojam.com (Skip Montanaro) Date: Wed, 14 Feb 2001 15:24:50 -0600 (CST) Subject: [Python-Dev] random.jumpback? In-Reply-To: References: <200102141352.IAA22006@cj20424-a.reston1.va.home.com> Message-ID: <14986.63394.543321.783056@beluga.mojam.com> [Skip] I was adding __all__ to the random module and I noticed this very unpythonic example in the module docstring: [Tim] Did you miss the sentence preceding the example, starting "Just for fun"? I did, yes. [Guido] In that vein, the example isn't compatible with doctest, is it? [Tim] I'm not sure what you're asking. I interpreted Guido's comment to mean, "why include a useless example in documentation?" I guess that was my implicit assumption as well (again, ignoring the missed "just for fun" quote). Either it's a useful example embedded in the documentation or it's a test case that is perhaps not likely to be useful to an end user in which case it should be accessed via the module's __test__ dictionary. guido-did-i-channel-you-properly-ly? yr's, Skip From mwh21 at cam.ac.uk Wed Feb 14 23:36:18 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 14 Feb 2001 22:36:18 +0000 Subject: [Python-Dev] python-dev summaries? Message-ID: I notice that it's nearly a fortnight since AMK's last summary. 
I've started to put together a sumamry of the last two weeks, but I thought I'd ask first if anyone else was planning to do the same. I'd gladly concede the tediu^Wbragging rights to someone else, although I would like the chance get something out if the evening I spent writing code to do things like this: Number of articles in summary: 495 80 | ]|[ | ]|[ | ]|[ | ]|[ | ]|[ ]|[ 60 | ]|[ ]|[ | ]|[ ]|[ | ]|[ ]|[ | ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ 40 | ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ 20 | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ 0 +-029-067-039-037-080-048-020-009-040-021-008-030-043-024 Thu 01| Sat 03| Mon 05| Wed 07| Fri 09| Sun 11| Tue 13| Fri 02 Sun 04 Tue 06 Thu 08 Sat 10 Mon 12 Wed 14 If noone else is planning on doing a sumamry, I'll post a draft for comments sometime tomorrow. Cheers, M. -- I'm sorry, was my bias showing again? :-) -- William Tanksley, 13 May 2000 From tim.one at home.com Thu Feb 15 00:26:14 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 14 Feb 2001 18:26:14 -0500 Subject: [Python-Dev] random.jumpback? In-Reply-To: <14986.63394.543321.783056@beluga.mojam.com> Message-ID: [Skip] > I interpreted Guido's comment to mean, "why include a useless example in > documentation?" I guess that was my implicit assumption as well (again, > ignoring the missed "just for fun" quote). Either it's a useful example > embedded in the documentation or it's a test case that is perhaps not > likely to be useful to an end user in which case it should be accessed > via the module's __test__ dictionary. The example is not useful in practice, but is useful pedagogically, for someone who reads the example *in context*: + It makes concrete that .jumpahead() is fast for even monstrously large arguments (try it! it didn't even make you curious?). + It makes concrete that the period of the RNG definitely can be exhausted (something which earlier docstring text warned about in the context of threads, but abstractly). + It concretely demonstrates that the true period is at worst a factor of the documented period, something paranoid users want assurance about because they know from bitter experience that documented periods are often wrong (indeed, Wichmann and Hill made a bogus claim about the period of *this* generator when they first introduced it). A knowledgable user can build on that example to prove to themself quickly that the period is exactly as documented. + If anyone is under the illusion (and many are) that this kind of RNG is good for crypto work, the demonstrated trivial ease with which .jumpahead can move to any point in the sequence-- even trillions of elements ahead --should give them strong cause for healthy doubt. Cranking out cookies is useful, but teaching the interested reader something about the nature of the cookie machine is also useful, albeit in a different sense. unrepentantly y'rs - tim From jeremy at alum.mit.edu Wed Feb 14 22:32:10 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 14 Feb 2001 16:32:10 -0500 (EST) Subject: [Python-Dev] random.jumpback? 
In-Reply-To: References: <14986.63394.543321.783056@beluga.mojam.com> Message-ID: <14986.63834.23401.827764@w221.z064000254.bwi-md.dsl.cnc.net> I thought it was an excellent example for exactly the reasons Tim mentioned. I didn't try it, but I did wonder how long it would take :-). Jeremy From tim.one at home.com Thu Feb 15 09:00:49 2001 From: tim.one at home.com (Tim Peters) Date: Thu, 15 Feb 2001 03:00:49 -0500 Subject: [Python-Dev] python-dev summaries? In-Reply-To: Message-ID: [Michael Hudson, graduates from bytecodes to ASCII art] > ... > If noone else is planning on doing a sumamry, I'll post a draft for > comments sometime tomorrow. 1. If you solicit comments, it will be 3 months of debate before you get to post the thing <0.8 wink>. Just Do It. 2. Bless you! to-be-safe-simply-concatenate-all-the-msgs-and-post-the-whole- blob-without-comment-ly y'rs - tim From thomas at xs4all.net Thu Feb 15 09:05:51 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Thu, 15 Feb 2001 09:05:51 +0100 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <20010214165735.C6AB4A840@darjeeling.zadka.site.co.il>; from moshez@zadka.site.co.il on Wed, Feb 14, 2001 at 06:57:35PM +0200 References: <20010214165735.C6AB4A840@darjeeling.zadka.site.co.il> Message-ID: <20010215090551.J4924@xs4all.nl> On Wed, Feb 14, 2001 at 06:57:35PM +0200, Moshe Zadka wrote: > On Tue, 13 Feb 2001 20:24:00 -0500, "Tim Peters" wrote: > > Not me -- there's nothing in them that I as a potential user don't need to > > know. But then I think the Library docs are too terse in general. Indeed, > > Fredrick makes part of his living selling a 300-page book supplying > > desperately needed Library examples <0.5 wink>. > I'm sorry, Tim, that's just too true. You should be appologizing to Fred, not Tim :) While I agree with the both of you, I'm not sure if expanding the library reference is going to help the problem. I think what's missing is a library *tutorial*. The reference is exactly that, a reference, and if we expand the reference we'll end up cursing it ourself, should we ever need it. (okay, so noone here needs the reference anymore except me, but when looking at the reference, I like the terse descriptions of the modules. They're just reminders anyway.) I remember when I'd finished the Python tutorial and wondered where to go next. I tried reading the library reference, but it was boring and most of it not interesting (since it isn't built up to go from useful/common -> rare, but just a list of all modules ordered by 'service'.) I ended up doing the slow and cheap version of Fredrik's book: reading python-list ;) I'll write the library tutorial once I finish the 'from-foo-import-* considered harmful' chapter ;-) -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From tim.one at home.com Thu Feb 15 09:35:00 2001 From: tim.one at home.com (Tim Peters) Date: Thu, 15 Feb 2001 03:35:00 -0500 Subject: [Python-Dev] troubling math bug under IRIX 6.5 In-Reply-To: <14986.49383.668942.359843@mace.lucasdigital.com> Message-ID: [Flying Cougar Burnette] > 'uname -a' tells me I'm running plain old 6.5 on my R10k O2 with > version 7.3.1.1m of the sgi compiler. > ... > I just hopped onto a co-worker's machine that has version 7.3.1.2m of > the compiler, remade everything, and the problem is gone. Oh, of course. Why didn't you say so? Micro-micro version 7.3.1.2m of the SGI compiler fixed a bus error when doing int->float conversion. What? You don't believe me? Harrumph -- you just proved it . 
thanks-for-playing-and-pick-up-a-fabulous-prize-at-the-door-ly y'rs - tim From sjoerd at oratrix.nl Thu Feb 15 09:42:35 2001 From: sjoerd at oratrix.nl (Sjoerd Mullender) Date: Thu, 15 Feb 2001 09:42:35 +0100 Subject: [Python-Dev] troubling math bug under IRIX 6.5 In-Reply-To: Your message of Wed, 14 Feb 2001 09:57:24 -0800. <14986.49383.668942.359843@mace.lucasdigital.com> References: <14985.46047.226447.573927@mace.lucasdigital.com> <14985.58539.114838.36680@mace.lucasdigital.com> <20010214135617.A99853021C2@bireme.oratrix.nl> <14986.49383.668942.359843@mace.lucasdigital.com> Message-ID: <20010215084236.B1D823021C2@bireme.oratrix.nl> I have compiler version 7.2.1.3m om my O2 and 7.2.1 on the server. It does indeed sound like a compiler problem, so maybe it's time to do an upgrade... On Wed, Feb 14 2001 Flying Cougar Burnette wrote: > > 'uname -a' tells me I'm running plain old 6.5 on my R10k O2 with > version 7.3.1.1m of the sgi compiler. Which version of the compiler > do you have? That might be the real culprit here. in fact... > > I just hopped onto a co-worker's machine that has version 7.3.1.2m of > the compiler, remade everything, and the problem is gone. > > I think we can chalk this up to a compiler bug and take no further > action. Thanks for listening... > > > Sjoerd Mullender writes: > | As an extra datapoint: > | > | I just tried this (4 * math.exp(-0.5)) on my SGI O2 and on our SGI > | file server with the current CVS version of Python, compiled with -O. > | I don't get a crash. > | > | I am running IRIX 6.5.10m on the O2 and 6.5.2m on the server. What > | version are you running? > | > | On Tue, Feb 13 2001 Flying Cougar Burnette wrote: > | > | > Tim Peters writes: > | > | > | > | > now neither does. shall I turn opts back on and try a few more > | > | > cases? > | > | > | > | Yes, please, one more: > | > | > | > | 4.0 * 3.1 > | > | > | > | Or, if that works, go back to the failing > | > | > | > | 4.0 * math.exp(-0.5) > | > > | > both of these work, but changing the 4.0 to an integer 4 produces the > | > bus error. so it is definitely a conversion to double/float thats > | > the problem. > | > > | > | > | > | In any failing case, can you jump into a debubber and get a stack trace? > | > > | > sure. I've included an entire dbx session at the end of this mail. > | > > | > | > | > | Do you happen to have > | > | > | > | WANT_SIGFPE_HANDLER > | > | > | > | #define'd when you compile Python on this platform? If so, it complicates > | > | the code a lot. I wonder about that because you got a "bus error", and when > | > | WANT_SIGFPE_HANDLER is #defined we get a whole pile of ugly setjmp/longjmp > | > | code that doesn't show up on my box. > | > > | > a peek at config.h shows the WANT_SIGFPE_HANDLER define commented > | > out. should I turn it on and see what happens? > | > > | > > | > | > | > | Another tack, as a temporary workaround: try disabling optimization only > | > | for Objects/floatobject.c. That will probably fix the problem, and if so > | > | that's enough of a workaround to get you unstuck while pursuing these other > | > | irritations. > | > > | > this one works just fine. workarounds aren't a problem for me right > | > now since I'm in no hurry to get this version in use here. I'm just > | > trying to help debug this version for irix users in general. 
> | > > | > > | > ------------%< snip %<----------------------%< snip %<------------ > | > > | > (tommy at mace)/u0/tommy/pycvs/python/dist/src$ dbx python > | > dbx version 7.3 65959_Jul11 patchSG0003841 Jul 11 2000 02:29:30 > | > Executable /usr/u0/tommy/pycvs/python/dist/src/python > | > (dbx) run > | > Process 563746 (python) started > | > Python 2.1a2 (#6, Feb 13 2001, 17:43:32) [C] on irix6 > | > Type "copyright", "credits" or "license" for more information. > | > >>> 3 * 4.0 > | > 12.0 > | > >>> import math > | > >>> 4 * math.exp(-.5) > | > Process 563746 (python) stopped on signal SIGBUS: Bus error (default) at [float_mul:383 +0x4,0x1004c158] > | > 383 CONVERT_TO_DOUBLE(v, a); > | > (dbx) l > | > >* 383 CONVERT_TO_DOUBLE(v, a); > | > 384 CONVERT_TO_DOUBLE(w, b); > | > 385 PyFPE_START_PROTECT("multiply", return 0) > | > 386 a = a * b; > | > 387 PyFPE_END_PROTECT(a) > | > 388 return PyFloat_FromDouble(a); > | > 389 } > | > 390 > | > 391 static PyObject * > | > 392 float_div(PyObject *v, PyObject *w) > | > (dbx) t > | > > 0 float_mul(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/floatobject.c":383, 0x1004c158] > | > 1 binary_op1(0x100b69fc, 0x10116788, 0x0, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/abstract.c":337, 0x1003ac5c] > | > 2 binary_op(0x100b69fc, 0x10116788, 0x8, 0x0, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/abstract.c":373, 0x1003ae70] > | > 3 PyNumber_Multiply(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Objects/abstract.c":544, 0x1003b5a4] > | > 4 eval_code2(0x1012c688, 0x0, 0xffffffec, 0x0, 0x0, 0x0, 0x0, 0x0) ["/usr/u0/tommy/pycvs/python/dist/src/Python/ceval.c":896, 0x10034a54] > | > 5 PyEval_EvalCode(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/ceval.c":336, 0x10031768] > | > 6 run_node(0x100f88c0, 0x10116788, 0x0, 0x0, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":931, 0x10040444] > | > 7 PyRun_InteractiveOne(0x0, 0x100b1878, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":540, 0x1003f1f0] > | > 8 PyRun_InteractiveLoop(0xfb4a398, 0x100b1878, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":486, 0x1003ef84] > | > 9 PyRun_AnyFileEx(0xfb4a398, 0x100b1878, 0x0, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Python/pythonrun.c":461, 0x1003eeac] > | > 10 Py_Main(0x1, 0x0, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Modules/main.c":292, 0x1000bba4] > | > 11 main(0x100b69fc, 0x10116788, 0x8, 0x100a1318, 0x10050000, 0x10116788, 0x100a1318, 0x100a1290) ["/usr/u0/tommy/pycvs/python/dist/src/Modules/python.c":10, 0x1000b7bc] > | > More (n if no)?y > | > 12 __start() ["/xlv55/kudzu-apr12/work/irix/lib/libc/libc_n32_M4/csu/crt1text.s":177, 0x1000b558] > | > (dbx) > | > > | > _______________________________________________ > | > Python-Dev mailing list > | > Python-Dev at python.org > | > http://mail.python.org/mailman/listinfo/python-dev > | > > | > | -- Sjoerd Mullender > -- Sjoerd Mullender From 
tim.one at home.com Thu Feb 15 10:07:38 2001 From: tim.one at home.com (Tim Peters) Date: Thu, 15 Feb 2001 04:07:38 -0500 Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: Message-ID: [Ka-Ping Yee] > ... > The only exceptions that don't currently conform, as far as i > know, have to do with invalid literals. Pretty much, but nothing's *that* easy. Other examples: + If there are too many nested blocks, it raises SystemError(!). + MemoryError is raised if a dotted name is too long. + OverflowError is raised if a string is too long. Note that those don't have to do with syntax, they're arbitrary implementation limits. So that's the rule: raise SystemError if something is bigger than 20 MemoryError if it's bigger than 1000 OverflowError if it's bigger than an int Couldn't be clearer . + SystemErrors are raised in many other places in the role of internal assertions failing. Those needn't be changed. From andy at reportlab.com Thu Feb 15 11:07:11 2001 From: andy at reportlab.com (Andy Robinson) Date: Thu, 15 Feb 2001 10:07:11 -0000 Subject: [Python-Dev] Documentation Tools (was Unit Testing) In-Reply-To: Message-ID: Moshe Zadka write: > As someone who works primarily in Perl nowadays, and hates > it, I must say > that as horrible and unaesthetic pod is, having > > perldoc package::module > > Just work is worth everything -- [snip] > We had a DevDay, we have a sig, we have a PEP. None of this > seems to help -- > what we need is a BDFL's pronouncement, even if it's on the > worst solution > possibly imaginable. ReportLab have just hired Dinu Gherman to work on this. We have crude running solutions of our own that do both HTML+Bitmap and PDF on any package, and are devoting considerable resources to an automatic documentation tool. In fact, it's part of a deliverable for a customer project this spring. We need both these PEPs or something like them for this to really fly. Dinu will be at IPC9 and happy to discuss this, and we have the resources to do trial implementations for the BDFL to consider. I suggest anyone interested contacts Dinu at the address above. And Dinu, why don't you contact the doc-sig administrator and find out why your membership is blocked :-) - Andy Robinson From mwh21 at cam.ac.uk Thu Feb 15 15:45:18 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 15 Feb 2001 14:45:18 +0000 Subject: [Python-Dev] python-dev summaries? In-Reply-To: "Tim Peters"'s message of "Thu, 15 Feb 2001 03:00:49 -0500" References: Message-ID: "Tim Peters" writes: > [Michael Hudson, graduates from bytecodes to ASCII art] > > ... > > If noone else is planning on doing a sumamry, I'll post a draft for > > comments sometime tomorrow. > > 1. If you solicit comments, it will be 3 months of debate before > you get to post the thing <0.8 wink>. Just Do It. Well, I'm not quite brave enough for that. Here's what I've written; spelling & grammar flames appreciated! You've got a couple of hours before I post it to all the other places... It is with some trepidation that I post: This is a summary of traffic on the python-dev mailing list between Feb 1 and Feb 14 2001. It is intended to inform the wider Python community of ongoing developments. To comment, just post to python-list at python.org or comp.lang.python in the usual way. Give your posting a meaningful subject line, and if it's about a PEP, include the PEP number (e.g. 
Subject: PEP 201 - Lockstep iteration) All python-dev members are interested in seeing ideas discussed by the community, so don't hesitate to take a stance on a PEP if you have an opinion. This is the first python-dev summary written by Michael Hudson. Previous summaries were written by Andrew Kuchling and can be found at: New summaries will probably appear at: When I get round to it. Posting distribution (with apologies to mbm) Number of articles in summary: 498 80 | ]|[ | ]|[ | ]|[ | ]|[ | ]|[ ]|[ 60 | ]|[ ]|[ | ]|[ ]|[ | ]|[ ]|[ | ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ 40 | ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ 20 | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ 0 +-029-067-039-037-080-048-020-009-040-021-008-030-043-027 Thu 01| Sat 03| Mon 05| Wed 07| Fri 09| Sun 11| Tue 13| Fri 02 Sun 04 Tue 06 Thu 08 Sat 10 Mon 12 Wed 14 A fairly busy fortnight on python-dev, falling just short of five hundred articles. Much of this is making ready for the Python 2.1 release, but people's horizons are beginning to rise above the present. * Python 2.1a2 * Python 2.1a2 was released on Feb. 2. One of the more controversial changes was the disallowing of "from module import *" at anything other than module level; this restriction was weakened after some slightly heated discussion on comp.lang.python. It is possible that non-module-level "from module import *" will produce some kind of warning in Python 2.1 but this code has not yet been written. * Performance * Almost two weeks ago, we were talking about performance. Michael Hudson posted the results of an extended benchmarking session using Marc-Andre Lemburg's pybench suite: to which the conclusion was that python 2.1 will be marginally slower than python 2.0, but it's not worth shouting about. The use of Vladimir Marangoz's obmalloc patch in some of the benchmarks sparked a discussion about whether this patch should be incorporated into Python 2.1. There was support from many for adding it on an opt-in basis, since when nothing has happened... * Imports on case-insensitive file systems * There was quite some discussion about how to handle imports on a case-insensitive file system (eg. on Windows). I didn't follow the details, but Tim Peters is on the case (sorry), so I'm confident it will get sorted out. * Sets & iterators * The Sets discussion rumbled on, moving into areas of syntax. The syntax: for key:value in dict: was proposed. Discussion went round and round for a while and moved on to more general iteration constructs, prompting Ka-Ping Yee to write a PEP entitled "iterators": Please comment! Greg Wilson announced that BOFs for both sets and iterators have been arranged at the python9 conference in March: * Stackless Python in Korea * Christian Tismer gave a presentation on stackless python to over 700 Korean pythonistas: I think almost everyone was amazed and delighted to find that Python has such a fan base. Next stop, the world! * string methodizing the standard library * Eric Raymond clearly got bored one evening and marched through the standard library, converting almost all uses of the string module to use to equivalent string method. 
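The flavour of that change, as an illustrative sketch (the snippet itself is made up for this summary; string.strip() and friends are the old module functions, the method calls their Python 2.0 equivalents):

    import string

    line = "  GET /index.html HTTP/1.0  "

    # Old spelling: functions from the string module (the 1.5.2-era idiom).
    words  = string.split(string.strip(line))
    method = string.upper(words[0])
    rest   = string.join(words[1:], " ")

    # New spelling: the equivalent string methods (Python 2.0 and later).
    words  = line.strip().split()
    method = words[0].upper()
    rest   = " ".join(words[1:])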
* Python's release schedule * Skip Montanero raised some concerns about Python's accelerated release schedule, and it was pointed out that the default Python for both debian unstable and Redhat 7.1 beta was still 1.5.2. Have *you* upgraded to Python 2.0? If not, why not? * Unit testing (again) * The question of replacing Python's hoary old regrtest-driven test suite with something more modern came up again. Andrew Kuchling enquired whether the issue was to be decided by voting or BDFL fiat: Guido obliged: There was then some discussion of what changes people would like to see made in the standard-Python-unit-testing-framework-elect (PyUnit) before they would be happy with it. Cheers, M. -- Or here's an even simpler indicator of how much C++ sucks: Print out the C++ Public Review Document. Have someone hold it about three feet above your head and then drop it. Thus you will be enlightened. -- Thant Tessman From akuchlin at cnri.reston.va.us Thu Feb 15 15:52:49 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Thu, 15 Feb 2001 09:52:49 -0500 Subject: [Python-Dev] python-dev summaries? In-Reply-To: ; from mwh21@cam.ac.uk on Thu, Feb 15, 2001 at 02:45:18PM +0000 References: Message-ID: <20010215095248.A5827@thrak.cnri.reston.va.us> On Thu, Feb 15, 2001 at 02:45:18PM +0000, Michael Hudson wrote: > use to equivalent string method. > > * Python's release schedule * I think an extra blank line before the section headings would separate the sections more clearly. > Skip Montanero raised some concerns about Python's accelerated ^^^^^^^^^ Montanaro Beyond those two things, great work! I say post it. (Don't forget to send copies to lwn at lwn.net and editors at linuxtoday.com.) Also, is it OK with you if I begin adding these summaries to the archive at www.amk.ca/python/dev/, suitably credited? --amk From guido at digicool.com Thu Feb 15 15:51:53 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 15 Feb 2001 09:51:53 -0500 Subject: [Python-Dev] SyntaxError for illegal literals In-Reply-To: Your message of "Thu, 15 Feb 2001 04:07:38 EST." References: Message-ID: <200102151451.JAA29642@cj20424-a.reston1.va.home.com> > [Ka-Ping Yee] > > ... > > The only exceptions that don't currently conform, as far as i > > know, have to do with invalid literals. [Tim] > Pretty much, but nothing's *that* easy. > > Other examples: > > + If there are too many nested blocks, it raises SystemError(!). > > + MemoryError is raised if a dotted name is too long. > > + OverflowError is raised if a string is too long. > > Note that those don't have to do with syntax, they're arbitrary > implementation limits. So that's the rule: raise > > SystemError if something is bigger than 20 > MemoryError if it's bigger than 1000 > OverflowError if it's bigger than an int > > Couldn't be clearer . > > + SystemErrors are raised in many other places in the role of internal > assertions failing. Those needn't be changed. Note that MemoryErrors are also raised whenever new objects are created, which happens all the time during the course of compilation (both Jeremy's symbol table code and of course code objects). These needn't be changed either. --Guido van Rossum (home page: http://www.python.org/~guido/) From mwh21 at cam.ac.uk Thu Feb 15 17:20:48 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 15 Feb 2001 16:20:48 +0000 Subject: [Python-Dev] python-dev summaries? 
In-Reply-To: Andrew Kuchling's message of "Thu, 15 Feb 2001 09:52:49 -0500" References: <20010215095248.A5827@thrak.cnri.reston.va.us> Message-ID: Andrew Kuchling writes: > On Thu, Feb 15, 2001 at 02:45:18PM +0000, Michael Hudson wrote: > > use to equivalent string method. > > > > * Python's release schedule * > > I think an extra blank line before the section headings would separate > the sections more clearly. > > > Skip Montanero raised some concerns about Python's accelerated > ^^^^^^^^^ Montanaro > > Beyond those two things, great work! I say post it. (Don't forget to > send copies to lwn at lwn.net and editors at linuxtoday.com.) Thanks! I meant to check Skip's name (duh! sorry!). Changes made. > Also, is it OK with you if I begin adding these summaries to the > archive at www.amk.ca/python/dev/, suitably credited? Yeah, sure. I was going to stick them on my pages, but it probably makes more sense to keep them where people already look for them. Do you want me to send you the html-ized version I've cobbled together? (and got to validate as xhtml 1.0 strict...). Cheers, M. -- 48. The best book on programming for the layman is "Alice in Wonderland"; but that's because it's the best book on anything for the layman. -- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html From mwh21 at cam.ac.uk Thu Feb 15 17:55:35 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: Thu, 15 Feb 2001 16:55:35 +0000 (GMT) Subject: [Python-Dev] python-dev summary, 2001-02-01 - 2001-02-15 Message-ID: It is with some trepidation that I post: This is a summary of traffic on the python-dev mailing list between Feb 1 and Feb 14 2001. It is intended to inform the wider Python community of ongoing developments. To comment, just post to python-list at python.org or comp.lang.python in the usual way. Give your posting a meaningful subject line, and if it's about a PEP, include the PEP number (e.g. Subject: PEP 201 - Lockstep iteration) All python-dev members are interested in seeing ideas discussed by the community, so don't hesitate to take a stance on a PEP if you have an opinion. This is the first python-dev summary written by Michael Hudson. Previous summaries were written by Andrew Kuchling and can be found at: New summaries will probably appear at: When I get round to it. Posting distribution (with apologies to mbm) Number of articles in summary: 498 80 | ]|[ | ]|[ | ]|[ | ]|[ | ]|[ ]|[ 60 | ]|[ ]|[ | ]|[ ]|[ | ]|[ ]|[ | ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ 40 | ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ 20 | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ 0 +-029-067-039-037-080-048-020-009-040-021-008-030-043-027 Thu 01| Sat 03| Mon 05| Wed 07| Fri 09| Sun 11| Tue 13| Fri 02 Sun 04 Tue 06 Thu 08 Sat 10 Mon 12 Wed 14 A fairly busy fortnight on python-dev, falling just short of five hundred articles. Much of this is making ready for the Python 2.1 release, but people's horizons are beginning to rise above the present. * Python 2.1a2 * Python 2.1a2 was released on Feb. 2. One of the more controversial changes was the disallowing of "from module import *" at anything other than module level; this restriction was weakened after some slightly heated discussion on comp.lang.python. 
It is possible that non-module-level "from module import *" will produce some kind of warning in Python 2.1 but this code has not yet been written. * Performance * Almost two weeks ago, we were talking about performance. Michael Hudson posted the results of an extended benchmarking session using Marc-Andre Lemburg's pybench suite: to which the conclusion was that python 2.1 will be marginally slower than python 2.0, but it's not worth shouting about. The use of Vladimir Marangoz's obmalloc patch in some of the benchmarks sparked a discussion about whether this patch should be incorporated into Python 2.1. There was support from many for adding it on an opt-in basis, since when nothing has happened... * Imports on case-insensitive file systems * There was quite some discussion about how to handle imports on a case-insensitive file system (eg. on Windows). I didn't follow the details, but Tim Peters is on the case (sorry), so I'm confident it will get sorted out. * Sets & iterators * The Sets discussion rumbled on, moving into areas of syntax. The syntax: for key:value in dict: was proposed. Discussion went round and round for a while and moved on to more general iteration constructs, prompting Ka-Ping Yee to write a PEP entitled "iterators": Please comment! Greg Wilson announced that BOFs for both sets and iterators have been arranged at the python9 conference in March: * Stackless Python in Korea * Christian Tismer gave a presentation on stackless python to over 700 Korean pythonistas: I think almost everyone was amazed and delighted to find that Python has such a fan base. Next stop, the world! * string methodizing the standard library * Eric Raymond clearly got bored one evening and marched through the standard library, converting almost all uses of the string module to use to equivalent string method. * Python's release schedule * Skip Montanaro raised some concerns about Python's accelerated release schedule, and it was pointed out that the default Python for both debian unstable and Redhat 7.1 beta was still 1.5.2. Have *you* upgraded to Python 2.0? If not, why not? * Unit testing (again) * The question of replacing Python's hoary old regrtest-driven test suite with something more modern came up again. Andrew Kuchling enquired whether the issue was to be decided by voting or BDFL fiat: Guido obliged: There was then some discussion of what changes people would like to see made in the standard-Python-unit-testing-framework-elect (PyUnit) before they would be happy with it. Cheers, M. From moshez at zadka.site.co.il Thu Feb 15 19:15:32 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Thu, 15 Feb 2001 20:15:32 +0200 (IST) Subject: [Python-Dev] Documentation Tools (was Unit Testing) In-Reply-To: References: Message-ID: <20010215181532.C7D2AA840@darjeeling.zadka.site.co.il> On Thu, 15 Feb 2001 10:07:11 -0000, "Andy Robinson" wrote: > We need both these PEPs or something like them for this > to really fly. If Dinu wants to take over the PEP, it's fine by me. If Dinu wants me to keep the PEP, I'll be happy to work with him. > Dinu will be at IPC9 and happy to discuss > this Happy to talk to him, but *please* don't make it into a DevDay/BoF/something formal. We had one at IPC8, which merely served to waste time. Again, I reiterate my opinion: there will never be a consensus in doc-sig. It doesn't matter -- a horrible standard format is better then what we have today. -- "I'll be ex-DPL soon anyway so I'm |LUKE: Is Perl better than Python? 
looking for someplace else to grab power."|YODA: No...no... no. Quicker, -- Wichert Akkerman (on debian-private)| easier, more seductive. For public key, finger moshez at debian.org |http://www.{python,debian,gnu}.org From ping at lfw.org Thu Feb 15 20:36:10 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Thu, 15 Feb 2001 11:36:10 -0800 (PST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <20010214164717.24AA1A840@darjeeling.zadka.site.co.il> Message-ID: On Wed, 14 Feb 2001, Moshe Zadka wrote: > As someone who works primarily in Perl nowadays, and hates it, I must say > that as horrible and unaesthetic pod is, having > > perldoc package::module > > Just work is worth everything -- I've marked everything I wrote that way, > and I can't begin to explain how much it helps. I agree that this is important. > We had a DevDay, we have a sig, we have a PEP. None of this seems to help -- What are you talking about? There is an implementation and it works. I demonstrated the HTML one back at Python 8, and now there is a text-generating one in the CVS tree. -- ?!ng From mal at lemburg.com Thu Feb 15 23:20:45 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Thu, 15 Feb 2001 23:20:45 +0100 Subject: [Python-Dev] Adding pymalloc to 2.1b1 ?! (python-dev summary, 2001-02-01 - 2001-02-15) References: Message-ID: <3A8C563D.D9BB6E3E@lemburg.com> Michael Hudson wrote: > > The use > of Vladimir Marangoz's obmalloc patch in some of the benchmarks > sparked a discussion about whether this patch should be incorporated > into Python 2.1. There was support from many for adding it on an > opt-in basis, since when nothing has happened... ... I'm still waiting on BDFL pronouncement on this one. The plan was to check it in for beta1 on an opt-in basis (Vladimir has written the patch this way). -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From fredrik at effbot.org Thu Feb 15 23:40:03 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Thu, 15 Feb 2001 23:40:03 +0100 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) References: Message-ID: <000801c097a0$41397520$e46940d5@hagrid> can anyone explain why it's a good idea to have totally incomprehensible stuff like __all__ = locals().keys() for _i in range(len(__all__)-1,-1,-1): if __all__[_i][0] == "_": del __all__[_i] del _i in my code? Annoyed /F From skip at mojam.com Fri Feb 16 00:13:09 2001 From: skip at mojam.com (Skip Montanaro) Date: Thu, 15 Feb 2001 17:13:09 -0600 (CST) Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) In-Reply-To: <000801c097a0$41397520$e46940d5@hagrid> References: <000801c097a0$41397520$e46940d5@hagrid> Message-ID: <14988.25221.294028.413733@beluga.mojam.com> Fredrik> can anyone explain why it's a good idea to have totally Fredrik> incomprehensible stuff like Fredrik> __all__ = locals().keys() Fredrik> for _i in range(len(__all__)-1,-1,-1): Fredrik> if __all__[_i][0] == "_": Fredrik> del __all__[_i] Fredrik> del _i Fredrik> in my code? Please don't shoot the messenger... ;-) In modules that looked to me to contain nothing by constants, I used the above technique to simply load all the modules symbols into __all__, then delete any that began with an underscore. 
If there is no reason to have an __all__ list for such modules, feel free to remove the code, just remember to also delete the check_all() call in Lib/test/test___all__.py. Skip From guido at digicool.com Fri Feb 16 00:28:03 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 15 Feb 2001 18:28:03 -0500 Subject: [Python-Dev] Adding pymalloc to 2.1b1 ?! (python-dev summary, 2001-02-01 - 2001-02-15) In-Reply-To: Your message of "Thu, 15 Feb 2001 23:20:45 +0100." <3A8C563D.D9BB6E3E@lemburg.com> References: <3A8C563D.D9BB6E3E@lemburg.com> Message-ID: <200102152328.SAA32032@cj20424-a.reston1.va.home.com> > Michael Hudson wrote: > > > > The use > > of Vladimir Marangoz's obmalloc patch in some of the benchmarks > > sparked a discussion about whether this patch should be incorporated > > into Python 2.1. There was support from many for adding it on an > > opt-in basis, since when nothing has happened... > > ... I'm still waiting on BDFL pronouncement on this one. The plan > was to check it in for beta1 on an opt-in basis (Vladimir has written > the patch this way). > > -- > Marc-Andre Lemburg If it is truly opt-in (supposedly a configure option?), I'm all for it. I recall vaguely though that Jeremy or Tim thought that the patch touches lots of code even when one doesn't opt in. That was a no-no so close before the a2 release. Anybody who actually looked at the code got an opinion on that now? The b1 release is planned for March 1st, or exactly two weeks! --Guido van Rossum (home page: http://www.python.org/~guido/) From jeremy at alum.mit.edu Fri Feb 16 00:34:31 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Thu, 15 Feb 2001 18:34:31 -0500 (EST) Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) In-Reply-To: <14988.25221.294028.413733@beluga.mojam.com> References: <000801c097a0$41397520$e46940d5@hagrid> <14988.25221.294028.413733@beluga.mojam.com> Message-ID: <14988.26503.13571.878316@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "SM" == Skip Montanaro writes: Fredrik> can anyone explain why it's a good idea to have totally Fredrik> incomprehensible stuff like Fredrik> __all__ = locals().keys() for _i in Fredrik> range(len(__all__)-1,-1,-1): if __all__[_i][0] == "_": del Fredrik> __all__[_i] del _i Fredrik> in my code? SM> Please don't shoot the messenger... ;-) SM> In modules that looked to me to contain nothing by constants, I SM> used the above technique to simply load all the modules symbols SM> into __all__, then delete any that began with an underscore. If SM> there is no reason to have an __all__ list for such modules, SM> feel free to remove the code, just remember to also delete the SM> check_all() call in Lib/test/test___all__.py. If __all__ is needed (still not sure what it's for :-), wouldn't the following one-liner be clearer: __all__ = [name for name in locals.keys() if not name.startswith('_')] Jeremy From guido at digicool.com Fri Feb 16 00:38:04 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 15 Feb 2001 18:38:04 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) In-Reply-To: Your message of "Thu, 15 Feb 2001 23:40:03 +0100." 
<000801c097a0$41397520$e46940d5@hagrid> References: <000801c097a0$41397520$e46940d5@hagrid> Message-ID: <200102152338.SAA32099@cj20424-a.reston1.va.home.com> > can anyone explain why it's a good idea to have totally > incomprehensible stuff like > > __all__ = locals().keys() > for _i in range(len(__all__)-1,-1,-1): > if __all__[_i][0] == "_": > del __all__[_i] > del _i > > in my code? Ask Skip. :-) This doesn't exclude anything that would be included in import* by default, so I'm not sure I see the point either. As for clarity, it would've been nice if there was a comment. If it is decided that it's a good idea to have __all__ even when it doesn't add any new information (I'm not so sure), here's a cleaner way to spell it, which also gets the names in alphabetical order: # Set __all__ to the list of global names not starting with underscore: __all__ = filter(lambda s: s[0]!='_', dir()) --Guido van Rossum (home page: http://www.python.org/~guido/) From mwh21 at cam.ac.uk Fri Feb 16 00:40:49 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 15 Feb 2001 23:40:49 +0000 Subject: [Python-Dev] Adding pymalloc to 2.1b1 ?! (python-dev summary, 2001-02-01 - 2001-02-15) In-Reply-To: Guido van Rossum's message of "Thu, 15 Feb 2001 18:28:03 -0500" References: <3A8C563D.D9BB6E3E@lemburg.com> <200102152328.SAA32032@cj20424-a.reston1.va.home.com> Message-ID: Guido van Rossum writes: > > Michael Hudson wrote: > > > > > > The use > > > of Vladimir Marangoz's obmalloc patch in some of the benchmarks > > > sparked a discussion about whether this patch should be incorporated > > > into Python 2.1. There was support from many for adding it on an > > > opt-in basis, since when nothing has happened... > > > > ... I'm still waiting on BDFL pronouncement on this one. The plan > > was to check it in for beta1 on an opt-in basis (Vladimir has written > > the patch this way). > > > > -- > > Marc-Andre Lemburg > > If it is truly opt-in (supposedly a configure option?), I'm all for > it. It is very much opt-in. > I recall vaguely though that Jeremy or Tim thought that the patch > touches lots of code even when one doesn't opt in. That was a no-no > so close before the a2 release. Anybody who actually looked at the > code got an opinion on that now? I suggest looking at the patch. Not at the code, but what it does as a diff: 1) Add a file Objects/obmalloc.c 2) Add stuff to configure.in & config.h to detect the --with-pymalloc argument to ./configure 3) Conditionally #include "obmalloc.h" in Objects/object.c if WITH_PYMALLOC is #defined 4) Conditionally #define the variables in Include/objimpl.h to #define the #defines needed to override the memory imiplementation if WITH_PYMALLOC is #defined And *that's it*. That's not my definition of "touches a lot of code". Cheers, M. -- Or here's an even simpler indicator of how much C++ sucks: Print out the C++ Public Review Document. Have someone hold it about three feet above your head and then drop it. Thus you will be enlightened. -- Thant Tessman From greg at cosc.canterbury.ac.nz Fri Feb 16 00:41:53 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Fri, 16 Feb 2001 12:41:53 +1300 (NZDT) Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) In-Reply-To: <000801c097a0$41397520$e46940d5@hagrid> Message-ID: <200102152341.MAA06568@s454.cosc.canterbury.ac.nz> Fredrik Lundh : > for _i in range(len(__all__)-1,-1,-1): On a slightly wider topic, it might be nice to have a clearer way of iterating backwards over a range. 
How about a function such as revrange(n1, n2) which would produce the same sequence of numbers as range(n1, n2) but in the opposite order. (Plus corresponding xrevrange() of course.) Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg at cosc.canterbury.ac.nz +--------------------------------------+ From guido at digicool.com Fri Feb 16 00:45:54 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 15 Feb 2001 18:45:54 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) In-Reply-To: Your message of "Thu, 15 Feb 2001 17:13:09 CST." <14988.25221.294028.413733@beluga.mojam.com> References: <000801c097a0$41397520$e46940d5@hagrid> <14988.25221.294028.413733@beluga.mojam.com> Message-ID: <200102152345.SAA32204@cj20424-a.reston1.va.home.com> > Fredrik> can anyone explain why it's a good idea to have totally > Fredrik> incomprehensible stuff like > > Fredrik> __all__ = locals().keys() > Fredrik> for _i in range(len(__all__)-1,-1,-1): > Fredrik> if __all__[_i][0] == "_": > Fredrik> del __all__[_i] > Fredrik> del _i > > Fredrik> in my code? > > Please don't shoot the messenger... ;-) I'm not sure you qualify as the messenger, Skip. You seem to be taking this __all__ thing way beyond where I thought it needed to go. > In modules that looked to me to contain nothing by constants, I used the > above technique to simply load all the modules symbols into __all__, then > delete any that began with an underscore. If there is no reason to have an > __all__ list for such modules, feel free to remove the code, just remember > to also delete the check_all() call in Lib/test/test___all__.py. Rhetorical question: why do we have __all__? In my mind we have it so that "from M import *" doesn't import spurious stuff that happens to be a global in M but isn't really intended for export from M. Typical example: Tkinter is commonly used in "from Tkinter import *" mode, but accidentally exports a few standard modules like sys. Adding __all__ just for the sake of having __all__ defined doesn't seem to me a good use of anybody's time; since "from M import *" already skips names starting with '_', there's no reason to have __all__ defined in modules where it is computed to be exactly the globals that don't start with '_'... Also, it's not immediately clear what test___all__.py tests. It seems that it just checks that the __all__ attribute exists and then that "from M import *" imports exactly the names in __all__. Since that's how it's implemented, what does this really test? I guess it tests that the import mechanism doesn't screw up. It could screw up if it was replaced by a custom import hack that hasn't been taught to look for __all__ yet, for example, and it's useful if this is caught. But why do we need to import every module under the sun that happens to define __all__ to check that? --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Fri Feb 16 00:48:01 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 15 Feb 2001 18:48:01 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) In-Reply-To: Your message of "Thu, 15 Feb 2001 18:34:31 EST." 
<14988.26503.13571.878316@w221.z064000254.bwi-md.dsl.cnc.net> References: <000801c097a0$41397520$e46940d5@hagrid> <14988.25221.294028.413733@beluga.mojam.com> <14988.26503.13571.878316@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <200102152348.SAA32223@cj20424-a.reston1.va.home.com> > If __all__ is needed (still not sure what it's for :-), wouldn't the > following one-liner be clearer: > > __all__ = [name for name in locals.keys() if not name.startswith('_')] But that shouldn't be used in /F's modules, because he wants them to be 1.5 compatible. Anyway, filter(lambda s: s[0]!='_', dir()) is shorter, and you prove that it isn't faster. :-) --Guido van Rossum (home page: http://www.python.org/~guido/) From jeremy at alum.mit.edu Fri Feb 16 00:53:46 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Thu, 15 Feb 2001 18:53:46 -0500 (EST) Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) In-Reply-To: <200102152348.SAA32223@cj20424-a.reston1.va.home.com> References: <000801c097a0$41397520$e46940d5@hagrid> <14988.25221.294028.413733@beluga.mojam.com> <14988.26503.13571.878316@w221.z064000254.bwi-md.dsl.cnc.net> <200102152348.SAA32223@cj20424-a.reston1.va.home.com> Message-ID: <14988.27658.989073.771498@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "GvR" == Guido van Rossum writes: >> If __all__ is needed (still not sure what it's for :-), wouldn't >> the following one-liner be clearer: >> >> __all__ = [name for name in locals.keys() if not >> name.startswith('_')] GvR> But that shouldn't be used in /F's modules, because he wants GvR> them to be 1.5 compatible. Anyway, filter(lambda s: s[0]!='_', GvR> dir()) is shorter, and you prove that it isn't faster. :-) Well, if he wants it to work with 1.5.2, that's one thing. But the list comprehensions is clear are short done your way: __all__ = [s for s in dir() if s[0] != '_'] Jeremy From guido at digicool.com Fri Feb 16 00:54:12 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 15 Feb 2001 18:54:12 -0500 Subject: [Python-Dev] Adding pymalloc to 2.1b1 ?! (python-dev summary, 2001-02-01 - 2001-02-15) In-Reply-To: Your message of "15 Feb 2001 23:40:49 GMT." References: <3A8C563D.D9BB6E3E@lemburg.com> <200102152328.SAA32032@cj20424-a.reston1.va.home.com> Message-ID: <200102152354.SAA32281@cj20424-a.reston1.va.home.com> > > If it is truly opt-in (supposedly a configure option?), I'm all for > > it. > > It is very much opt-in. > > > I recall vaguely though that Jeremy or Tim thought that the patch > > touches lots of code even when one doesn't opt in. That was a no-no > > so close before the a2 release. Anybody who actually looked at the > > code got an opinion on that now? > > I suggest looking at the patch. Not at the code, but what it does as > a diff: > > 1) Add a file Objects/obmalloc.c > 2) Add stuff to configure.in & config.h to detect the --with-pymalloc > argument to ./configure > 3) Conditionally #include "obmalloc.h" in Objects/object.c if > WITH_PYMALLOC is #defined > 4) Conditionally #define the variables in Include/objimpl.h to #define > the #defines needed to override the memory imiplementation if > WITH_PYMALLOC is #defined > > And *that's it*. That's not my definition of "touches a lot of code". OK, I just looked, and I agree. BTW, for those who want to look, the URL is: http://sourceforge.net/patch/?func=detailpatch&patch_id=101104&group_id=5470 This is currently assigned to Barry. Barry, can you see if this is truly fit for inclusion? Or am I missing something? 
Note that there's a companion patch that adds a memory profiler: http://sourceforge.net/patch/?func=detailpatch&patch_id=101229&group_id=5470 Should this also be applied? Is there a reason why it shouldn't? --Guido van Rossum (home page: http://www.python.org/~guido/) From tim_one at email.msn.com Fri Feb 16 01:04:32 2001 From: tim_one at email.msn.com (Tim Peters) Date: Thu, 15 Feb 2001 19:04:32 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) In-Reply-To: <000801c097a0$41397520$e46940d5@hagrid> Message-ID: [/F] > can anyone explain why it's a good idea to have totally > incomprehensible stuff like > > __all__ = locals().keys() > for _i in range(len(__all__)-1,-1,-1): > if __all__[_i][0] == "_": > del __all__[_i] > del _i > > in my code? I'm unclear on why __all__ was introduced, but if it's gonna be there I'd suggest: __all__ = [k for k in dir() if k[0] not in "_["] del k If anyone was exporting the name "k", they should be shot anyway . Oh, ya, "[" has to be excluded because the listcomp itself temporarily creates an artificial name beginning with "[". >>> [k for k in dir()] ['[1]', '__builtins__', '__doc__', '__name__'] ^^^^^ >>> dir() # but now it's gone ['__builtins__', '__doc__', '__name__', 'k'] >>> From guido at digicool.com Fri Feb 16 01:12:33 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 15 Feb 2001 19:12:33 -0500 Subject: [Python-Dev] Re: whitespace normalization In-Reply-To: Your message of "Thu, 15 Feb 2001 15:56:41 PST." References: Message-ID: <200102160012.TAA32395@cj20424-a.reston1.va.home.com> Tim, I've seen a couple of checkins lately from you like this: > Modified Files: > random.py robotparser.py > Log Message: > Whitespace normalization. Apparently you watch checkins to the std library and run reindent on changed modules occasionally. Would it make sense to check in a test case into the test suite that verifies that all std modules are reindent fixpoints, so that whoever changes a module gets a chance to catch this before they check in? --Guido van Rossum (home page: http://www.python.org/~guido/) From tim_one at email.msn.com Fri Feb 16 01:25:26 2001 From: tim_one at email.msn.com (Tim Peters) Date: Thu, 15 Feb 2001 19:25:26 -0500 Subject: [Python-Dev] Adding pymalloc to 2.1b1 ?! (python-dev summary, 2001-02-01 - 2001-02-15) In-Reply-To: <200102152328.SAA32032@cj20424-a.reston1.va.home.com> Message-ID: [Tim] > If it is truly opt-in (supposedly a configure option?), I'm all for > it. I recall vaguely though that Jeremy or Tim thought that the patch > touches lots of code even when one doesn't opt in. Nope, not us. The patch is utterly harmless if not enabled, but dangerous if enabled (because it doesn't implement any critical sections -- see gobs of pre-release email about that). From tim_one at email.msn.com Fri Feb 16 01:38:00 2001 From: tim_one at email.msn.com (Tim Peters) Date: Thu, 15 Feb 2001 19:38:00 -0500 Subject: [Python-Dev] Re: whitespace normalization In-Reply-To: <200102160012.TAA32395@cj20424-a.reston1.va.home.com> Message-ID: Your @Home email is working?! I'm back on MSN. @Home is up, but times out on almost everything for me. > I've seen a couple of checkins lately from you like this: > > > Modified Files: > > random.py robotparser.py > > Log Message: > > Whitespace normalization. > > Apparently you watch checkins to the std library and run reindent on > changed modules occasionally. 
I run reindent on *all* std Library modules once or twice a week: if a file is a reindent fixed-point, reindent leaves it entirely alone, so no spurious checkins are generated. That is, reindent saves "before" and "after" versions of the entire module in memory, and doesn't even write a new file if before == after. > Would it make sense to check in a test case into the test suite that > verifies that all std modules are reindent fixpoints, so that whoever > changes a module gets a chance to catch this before they check in? Don't think it's worth the bother: running reindent over everything in Lib/ takes well over 10 seconds on my 866MHz box, so it would end up getting skipped by people anway. More suitable for an infrequent cron job, yes? From tim_one at email.msn.com Fri Feb 16 01:44:53 2001 From: tim_one at email.msn.com (Tim Peters) Date: Thu, 15 Feb 2001 19:44:53 -0500 Subject: [Python-Dev] Re: whitespace normalization In-Reply-To: <200102160012.TAA32395@cj20424-a.reston1.va.home.com> Message-ID: > I've seen a couple of checkins lately from you like this: > > > Modified Files: > > random.py robotparser.py > > Log Message: > > Whitespace normalization. > > Apparently you watch checkins to the std library and run reindent on > changed modules occasionally. I run reindent on *all* std Library modules once or twice a week: if a file is a reindent fixed-point, reindent leaves it entirely alone, so no spurious checkins are generated. That is, reindent saves "before" and "after" versions of the entire module in memory, and doesn't even write a new file if before == after. > Would it make sense to check in a test case into the test suite that > verifies that all std modules are reindent fixpoints, so that whoever > changes a module gets a chance to catch this before they check in? Don't think it's worth the bother: running reindent over everything in Lib/ takes well over 10 seconds on my 866MHz box, so it would end up getting skipped by people anway. More suitable for an infrequent cron job, yes? BTW, there are still many Python files in the std distribution that haven't been run thru reindent yet. For example, I'm uncomfortable doing anything in Lib/plat-irix6, etc: don't have the platform, and no test suite anyway. Put out a call for others to clean up directories they care about, but nobody bit. From skip at mojam.com Fri Feb 16 02:05:49 2001 From: skip at mojam.com (Skip Montanaro) Date: Thu, 15 Feb 2001 19:05:49 -0600 (CST) Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) In-Reply-To: <200102152345.SAA32204@cj20424-a.reston1.va.home.com> References: <000801c097a0$41397520$e46940d5@hagrid> <14988.25221.294028.413733@beluga.mojam.com> <200102152345.SAA32204@cj20424-a.reston1.va.home.com> Message-ID: <14988.31981.365476.245762@beluga.mojam.com> Guido> Adding __all__ just for the sake of having __all__ defined Guido> doesn't seem to me a good use of anybody's time; since "from M Guido> import *" already skips names starting with '_', there's no Guido> reason to have __all__ defined in modules where it is computed to Guido> be exactly the globals that don't start with '_'... Sounds fine by me. I'll remove it from any modules like sre_constants that don't import anything else. Guido> Also, it's not immediately clear what test___all__.py tests. hmmm... There was a reason. If I think about it long enough I may actually remember what it was. I definitely needed it for the first few modules to make sure I was doing things right. 
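What such a check amounts to, as a rough sketch -- this is not the actual Lib/test/test___all__.py code, just the behaviour described above, namely that "from M import *" should bind exactly the names in M.__all__ (it assumes a top-level module that defines __all__):

    def check_all(modname):
        # Illustrative sketch only, not the real test code.
        names = {}
        exec ("from %s import *" % modname) in names
        if names.has_key("__builtins__"):
            del names["__builtins__"]     # exec adds this; it isn't exported
        got = names.keys()
        got.sort()
        expected = list(__import__(modname).__all__)
        expected.sort()
        assert got == expected, (modname, got, expected)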
I eventually got into this mechanical mode of adding __all__ lists, then adding a check_all call to the test___all__ module. In cases where I didn't construct __all__ correctly (say, somehow wound up with two copies of "xyz" in the list) it caught that. Okay, so I'm back to the drawing board on this. The rationale for defining __all__ is to prevent namespace pollution when someone executes an import *. I guess definition of __all__ should be restricted to modules that import other modules and don't explictly take other pains to clean up their namespace. I suspect test___all__.py could/should be removed as well. Skip From skip at mojam.com Fri Feb 16 02:10:37 2001 From: skip at mojam.com (Skip Montanaro) Date: Thu, 15 Feb 2001 19:10:37 -0600 (CST) Subject: [Python-Dev] Re: whitespace normalization In-Reply-To: References: <200102160012.TAA32395@cj20424-a.reston1.va.home.com> Message-ID: <14988.32269.199812.169538@beluga.mojam.com> Tim> Don't think it's worth the bother: running reindent over everything Tim> in Lib/ takes well over 10 seconds on my 866MHz box, so it would Tim> end up getting skipped by people anway. More suitable for an Tim> infrequent cron job, yes? On Unix at least, you could simply eliminate it from the quicktest target to speed up most test runs. Dunno how you'd avoid executing it on other platforms. S From barry at digicool.com Fri Feb 16 05:12:04 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Thu, 15 Feb 2001 23:12:04 -0500 Subject: [Python-Dev] Adding pymalloc to 2.1b1 ?! (python-dev summary, 2001-02-01 - 2001-02-15) References: <3A8C563D.D9BB6E3E@lemburg.com> <200102152328.SAA32032@cj20424-a.reston1.va.home.com> <200102152354.SAA32281@cj20424-a.reston1.va.home.com> Message-ID: <14988.43156.191949.342241@anthem.wooz.org> >>>>> "GvR" == Guido van Rossum writes: GvR> This is currently assigned to Barry. Barry, can you see if GvR> this is truly fit for inclusion? Or am I missing something? I think I was wary of applying it without the chance to run it through Insure when it was enabled. I can put that on my list of things to do for beta1. -Barry From tim.one at home.com Fri Feb 16 06:59:42 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 16 Feb 2001 00:59:42 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: Message-ID: [Moshe Zadka] > We had a DevDay, we have a sig, we have a PEP. None of this > seems to help -- [Ka-Ping Yee] > What are you talking about? There is an implementation and it works. There are many implementations "that work". But we haven't picked one. What's the standard markup for Python docstrings? There isn't! That's what he's talking about. This is especially bizarre because it's been clear for *years* that some variant of structured text would win in the end, but nobody playing the game likes all the details of anyone else's set of (IMO, all overly elaborate) conventions, so the situation for users is no better now than it was the day docstrings were added. Tibs's latest (and ongoing) attempt to reach a consensus can be found here: http://www.tibsnjoan.demon.co.uk/docutils/STpy.html The status of its implementation here: http://www.tibsnjoan.demon.co.uk/docutils/status.html Not close yet. In the meantime, Perlers have been "suffering" with a POD spec about 3% the size of the proposed Python spec; I guess their only consolation is that POD docs have been in universal use for years . 
while-ours-is-that-we'll-get-to-specify-non-breaking-spaces-someday- despite-that-not-1-doc-in-100-needs-them-ly y'rs - tim From tim.one at home.com Fri Feb 16 07:34:38 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 16 Feb 2001 01:34:38 -0500 Subject: [Python-Dev] Documentation Tools (was Unit Testing) In-Reply-To: Message-ID: [Andy Robinson] > ... > And Dinu, why don't you contact the doc-sig > administrator and find out why your membership is > blocked :-) That's Fred Drake, who I've copied on this. Dinu and Fred should talk directly if there's a problem. Membership in the doc-sig is open, and Fred couldn't block it even if he wanted to. http://mail.python.org/mailman/listinfo/doc-sig/ if-that-doesn't-work-there's-a-barry-bug-ly y'rs - tim PS: according to http://mail.python.org/mailman/roster/doc-sig Dinu is already a member. From ping at lfw.org Fri Feb 16 07:30:59 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Thu, 15 Feb 2001 22:30:59 -0800 (PST) Subject: [Python-Dev] Documentation Tools In-Reply-To: Message-ID: On Fri, 16 Feb 2001, Tim Peters wrote: > [Moshe Zadka] > > We had a DevDay, we have a sig, we have a PEP. None of this > > seems to help -- > > [Ka-Ping Yee] > > What are you talking about? There is an implementation and it works. > > There are many implementations "that work". But we haven't picked one. > What's the standard markup for Python docstrings? There isn't! That's what > he's talking about. That's exactly the point i'm trying to make. There isn't any markup format enforced by pydoc, precisely because it isn't worth the strife. Moshe seemed to imply that the set of deployable documentation tools was empty, and i take issue with that. His post also had an tone of hopelessness about the topic that i wanted to counter immediately. The fact that pydoc doesn't have a way to italicize doesn't make it a non-solution -- it's a perfectly acceptable solution! Fancy formatting features can come later. > This is especially bizarre because it's been clear for *years* that some > variant of structured text would win in the end, but nobody playing the game > likes all the details of anyone else's set of (IMO, all overly elaborate) > conventions, so the situation for users is no better now than it was the day > docstrings were added. > > Tibs's latest (and ongoing) attempt to reach a consensus can be found here: > > http://www.tibsnjoan.demon.co.uk/docutils/STpy.html > > The status of its implementation here: > > http://www.tibsnjoan.demon.co.uk/docutils/status.html > > Not close yet. The design and implementation of a standard structured text syntax is emphatically *not* a prerequisite for a useful documentation tool. I agree that it may be nice, and i certainly applaud Tony's efforts, but we should not wait for it. -- ?!ng From barry at digicool.com Fri Feb 16 07:40:34 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 16 Feb 2001 01:40:34 -0500 Subject: [Python-Dev] Documentation Tools (was Unit Testing) References: Message-ID: <14988.52067.135016.782124@anthem.wooz.org> >>>>> "TP" == Tim Peters writes: TP> if-that-doesn't-work-there's-a-barry-bug-ly y'rs - tim so-you-should-bug-barry-ly y'rs, -Barry From tim.one at home.com Fri Feb 16 09:05:10 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 16 Feb 2001 03:05:10 -0500 Subject: [Python-Dev] Windows/Cygwin/MacOSX import (was RE: python-dev summary, 2001-02-01 - 2001-02-15) In-Reply-To: Message-ID: [Michael Hudson] > ... 
> * Imports on case-insensitive file systems * > > There was quite some discussion about how to handle imports on a > case-insensitive file system (eg. on Windows). I didn't follow the > details, but Tim Peters is on the case (sorry), so I'm confident it > will get sorted out. You can be sure the whitespace will be consistent, anyway . OK, this one sucks. It should really have gotten a PEP, but it cropped up too late in the release cycle and it can't be delayed (see below). Here's the scoop: file systems vary across platforms in whether or not they preserve the case of filenames, and in whether or not the platform C library file-opening functions do or don't insist on case-sensitive matches: case-preserving case-destroying +-------------------+------------------+ case-sensitive | most Unix flavors | brrrrrrrrrr | +-------------------+------------------+ case-insensitive | Windows | some unfortunate | | MacOSX HFS+ | network schemes | | Cygwin | | +-------------------+------------------+ In the upper left box, if you create "fiLe" it's stored as "fiLe", and only open("fiLe") will open it (open("file") will not, nor will the 14 other variations on that theme). In the lower right box, if you create "fiLe", there's no telling what it's stored as-- but most likely as "FILE" --and any of the 16 obvious variations on open("FilE") will open it. The lower left box is a mix: creating "fiLe" stores "fiLe" in the platform directory, but you don't have to match case when opening it; any of the 16 obvious variations on open("FILe") work. NONE OF THAT IS CHANGING! Python will continue to follow platform conventions wrt whether case is preserved when creating a file, and wrt whether open() requires a case-sensitive match. In practice, you should always code as if matches were case-sensitive, else your program won't be portable. But then you should also always open binary files with the "b" flag, and you don't do that either . What's proposed is to change the semantics of Python "import" statements, and there *only* in the lower left box. Support for MaxOSX HFS+, and for Cygwin, is new in 2.1, so nothing is changing there. What's changing is Windows behavior. Here are the current rules for import on Windows: 1. Despite that the filesystem is case-insensitive, Python insists on a case-sensitive match. But not in the way the upper left box works: if you have two files, FiLe.py and file.py on sys.path, and do import file then if Python finds FiLe.py first, it raises a NameError. It does *not* go on to find file.py; indeed, it's impossible to import any but the first case-insensitive match on sys.path, and then only if case matches exactly in the first case-insensitive match. 2. An ugly exception: if the first case-insensitive match on sys.path is for a file whose name is entirely in upper case (FILE.PY or FILE.PYC or FILE.PYO), then the import silently grabs that, no matter what mixture of case was used in the import statement. This is apparently to cater to miserable old filesystems that really fit in the lower right box. But this exception is unique to Windows, for reasons that may or may not exist . 3. And another exception: if the envar PYTHONCASEOK exists, Python silently grabs the first case-insensitive match of any kind. So these Windows rules are pretty complicated, and neither match the Unix rules nor provide semantics natural for the native filesystem. That makes them hard to explain to Unix *or* Windows users. 
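A rough Python-level sketch of those three rules, purely for illustration -- the real logic is C code in the import machinery, and the helper below (name, signature and all) is invented for this message:

    import os

    def find_module_windows(name, directory):
        # Illustrative sketch of current rules 1-3; not the real implementation.
        for fname in os.listdir(directory):
            base, ext = os.path.splitext(fname)
            if ext.lower() not in (".py", ".pyc", ".pyo"):
                continue
            if base.lower() != name.lower():
                continue
            # 'fname' is the first case-insensitive match in this directory.
            if base == name:                    # rule 1: exact-case match
                return os.path.join(directory, fname)
            if os.environ.get("PYTHONCASEOK"):  # rule 3: any case accepted
                return os.path.join(directory, fname)
            if fname == fname.upper():          # rule 2: ALLCAPS exception
                return os.path.join(directory, fname)
            raise NameError("case mismatch: asked for %s, found %s"
                            % (name, fname))
        return None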
Nevertheless, they've worked fine for years, and in isolation there's no compelling reason to change them. However, that was before the MacOSX HFS+ and Cygwin ports arrived. They also have case-preserving case-insensitive filesystems, but the people doing the ports despised the Windows rules. Indeed, a patch to make HFS+ act like Unix for imports got past a reviewer and into the code base, which incidentally made Cygwin also act like Unix (but this met the unbounded approval of the Cygwin folks, so they sure didn't complain -- they had patches of their own pending to do this, but the reviewer for those balked). At a higher level, we want to keep Python consistent, and I in particular want Python to do the same thing on *all* platforms with case-preserving case-insensitive filesystems. Guido too, but he's so sick of this argument don't ask him to confirm that <0.9 wink>. The proposed new semantics for the lower left box: A. If the PYTHONCASEOK envar exists, same as before: silently accept the first case-insensitive match of any kind; raise ImportError if none found. B. Else search sys.path for the first case-sensitive match; raise ImportError if none found. #B is the same rule as is used on Unix, so this will improve cross-platform portability. That's good. #B is also the rule the Mac and Cygwin folks want (and wanted enough to implement themselves, multiple times, which is a powerful argument in PythonLand). It can't cause any existing non-exceptional Windows import to fail, because any existing non-exceptional Windows import finds a case-sensitive match first in the path -- and it still will. An exceptional Windows import currently blows up with a NameError or ImportError, in which latter case it still will, or in which former case will continue searching, and either succeed or blow up with an ImportError. #A is needed to cater to case-destroying filesystems mounted on Windows, and *may* also be used by people so enamored of "natural" Windows behavior that they're willing to set an envar to get it. That's their problem . I don't intend to implement #A for Unix too, but that's just because I'm not clear on how I *could* do so efficiently (I'm not going to slow imports under Unix just for theoretical purity). The potential damage is here: #2 (matching on ALLCAPS.PY) is proposed to be dropped. Case-destroying filesystems are a vanishing breed, and support for them is ugly. We're already supporting (and will continue to support) PYTHONCASEOK for their benefit, but they don't deserve multiple hacks in 2001. Flame at will. or-flame-at-tim-your-choice-ly y'rs - tim From martin at loewis.home.cs.tu-berlin.de Fri Feb 16 09:07:55 2001 From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis) Date: Fri, 16 Feb 2001 09:07:55 +0100 Subject: [Python-Dev] threads and gethostbyname Message-ID: <200102160807.f1G87tG01454@mira.informatik.hu-berlin.de> > Under 1.5.2, this worked without a hitch. However, under 2.0, the > same program tends to lock up on some computers. I'm not 100% sure > (it's a bit hard to debug), but it looks like a global lock > problem... > Any ideas? Is this supposed to work at all? Can you post a short snippet demonstrating how exactly you initiate the DNS lookup, and how exactly you get the result back? I think it ought to work, and I'm not aware of a change that could cause it to break in 2.0. So far, in all cases where people reported "Tkinter and threading deadlocks", it turned out that the deadlock was in the application. 
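One arrangement that keeps all Tkinter calls out of the worker thread -- a sketch only, in Python 2.0-era spelling, not the original poster's code; the host name and polling interval are made up:

    import socket
    import threading
    import Queue
    import Tkinter

    results = Queue.Queue()

    def lookup(hostname):
        # Worker thread: only the queue is shared with the GUI thread.
        try:
            results.put((hostname, socket.gethostbyname(hostname)))
        except socket.error, why:
            results.put((hostname, why))

    def poll():
        # GUI thread: drain the queue without blocking, then reschedule.
        try:
            hostname, result = results.get_nowait()
            print hostname, "->", result
        except Queue.Empty:
            pass
        root.after(100, poll)

    root = Tkinter.Tk()
    threading.Thread(target=lookup, args=("www.python.org",)).start()
    poll()
    root.mainloop()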
Regards, Martin From tim.one at home.com Fri Feb 16 09:16:12 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 16 Feb 2001 03:16:12 -0500 Subject: [Python-Dev] Documentation Tools In-Reply-To: Message-ID: [Ka-Ping Yee] > That's exactly the point i'm trying to make. There isn't any markup > format enforced by pydoc, precisely because it isn't worth the strife. > Moshe seemed to imply that the set of deployable documentation tools > was empty, and i take issue with that. His post also had a tone of > hopelessness about the topic that i wanted to counter immediately. Most programmers are followers in this matter, and I agree with Moshe on this point: until something is Officially Blessed, Python programmers will stay away from every gimmick in unbounded droves. I personally don't care whether markup is ever defined, because I already gave up on it. But I-- like you --won't wait forever for anything. We're not the norm. the-important-audience-isn't-pythondev-it's-pythonlist-ly y'rs - tim From mal at lemburg.com Fri Feb 16 09:56:15 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 16 Feb 2001 09:56:15 +0100 Subject: [Python-Dev] Adding pymalloc to 2.1b1 ?! (python-dev summary, 2001-02-01 - 2001-02-15) References: <3A8C563D.D9BB6E3E@lemburg.com> <200102152328.SAA32032@cj20424-a.reston1.va.home.com> <200102152354.SAA32281@cj20424-a.reston1.va.home.com> Message-ID: <3A8CEB2F.2C4B35A4@lemburg.com> Guido van Rossum wrote: > > > > If it is truly opt-in (supposedly a configure option?), I'm all for > > > it. > > > > It is very much opt-in. > > > > > I recall vaguely though that Jeremy or Tim thought that the patch > > > touches lots of code even when one doesn't opt in. That was a no-no > > > so close before the a2 release. Anybody who actually looked at the > > > code got an opinion on that now? > > > > I suggest looking at the patch. Not at the code, but what it does as > > a diff: > > > > 1) Add a file Objects/obmalloc.c > > 2) Add stuff to configure.in & config.h to detect the --with-pymalloc > > argument to ./configure > > 3) Conditionally #include "obmalloc.h" in Objects/object.c if > > WITH_PYMALLOC is #defined > > 4) Conditionally #define the variables in Include/objimpl.h to #define > > the #defines needed to override the memory implementation if > > WITH_PYMALLOC is #defined > > > > And *that's it*. That's not my definition of "touches a lot of code". > > OK, I just looked, and I agree. BTW, for those who want to look, the > URL is: > > http://sourceforge.net/patch/?func=detailpatch&patch_id=101104&group_id=5470 > > This is currently assigned to Barry. Barry, can you see if this is > truly fit for inclusion? Or am I missing something? > > Note that there's a companion patch that adds a memory profiler: > > http://sourceforge.net/patch/?func=detailpatch&patch_id=101229&group_id=5470 > > Should this also be applied? Is there a reason why it shouldn't? Since both patches must be explicitly enabled by a configure switch I'd suggest applying both of them -- this will give them much more testing. In the long run, I think that using such an allocator is better than trying to maintain free lists for each type separately.
-- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From tim.one at home.com Fri Feb 16 10:24:41 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 16 Feb 2001 04:24:41 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <20010215090551.J4924@xs4all.nl> Message-ID: [Thomas Wouters] > ... > I think what's missing is a library *tutorial*. How would that differ from the effbot guide (to the std library)? The Python (language) Tutorial can be pretty small, because the Python language is pretty small. But the libraries are massive, and growing, and are increasingly in the hands of people with no Unix experience, or even programming experience. So I suppose "tutorial" can mean many things. > The reference is exactly that, a reference, In part. In other parts (a good example is the profile docs) it's a lot of everything; in others it's so much "a reference" you can't figure out what it's saying unless you study the code (the pre-2.1 "random" docs sure come to mind). It's no more consistent in content level than anything else with umpteen authors. > and if we expand the reference we'll end up cursing it ourself, > should we ever need it. If the people who wanted "just a reference" were happy, I don't think David Beazley would have found an audience for his "Python Essential Reference". I can't argue about this, though, because nobody will ever agree. Guido doesn't want leisurely docs in the Reference Manual, nor does he like leisurely docs in docstrings. OTOH, those are what average and sub-average programmers *need*, and I write docs for them first, sneaking in examples when possible that I hope even experts will find pleasure in pondering. A good compromise by my lights-- and perhaps because I only care about the HTML docs, where "size" isn't apparent or a problem for navigation --would be to follow a terse but accurate reference with as many subsections as felt needed, with examples and rationale and tutorial material (has anyone ever figured how to use rexec or bastion from the docs? heh). But since nobody will agree with that either, I stick everything into docstrings and leave it to Fred to throw away the most useful parts for the "real docs" . > ... > I remember when I'd finished the Python tutorial and wondered where to > go next. I tried reading the library reference, but it was boring and > most of it not interesting (since it isn't built up to go from > seful/common -> rare, but just a list of all modules ordered by > service'.) Excellent point! I had the same question when I first learned Python, but at that time the libraries were maybe 10% of what's there now. I *still* didn't know where to go next. But I was pretty sure I didn't need the SGI multimedia libraries that occupied half the docs . > ... > I'll write the library tutorial once I finish the 'from-foo-import-* > considered harmful' chapter ;-) Hmm. Feel free to finish the listcomp PEP too . From mal at lemburg.com Fri Feb 16 10:53:50 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 16 Feb 2001 10:53:50 +0100 Subject: [Python-Dev] Adding pymalloc to 2.1b1 ?! (python-dev summary, 2001-02-01 - 2001-02-15) References: <3A8C563D.D9BB6E3E@lemburg.com> <200102152328.SAA32032@cj20424-a.reston1.va.home.com> <200102152354.SAA32281@cj20424-a.reston1.va.home.com> <14988.43156.191949.342241@anthem.wooz.org> Message-ID: <3A8CF8AE.F819D17D@lemburg.com> "Barry A. 
Warsaw" wrote: > > >>>>> "GvR" == Guido van Rossum writes: > > GvR> This is currently assigned to Barry. Barry, can you see if > GvR> this is truly fit for inclusion? Or am I missing something? > > I think I was wary of applying it without the chance to run it through > Insure when it was enabled. I can put that on my list of things to do > for beta1. That's a good idea, but why should it stop you from checking the patch in ? After all, it's opt-in, so people using it will know that they are building non-standard stuff. Perhaps we ought to add a note '(experimental)' to the configure flags ?! -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From thomas.heller at ion-tof.com Fri Feb 16 11:28:02 2001 From: thomas.heller at ion-tof.com (Thomas Heller) Date: Fri, 16 Feb 2001 11:28:02 +0100 Subject: [Python-Dev] Modulefinder? Message-ID: <02be01c09803$23fbc400$e000a8c0@thomasnotebook> Who is maintaining freeze/Modulefinder? I have some issues I would like to discuss... Thomas (Heller) From andy at reportlab.com Fri Feb 16 12:56:09 2001 From: andy at reportlab.com (Andy Robinson) Date: Fri, 16 Feb 2001 11:56:09 -0000 Subject: [Python-Dev] Documentation Tools (was Unit Testing) In-Reply-To: Message-ID: > That's Fred Drake, who I've copied on this. Dinu and Fred > should talk > directly if there's a problem. Membership in the doc-sig > is open, and Fred > couldn't block it even if he wanted to. Don't worry, it got resolved, and the problem was not of human origin :-) - Andy From thomas at xs4all.net Fri Feb 16 13:22:41 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Fri, 16 Feb 2001 13:22:41 +0100 Subject: [Python-Dev] Unit testing (again) In-Reply-To: ; from tim.one@home.com on Fri, Feb 16, 2001 at 04:24:41AM -0500 References: <20010215090551.J4924@xs4all.nl> Message-ID: <20010216132241.L4924@xs4all.nl> On Fri, Feb 16, 2001 at 04:24:41AM -0500, Tim Peters wrote: > [Thomas Wouters] > > ... > > I think what's missing is a library *tutorial*. > > How would that differ from the effbot guide (to the std library)? Not much, I bet, though I have to admit I haven't actually read the effbot guide ;-) It's just that going from the tutorial to the effbot guide (or any other book) is a fair-sized step, given that there are no pointers to them from the tutorial. I can't even *get* to the effbot guide from the documentation page (not with a decent number of clicks, anyway), not even through the PSA bookstore. > If the people who wanted "just a reference" were happy, I don't think David > Beazley would have found an audience for his "Python Essential Reference". Well, I never bought David's reference :) I only ever bought Programming Python, mostly because I saw it in a bookshop while I was in a post-tutorial, pre-usenet state ;) I'm also semi-permanently attached to the 'net, so the online docs at www.python.org are my best friend (next to docstrings, of course.) > A good compromise by my lights-- and perhaps because I only care about the > HTML docs, where "size" isn't apparent or a problem for navigation --would > be to follow a terse but accurate reference with as many subsections as felt > needed, with examples and rationale and tutorial material (has anyone ever > figured how to use rexec or bastion from the docs? heh). Definately +1 on that idea, well received or not it might be by others :) -- Thomas Wouters Hi! I'm a .signature virus! 
copy me into your .signature file to help me spread! From gregor at mediasupervision.de Fri Feb 16 13:34:16 2001 From: gregor at mediasupervision.de (Gregor Hoffleit) Date: Fri, 16 Feb 2001 13:34:16 +0100 Subject: Python 2.0 in Debian (was: Re: [Python-Dev] PEPS, version control, release intervals) In-Reply-To: <20010205164557.B990@thrak.cnri.reston.va.us>; from akuchlin@mems-exchange.org on Mon, Feb 05, 2001 at 04:45:57PM -0500 References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> Message-ID: <20010216133416.A19356@mediasupervision.de> On Mon, Feb 05, 2001 at 04:45:57PM -0500, Andrew Kuchling wrote: > A more critical issue might be why people haven't adopted 2.0 yet; > there seems little reason is there to continue using 1.5.2, yet I > still see questions on the XML-SIG, for example, from people who > haven't upgraded. Is it that Zope doesn't support it? Or that Red > Hat and Debian don't include it? This needs fixing, or else we'll > wind up with a community scattered among lots of different versions. Sorry, I only got aware of this discussion when I read the recent python-dev summary. Here's the official word from Debian about this: Debian's unstable tree currently includes both Python 1.5.2 as well as 2.0. Python 1.5.2 things are packaged as python-foo-bar, while Python 2.0 is available as python2-foo-bar. It's possible to install either 1.5.2 or 2.0 or both of them. I have described the reasons for this dual packaging in /usr/share/doc/python2/README.why-python2 (included below): it's mainly about (a) backwards compatibility and (b) the license issue (the questionable GPL compatibility of the new license). The current setup shows a preference for the Python 1.5.2 packages: python1.5.2 is linked to /usr/bin/python, while python2.0 is linked to /usr/bin/python2; a simple upgrade won't install Python 2.0, but will stick with Python 1.5.2. Furthermore, python-base is now a "standard" package in Debian woody (will be installed by default on most systems!), while python2-base is only "optional". I made this setup to enforce maintainers of other packages to check if their package was compatible with Python 2.0, and--important as well--if they thought that the license of their package was compatible with the new Python license. (a) is clearly only a temporary issue (with Zope being a big point currently) and will go away over the time. (b) is much more difficult, and won't simply vanish over time. I know that most of you guys are fed up with license discussions. Still, I dare to bring this back to your attentions: Most people seem to ignore the fact that the FSF considers the new Python license incompatible with the GPL--the FSF might be wrong in fact, but I think it's not a fair way of dealing with licenses to simply *ignore* their words. If somebody could give me a legal advice that the Python license is in fact compatible with the GPL, and if this was accepted by the guys at debian-legal at lists.debian.org, I would happily adopt this opinion and that would make (b) go away as well. Until this happens, I think the best way for Debian to handle this situation (clearly not perfect!) is to use a per-case judgement--if there's GPL code in a package, ask the author if it's okay to use it with Python2 code. If he says alright, go on with packaging. If he says nogo (as the FSF did for readline), do away with the package (therefore python2-base doesn't include readline support). Gregor README.why-python2: ------------------ Why python2 ? 
------------- Why are the Debian packages of Python 2.x called python2-base etc. instead of simply replacing the old python-base packages of version 1.5.2 ? Debian provides two sets of Python packages: - python-base etc. provides Python 1.5.2 - python2-base etc. provides Python 2.x. There are two major reasons for this: 1.) The transition from Python 1.5.2 to 2.0 is not completely flawless. There are a few incompatible changes in 2.0 that tend to break applications. E.g. Zope 2.2.5 is not yet prepared to work with Python 2.0. By providing both packages for Python 1.5.2 (python-*) and Python 2.0 (python2-*), the transition is much easier. 2.) The license of Python 2.0 has been changed, and restricted in some ways. According to the FSF, the license of Python 2.x is incompatible with the conditions of the General Public License (GPL). According to the FSF, the license of Python 2.x doesn't grant the licensee enough freedoms to use such code in a derived work together with code licensed under the GPL--this would result in a violation of the GPL. Other parties deny that this is indeed a violation of the GPL. Debian uses a significant portion of GPL code for which the FSF owns the copyright. In order to avoid legal conflicts over this, the python2-* packages are set up in a way that no GPL code will be used by default. It's the duty of maintainers of other packages to check if their license if compatible with the Python 2.x license, and then to repackage it accordingly (cf. python2/README.maintainers for hints). Jan 11, 2001 Gregor Hoffleit Last modified: 2000-01-11 From mal at lemburg.com Fri Feb 16 13:51:14 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 16 Feb 2001 13:51:14 +0100 Subject: Python 2.0 in Debian (was: Re: [Python-Dev] PEPS, version control, release intervals) References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010216133416.A19356@mediasupervision.de> Message-ID: <3A8D2242.49966DD4@lemburg.com> Gregor Hoffleit wrote: > > If somebody could give me a legal advice that the Python license is in fact > compatible with the GPL, and if this was accepted by the guys at > debian-legal at lists.debian.org, I would happily adopt this opinion and that > would make (b) go away as well. > > Until this happens, I think the best way for Debian to handle this situation > (clearly not perfect!) is to use a per-case judgement--if there's GPL code > in a package, ask the author if it's okay to use it with Python2 code. If he > says alright, go on with packaging. Say, what kind of clause is needed in licenses to make them explicitly GPL-compatible without harming the license conditions in all other cases where the GPL is not involved ? > If he says nogo (as the FSF did for > readline), do away with the package (therefore python2-base doesn't include > readline support). Oh boy... about time we switch to editline as the default line editing package. 
-- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From gregor at mediasupervision.de Fri Feb 16 14:27:37 2001 From: gregor at mediasupervision.de (Gregor Hoffleit) Date: Fri, 16 Feb 2001 14:27:37 +0100 Subject: Python 2.0 in Debian (was: Re: [Python-Dev] PEPS, version control, release intervals) In-Reply-To: <3A8D2242.49966DD4@lemburg.com>; from mal@lemburg.com on Fri, Feb 16, 2001 at 01:51:14PM +0100 References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010216133416.A19356@mediasupervision.de> <3A8D2242.49966DD4@lemburg.com> Message-ID: <20010216142737.D30936@mediasupervision.de> On Fri, Feb 16, 2001 at 01:51:14PM +0100, M.-A. Lemburg wrote: > Gregor Hoffleit wrote: > > > > If somebody could give me a legal advice that the Python license is in fact > > compatible with the GPL, and if this was accepted by the guys at > > debian-legal at lists.debian.org, I would happily adopt this opinion and that > > would make (b) go away as well. > > > > Until this happens, I think the best way for Debian to handle this situation > > (clearly not perfect!) is to use a per-case judgement--if there's GPL code > > in a package, ask the author if it's okay to use it with Python2 code. If he > > says alright, go on with packaging. > > Say, what kind of clause is needed in licenses to make them explicitly > GPL-compatible without harming the license conditions in all other > cases where the GPL is not involved ? Hmm, during the great KDE confusion (KDE was GPL, and Qt was not compatible with the GPL), it was suggested that the authors of the KDE code should add this clause to their license boiler plate (cf. http://www.debian.org/News/1998/19981008): `This program is distributed under the GNU GPL v2, with the additional permission that it may be linked against Troll Tech's Qt library, and distributed, without the GPL applying to Qt'' (By the way, even the FSF uses a similar clause in the glibc license. The glibc license is the usual pointer to the GPL plus this clause: "As a special exception, if you link this library with files compiled with a GNU compiler to produce an executable, this does not cause the resulting executable to be covered by the GNU General Public License. This exception does not however invalidate any other reasons why the executable file might be covered by the GNU General Public License.") If you add something similar to your GPL code, that should work for the Python license, too. Evidently (cf. the URL above for an elaboration), the problem is that only the copyright holder of the code can add this clause. Your code with be perfectly compatible with pure GPL code, and it would be compatible with Python2 code. It would not be possible, though, to mix in some other pure GPL code, and link that with Python2 code--since the pure GPL code's license doesn't permit that. Silly, not ?? 
;-) Gregor From thomas at xs4all.net Fri Feb 16 15:14:17 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Fri, 16 Feb 2001 15:14:17 +0100 Subject: Python 2.0 in Debian (was: Re: [Python-Dev] PEPS, version control, release intervals) In-Reply-To: <20010216142737.D30936@mediasupervision.de>; from gregor@mediasupervision.de on Fri, Feb 16, 2001 at 02:27:37PM +0100 References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010216133416.A19356@mediasupervision.de> <3A8D2242.49966DD4@lemburg.com> <20010216142737.D30936@mediasupervision.de> Message-ID: <20010216151417.M4924@xs4all.nl> On Fri, Feb 16, 2001 at 02:27:37PM +0100, Gregor Hoffleit wrote: > (By the way, even the FSF uses a similar clause in the glibc license. The > glibc license is the usual pointer to the GPL plus this clause: > "As a special exception, if you link this library with files > compiled with a GNU compiler to produce an executable, this does > not cause the resulting executable to be covered by the GNU General > Public License. This exception does not however invalidate any > other reasons why the executable file might be covered by the GNU > General Public License.") So... if you link glibc with files compiled by a NON-GNU compiler, the resulting binary *has to be* glibc ? That's, well, fucked, if you pardon my french. But it's not my code, so all I can do is sigh ;-P > Evidently (cf. the URL above for an elaboration), the problem is that only > the copyright holder of the code can add this clause. Exactly. In this case, it's CNRI that dictates the licence, and they apparently are/were not convinced the license *isn't* compatible with the GPL, so they see no need to further muddle (or reduce the strength of) their licence. > Silly, not ?? ;-) Definately. -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From mal at lemburg.com Fri Feb 16 15:34:07 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 16 Feb 2001 15:34:07 +0100 Subject: [Python-Dev] Re: Python 2.0 in Debian References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010216133416.A19356@mediasupervision.de> <3A8D2242.49966DD4@lemburg.com> <20010216142737.D30936@mediasupervision.de> Message-ID: <3A8D3A5F.C9CD094C@lemburg.com> Gregor Hoffleit wrote: > > On Fri, Feb 16, 2001 at 01:51:14PM +0100, M.-A. Lemburg wrote: > > Gregor Hoffleit wrote: > > > > > > If somebody could give me a legal advice that the Python license is in fact > > > compatible with the GPL, and if this was accepted by the guys at > > > debian-legal at lists.debian.org, I would happily adopt this opinion and that > > > would make (b) go away as well. > > > > > > Until this happens, I think the best way for Debian to handle this situation > > > (clearly not perfect!) is to use a per-case judgement--if there's GPL code > > > in a package, ask the author if it's okay to use it with Python2 code. If he > > > says alright, go on with packaging. > > > > Say, what kind of clause is needed in licenses to make them explicitly > > GPL-compatible without harming the license conditions in all other > > cases where the GPL is not involved ? > > Hmm, during the great KDE confusion (KDE was GPL, and Qt was not compatible > with the GPL), it was suggested that the authors of the KDE code should add > this clause to their license boiler plate (cf. 
> http://www.debian.org/News/1998/19981008): > > `This program is distributed under the GNU GPL v2, with the > additional permission that it may be linked against Troll Tech's Qt > library, and distributed, without the GPL applying to Qt'' Uhm, that's backwards from what I had in mind with the question. Sorry for not being more to the point. Here's the "problem" I have: I want to put my code under a license similar to the Python 2 license (that is including the choice of law clause which caused all this trouble). Since some of my code is already being used by GPL-software out there,I would like to add some kind of extra-clause to the license which permits the GPL-code authors to the new versions as well. This is somewhat similar to the problem that Python2 has with the GPL; don't know how CNRI is going to change the license for 1.6.1, but I want to include something similar in my license. Anyway, since Debian is very sensitive to these issues, I thought I'd ask you for a possible way out. Thanks, -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From gregor at mediasupervision.de Fri Feb 16 15:51:26 2001 From: gregor at mediasupervision.de (Gregor Hoffleit) Date: Fri, 16 Feb 2001 15:51:26 +0100 Subject: [Python-Dev] Re: Python 2.0 in Debian In-Reply-To: <3A8D3A5F.C9CD094C@lemburg.com>; from mal@lemburg.com on Fri, Feb 16, 2001 at 03:34:07PM +0100 References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010216133416.A19356@mediasupervision.de> <3A8D2242.49966DD4@lemburg.com> <20010216142737.D30936@mediasupervision.de> <3A8D3A5F.C9CD094C@lemburg.com> Message-ID: <20010216155125.E30936@mediasupervision.de> On Fri, Feb 16, 2001 at 03:34:07PM +0100, M.-A. Lemburg wrote: > Here's the "problem" I have: I want to put my code under a license > similar to the Python 2 license (that is including the choice of > law clause which caused all this trouble). Why don't you simply remove the first sentence of this clause ("This License Agreement shall be governed by and interpreted in all respects by the law of the State of Virginia, excluding conflict of law provisions.") ? Is there any reason for you to include this choice of law clause anyway, if you don't live in Virginia ? Gregor > Since some of my code is already being used by GPL-software > out there,I would like to add some kind of extra-clause to > the license which permits the GPL-code authors to the new versions > as well. > > This is somewhat similar to the problem that Python2 has with the GPL; > don't know how CNRI is going to change the license for 1.6.1, but I > want to include something similar in my license. From mal at lemburg.com Fri Feb 16 16:24:03 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 16 Feb 2001 16:24:03 +0100 Subject: [Python-Dev] Re: Python 2.0 in Debian References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010216133416.A19356@mediasupervision.de> <3A8D2242.49966DD4@lemburg.com> <20010216142737.D30936@mediasupervision.de> <3A8D3A5F.C9CD094C@lemburg.com> <20010216155125.E30936@mediasupervision.de> Message-ID: <3A8D4613.551021EB@lemburg.com> Gregor Hoffleit wrote: > > On Fri, Feb 16, 2001 at 03:34:07PM +0100, M.-A. 
Lemburg wrote: > > Here's the "problem" I have: I want to put my code under a license > > similar to the Python 2 license (that is including the choice of > > law clause which caused all this trouble). > > Why don't you simply remove the first sentence of this clause ("This License > Agreement shall be governed by and interpreted in all respects by the law of > the State of Virginia, excluding conflict of law provisions.") ? > > Is there any reason for you to include this choice of law clause anyway, if > you don't live in Virginia ? I have to make the governing law the German law since that is where my company is located. The text from my version is: """ This License Agreement shall be governed by and interpreted in all respects by the law of Germany, excluding conflict of law provisions. It shall not be governed by the United Nations Convention on Contracts for International Sale of Goods. """ Does anyone know of the wording of the new 1.6.1 license ? -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From fdrake at acm.org Fri Feb 16 16:23:18 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Fri, 16 Feb 2001 10:23:18 -0500 (EST) Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) In-Reply-To: References: <000801c097a0$41397520$e46940d5@hagrid> Message-ID: <14989.17894.829429.368417@cj42289-a.reston1.va.home.com> Tim Peters writes: > Oh, ya, "[" has to be excluded because the listcomp itself temporarily > creates an artificial name beginning with "[". Wow! Perhaps listcomps should use names like _[1] instead, just to reduce the number of special cases. -Fred -- Fred L. Drake, Jr. PythonLabs at Digital Creations From gregor at mediasupervision.de Fri Feb 16 16:47:44 2001 From: gregor at mediasupervision.de (Gregor Hoffleit) Date: Fri, 16 Feb 2001 16:47:44 +0100 Subject: [Python-Dev] Re: Python 2.0 in Debian In-Reply-To: <3A8D4613.551021EB@lemburg.com>; from mal@lemburg.com on Fri, Feb 16, 2001 at 04:24:03PM +0100 References: <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010216133416.A19356@mediasupervision.de> <3A8D2242.49966DD4@lemburg.com> <20010216142737.D30936@mediasupervision.de> <3A8D3A5F.C9CD094C@lemburg.com> <20010216155125.E30936@mediasupervision.de> <3A8D4613.551021EB@lemburg.com> Message-ID: <20010216164744.F30936@mediasupervision.de> On Fri, Feb 16, 2001 at 04:24:03PM +0100, M.-A. Lemburg wrote: > Gregor Hoffleit wrote: > > Is there any reason for you to include this choice of law clause anyway, if > > you don't live in Virginia ? > > I have to make the governing law the German law since that is where > my company is located. The text from my version is: > > """ > This License Agreement shall be governed by and interpreted in all > respects by the law of Germany, excluding conflict of law > provisions. It shall not be governed by the United Nations Convention > on Contracts for International Sale of Goods. > """ Well, I guess that beyond my legal scope (why is it reasonable to exclude that UN Convention ?), and certainly it gets quite off-topic on this list. Is it really necessary to make a choice of law, and how does it help you? (I mean, the GPL, the X11 license, BSD-like licenses, the Apache license and the old Python license all work without such a clause). 
AFAIK, RMS and his lawyer say that any restriction on the choice of law is incompatible with the GPL, therefore I don't see how you could include such a clause in the license and still make it compatible with the GPL. If you're interested in some opinions from Debian, would you mind to send a mail to debian-legal at lists.debian.org and ask there for comments ? Have you considered mailing to licensing at gnu.org and ask them for their opinion ? > > Does anyone know of the wording of the new 1.6.1 license ? I didn't even knew there will be a 1.6.1 release. Will there be a change in the license ? Gregor From fdrake at acm.org Fri Feb 16 17:19:28 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Fri, 16 Feb 2001 11:19:28 -0500 (EST) Subject: [Python-Dev] Unit testing (again) In-Reply-To: <20010216132241.L4924@xs4all.nl> References: <20010215090551.J4924@xs4all.nl> <20010216132241.L4924@xs4all.nl> Message-ID: <14989.21264.954177.217422@cj42289-a.reston1.va.home.com> On Fri, Feb 16, 2001 at 04:24:41AM -0500, Tim Peters wrote: > be to follow a terse but accurate reference with as many subsections as felt > needed, with examples and rationale and tutorial material (has anyone ever > figured how to use rexec or bastion from the docs? heh). Thomas Wouters writes: > Definately +1 on that idea, well received or not it might be by others :) So what sections can I expect you two to write for the Python 2.1 documentation? -Fred -- Fred L. Drake, Jr. PythonLabs at Digital Creations From sdm7g at virginia.edu Fri Feb 16 18:32:49 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Fri, 16 Feb 2001 12:32:49 -0500 (EST) Subject: [Python-Dev] platform specific files Message-ID: On macosx, besides the PyObjC (i.e.NextStep/OpenStep/Cocoa) module, I now have a good chunk of the MacOS Carbon based toolkit modules ported (though not tested): Python 2.1a2 (#1, 02/12/01, 19:49:54) [GCC Apple DevKit-based CPP 5.0] on Darwin1.2 Type "copyright", "credits" or "license" for more information. >>> import Carbon >>> dir(Carbon) ['AE', 'App', 'Cm', 'ColorPicker', 'Ctl', 'Dlg', 'Drag', 'Evt', 'Fm', 'HtmlRender', 'Icn', 'List', 'Menu', 'Qd', 'Qdoffs', 'Res', 'Scrap', 'Snd', 'TE', 'Win', '__doc__', '__file__', '__name__', 'macfs'] >>> Jack has always maintained the Mac distribution separately, but that was largely because the Metrowerks compiler environment was radically different from unix make/gcc and friends. That's no longer the case on macosx. ( Although, it looks like we will end up, for a while, at least, with 3 versions on OSX: Classic, Carbonized-MacPython, and the unix build of Python with Carbon and Cocoa libs. ) I note that 2.1a2 still has BeOS and PC specific directories, although the Nt & sgi directories that were in older releases are gone. I'm guessing the current wish is to keep as much platform dependent stuff as possible separate and managed with disutils, and construct separate platform-specific distributions my merging them on each release. How is all of this handled in the various Windows distributions ? ( And in the light of that, is there anything particular I should avoid? ) -- Steve M. From skip at mojam.com Fri Feb 16 19:28:06 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 16 Feb 2001 12:28:06 -0600 (CST) Subject: [Python-Dev] Re: Upgrade? Not for some time... (fwd) Message-ID: <14989.28982.533172.930519@beluga.mojam.com> FYI, for those of you who don't read c.l.py on a regular basis. Skip -------------- next part -------------- An embedded message was scrubbed... 
From: Steve Purcell Subject: Re: Upgrade? Not for some time... Date: Fri, 16 Feb 2001 09:35:38 +0100 Size: 2595 URL: From moshez at zadka.site.co.il Fri Feb 16 19:34:37 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Fri, 16 Feb 2001 20:34:37 +0200 (IST) Subject: Python 2.0 in Debian (was: Re: [Python-Dev] PEPS, version control, release intervals) In-Reply-To: <20010216151417.M4924@xs4all.nl> References: <20010216151417.M4924@xs4all.nl>, <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010216133416.A19356@mediasupervision.de> <3A8D2242.49966DD4@lemburg.com> <20010216142737.D30936@mediasupervision.de> Message-ID: <20010216183437.4C374A840@darjeeling.zadka.site.co.il> On Fri, 16 Feb 2001 15:14:17 +0100, Thomas Wouters wrote: > So... if you link glibc with files compiled by a NON-GNU compiler, the > resulting binary *has to be* glibc ? That's, well, fucked, if you pardon my > french. But it's not my code, so all I can do is sigh ;-P Thomas, glibc is not currently supported on any non-GNU systems (and for the sake of this discussion, NetBSD/FreeBSD/OpenBSD are GNU systems too, since the only compiler that works there is gcc) More, glibc uses so many gcc extensions that you probably will have a hard time getting it to compile with any other compiler. -- "I'll be ex-DPL soon anyway so I'm |LUKE: Is Perl better than Python? looking for someplace else to grab power."|YODA: No...no... no. Quicker, -- Wichert Akkerman (on debian-private)| easier, more seductive. For public key, finger moshez at debian.org |http://www.{python,debian,gnu}.org From jeremy at alum.mit.edu Fri Feb 16 20:27:36 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 16 Feb 2001 14:27:36 -0500 (EST) Subject: [Python-Dev] __all__ for pickle Message-ID: <14989.32552.239767.38203@w221.z064000254.bwi-md.dsl.cnc.net> I was just testing Zope with the latest CVS python and ran into trouble with the pickle module. The module has grown an __all__ attribute: __all__ = ["PickleError", "PicklingError", "UnpicklingError", "Pickler", "Unpickler", "dump", "dumps", "load", "loads"] This definition excludes a lot of other names defined at the module level, like all of the constants for the pickle format, e.g. MARK, STOP, POP, PERSID, etc. It also excludes format_version and compatible_formats. I don't understand why these names were excluded from __all__. The Zope code uses "from pickle import *" and writes a custom pickler extension. It needs to have access to these names to be compatible, and I can't think of a good reason to forbid it. What's the right solution? Zap the __all__ attribute; the namespace pollution that results is fairly small (marshal, sys, struct, the contents of tupes). Make __all__ a really long list? I wonder how much breakage we should impose on people who use "from ... import *" for Python 2.1. As you know, I was an early advocate of the it's-sloppy-so-let-em-suffer philosophy, but I have learned the error of my ways. I worry that people will be unhappy with __all__ if other modules suffer from similar code breakage. Has __all__ been described by a PEP? If so, it ought to be posted to c.l.py for discussion. If not, we should probably write a short PEP. It would probably be a page of text, but it would help clarify that confusion that persists about what __all__ is for and what its consequences are. 
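The semantics at issue are easy to demonstrate with a toy module -- hypothetical names, not the real pickle source:

    # toymod.py -- hypothetical stand-in for pickle
    MARK = '('                  # module-level "constants"
    STOP = '.'
    format_version = "1.3"

    def dump(obj, file):
        pass

    __all__ = ["dump"]          # only names listed here survive "import *"

    # elsewhere:
    #   from toymod import *
    # binds dump, but *not* MARK, STOP or format_version -- code that
    # reached them through a star-import now dies with a NameError.

Which is exactly the breakage described above: the opcode constants are part of pickle's de facto interface even though they were never documented.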
Jeremy From tim.one at home.com Fri Feb 16 20:53:09 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 16 Feb 2001 14:53:09 -0500 Subject: [Python-Dev] __all__ for pickle In-Reply-To: <14989.32552.239767.38203@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: [Jeremy Hylton] > ... > Has __all__ been described by a PEP? No. IIRC, it popped up when Guido approved of a bulletproof __exports__ patch, and subsequent complaints revealed that was controversial. Then __all__ somehow made it in without opposition, in analogy with the special __all__ attribute of __init__.py files (which doesn't appear to have made it into the Lang or Lib refs, but is documented in Guido's package essay on python.org, and in the Tutorial(?!)). > ... > If not, we should probably write a short PEP. It would probably > be a page of text, but it would help clarify that confusion that > persists about what __all__ is for and what its consequences are. I agree, but if someone can make time for that I'd much rather see Guido's package essay folded into the Lang Ref first. Packages have been part of the language since 1.5 ... From mal at lemburg.com Fri Feb 16 21:17:51 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 16 Feb 2001 21:17:51 +0100 Subject: [Python-Dev] __all__ for pickle References: <14989.32552.239767.38203@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <3A8D8AEF.3233507F@lemburg.com> Jeremy Hylton wrote: > > I was just testing Zope with the latest CVS python and ran into > trouble with the pickle module. > > The module has grown an __all__ attribute: > > __all__ = ["PickleError", "PicklingError", "UnpicklingError", "Pickler", > "Unpickler", "dump", "dumps", "load", "loads"] > > This definition excludes a lot of other names defined at the module > level, like all of the constants for the pickle format, e.g. MARK, > STOP, POP, PERSID, etc. It also excludes format_version and > compatible_formats. > > I don't understand why these names were excluded from __all__. The > Zope code uses "from pickle import *" and writes a custom pickler > extension. It needs to have access to these names to be compatible, > and I can't think of a good reason to forbid it. I guess it was a simple oversight. Why not add the constants to the above list ?! > What's the right solution? Zap the __all__ attribute; the namespace > pollution that results is fairly small (marshal, sys, struct, the > contents of tupes). Make __all__ a really long list? The latter, I guess. Some lambda hackery ought to fix this elegantly. > I wonder how much breakage we should impose on people who use "from > ... import *" for Python 2.1. As you know, I was an early advocate of > the it's-sloppy-so-let-em-suffer philosophy, but I have learned the > error of my ways. I worry that people will be unhappy with __all__ if > other modules suffer from similar code breakage. IMHO, we should try to get this right now, rather than later. The betas will get enough testing to reduce the breakage below the noise level. If there's still serious breakage somewhere, then patches will be simple: just comment out the __all__ definition -- even newbies will be able to do this (and feel great about it ;-). Besides, the __all__ attribute is a good way to define a module API and certainly can be put to good use in future Python versions, e.g. by declaring those module attribute read-only and pre-fetching them into function locals... > Has __all__ been described by a PEP? If so, it ought to be posted to > c.l.py for discussion. 
If not, we should probably write a short PEP. > It would probably be a page of text, but it would help clarify that > confusion that persists about what __all__ is for and what its > consequences are. No, there's no PEP for it. The reason is that __all__ has been in existence for quite a few years already. Previously it was just used for packages -- the patch just extended it's scope to simple modules. It is documented in the tutorial and the API docs, plus in Guido's essays. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From thomas at xs4all.net Fri Feb 16 21:37:52 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Fri, 16 Feb 2001 21:37:52 +0100 Subject: Python 2.0 in Debian (was: Re: [Python-Dev] PEPS, version control, release intervals) In-Reply-To: <20010216183437.4C374A840@darjeeling.zadka.site.co.il>; from moshez@zadka.site.co.il on Fri, Feb 16, 2001 at 08:34:37PM +0200 References: <20010216151417.M4924@xs4all.nl>, <14975.6541.43230.433954@beluga.mojam.com> <20010205164557.B990@thrak.cnri.reston.va.us> <20010216133416.A19356@mediasupervision.de> <3A8D2242.49966DD4@lemburg.com> <20010216142737.D30936@mediasupervision.de> <20010216151417.M4924@xs4all.nl> <20010216183437.4C374A840@darjeeling.zadka.site.co.il> Message-ID: <20010216213751.F22571@xs4all.nl> On Fri, Feb 16, 2001 at 08:34:37PM +0200, Moshe Zadka wrote: > On Fri, 16 Feb 2001 15:14:17 +0100, Thomas Wouters wrote: > > So... if you link glibc with files compiled by a NON-GNU compiler, the > > resulting binary *has to be* glibc [I meant GPL] ? That's, well, fucked, > > if you pardon my french. But it's not my code, so all I can do is sigh > > ;-P > Thomas, glibc is not currently supported on any non-GNU systems (and for the > sake of this discussion, NetBSD/FreeBSD/OpenBSD are GNU systems too, since > the only compiler that works there is gcc) > More, glibc uses so many gcc extensions that you probably will have a hard > time getting it to compile with any other compiler. That depends. Is a fork of gcc, sprouting all the features of gcc, a GNU compiler ? We're not talking technicalities here, we're talking legalities. "What's in a name" is no longer a rhetorical question in that context :) -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From tim.one at home.com Fri Feb 16 21:56:03 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 16 Feb 2001 15:56:03 -0500 Subject: Python 2.0 in Debian (was: Re: [Python-Dev] PEPS, version control, release intervals) In-Reply-To: <20010216133416.A19356@mediasupervision.de> Message-ID: [Gregor Hoffleit] > ... > I know that most of you guys are fed up with license discussions. Still, > I dare to bring this back to your attentions: Don't apologize -- the license remains an important issue to the Python developers too. We rarely mention it in public anymore simply because there's not yet anything new to say, while everything old has already been repeated countless times. > Most people seem to ignore the fact that the FSF considers the new Python > license incompatible with the GPL--the FSF might be wrong in fact, but I > think it's not a fair way of dealing with licenses to simply *ignore* > their words. 
Absolutely, and until this is resolved I urge that-- regardless of the legalities, and unless you're looking to pick a legal fight --everyone presume the copyright holder's position is correct. For me that's got nothing to do with the law, it's simply respecting the wishes of the people who own the code. > If somebody could give me a legal advice that the Python license > is in fact compatible with the GPL, and if this was accepted by the > guys at debian-legal at lists.debian.org, I would happily adopt this > opinion and that would make (b) go away as well. Let's not even go there. Nothing legal is ever settled "for good" in the US. This tack is hopeless. The FSF and CNRI are still talking about this! There's still hope that it will be resolved between them. If they can agree on a compromise, we'll move as quickly as possible to implement it. Indeed, those who read the Python checkin msgs have hints that we're very optimistic about a friendly resolution. But we've got no control over when that may happen, and the negotiations so far have proceeded at a pace that can only be described as glacial. > ... > Until this happens, I think the best way for Debian to handle this > situation (clearly not perfect!) is to use a per-case judgement--if > there's GPL code in a package, ask the author if it's okay to use > it with Python2 code. If he says alright, go on with packaging. If > he says nogo (as the FSF did for readline), do away with the package > (therefore python2-base doesn't include readline support). I personally agree that's the best compromise we can get for now, and greatly appreciate your willingness to endure this much special-case fiddling on Python's behalf! We'll continue to do all that we can to ensure that you won't have to endure this the next time around. although-that's-rather-like-saying-we'll-do-all-we-can-to-ensure- the-sun-doesn't-go-nova -ly y'rs - tim From tim.one at home.com Fri Feb 16 22:24:10 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 16 Feb 2001 16:24:10 -0500 Subject: [Python-Dev] Unit testing (again) In-Reply-To: <14989.21264.954177.217422@cj42289-a.reston1.va.home.com> Message-ID: [Fred L. Drake, Jr.] > So what sections can I expect you two to write for the Python 2.1 > documentation? I'm waiting for you to clear the backlog of the ones I've already written . From tim.one at home.com Fri Feb 16 22:45:01 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 16 Feb 2001 16:45:01 -0500 Subject: [Python-Dev] Re: Python 2.0 in Debian In-Reply-To: <20010216164744.F30936@mediasupervision.de> Message-ID: [Gregor Hoffleit] > I didn't even knew there will be a 1.6.1 release. Will there be a > change in the license ? There will be a 1.6.1 release if and only if CNRI and the FSF reach agreement. If and when that happens, we (PythonLabs) will build a 1.6.1 release for CNRI with the new license, and then re-release the then-current Python as a derivative of 1.6.1. But it's premature to talk about that, because nothing is settled yet, and it doesn't address the license inherited from BeOpen.com. MAL, a choice-of-clause clause won't work any better for you (in the FSF's eyes) than it did for CNRI. Gregor, legal language is ambiguous. That's why virtually all *commercial* licenses in the US contain a choice-of-law clause ("of the 50 possible meanings of this phrase, I intended this specific one"). 
*If* and when somebody actually prevails in suing an open source provider due to the lack of choice-of-law, non-commercial providers will have a lot to think about here (it's easy to be complacent when you've never been burned). Here's a paradox: the FSF objects to choice-of-law because they don't want the language interpreted by the courts in the Kingdom of Unfreedonia (who could effectively negate the GPL's intent). CNRI objects to not having choice-of-law because they don't want the language interpreted by the courts in the Kingdom of Unlimited Liability (who could effectively negate all of CNRI's liability disclaimers). So in that sense, they're both seeking similar ends. That's why there's still hope for compromise. it-would-be-interesting-if-it-were-happening-to-somebody-else -ly y'rs - tim From tim.one at home.com Fri Feb 16 22:55:45 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 16 Feb 2001 16:55:45 -0500 Subject: Python 2.0 in Debian (was: Re: [Python-Dev] PEPS, version control, release intervals) In-Reply-To: <3A8D2242.49966DD4@lemburg.com> Message-ID: [M.-A. Lemburg] > Say, what kind of clause is needed in licenses to make them explicitly > GPL-compatible without harming the license conditions in all other > cases where the GPL is not involved ? You can dual-license (see, e.g., Perl). From skip at mojam.com Fri Feb 16 23:00:02 2001 From: skip at mojam.com (Skip Montanaro) Date: Fri, 16 Feb 2001 16:00:02 -0600 (CST) Subject: [Python-Dev] Re: __all__ for pickle In-Reply-To: <14989.32552.239767.38203@w221.z064000254.bwi-md.dsl.cnc.net> References: <14989.32552.239767.38203@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <14989.41698.490018.793622@beluga.mojam.com> Jeremy> I was just testing Zope with the latest CVS python and ran into Jeremy> trouble with the pickle module. Jeremy> The module has grown an __all__ attribute: Jeremy> __all__ = ["PickleError", "PicklingError", "UnpicklingError", "Pickler", Jeremy> "Unpickler", "dump", "dumps", "load", "loads"] Jeremy> This definition excludes a lot of other names defined at the Jeremy> module level, like all of the constants for the pickle format, Jeremy> e.g. MARK, STOP, POP, PERSID, etc. It also excludes Jeremy> format_version and compatible_formats. In deciding what to include in __all__ up to this point I have only had my personal experience with the modules and the documentation to help me decide what to include. My initial assumption was that undocumented module-level constants were not to be exported. I just added the following to my version of pickle: __all__.extend([x for x in dir() if re.match("[A-Z][A-Z0-9_]*$",x)]) That seems to catch all the defined constants. Let me know if that's sufficient in this case. Skip From tim.one at home.com Fri Feb 16 23:44:06 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 16 Feb 2001 17:44:06 -0500 Subject: [Python-Dev] Re: __all__ for pickle In-Reply-To: <14989.41698.490018.793622@beluga.mojam.com> Message-ID: [Skip Montanaro] > In deciding what to include in __all__ up to this point I have only had > my personal experience with the modules and the documentation to help > me decide what to include. My initial assumption was that undocumented > module-level constants were not to be exported. And it's been a very educational exercise! Thank you for pursuing it. The fact is we often don't know what authors intended to export, and it's Good to try to make that explicit. I'm still not sure I've got any use for __all__, though . 
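Skip's one-liner relies only on dir() and the re module; pulled out into a self-contained sketch (again a toy module, not the checked-in pickle change), the idea looks like this:

    import re

    MARK = '('
    STOP = '.'
    PERSID = 'P'
    format_version = "1.3"

    def dump(obj, file):
        pass

    __all__ = ["dump"]
    # Add every module-level ALL_CAPS name, so that "import *" keeps
    # exporting the opcode constants.
    __all__.extend([x for x in dir() if re.match("[A-Z][A-Z0-9_]*$", x)])

    # __all__ is now ['dump', 'MARK', 'PERSID', 'STOP'] -- note that
    # lower-case names such as format_version (and compatible_formats in
    # the real module) still have to be listed by hand.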
sure-"a-problem"-has-been-identified-but-not-sure-the-solution- has-been-ly y'rs - tim From mal at lemburg.com Fri Feb 16 23:22:23 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri, 16 Feb 2001 23:22:23 +0100 Subject: Python 2.0 in Debian (was: Re: [Python-Dev] PEPS, version control, release intervals) References: Message-ID: <3A8DA81F.55DCF038@lemburg.com> Tim Peters wrote: > > [M.-A. Lemburg] > > Say, what kind of clause is needed in licenses to make them explicitly > > GPL-compatible without harming the license conditions in all other > > cases where the GPL is not involved ? > > You can dual-license (see, e.g., Perl). Right and it looks as if this is the only way out: either you give people (including commercial users) more freedom in the use of the code and add a choice-of-law clause or you restrain usage to GPLed code and cross fingers that noone is going to sue the hell out of you... doesn't really matter if the opponent lives in Kingdom of Unlimited Liability or not -- the costs of finding out which law to apply and where to settle the dispute would already suffice to bring the open source developer down to his/her knees. Oh well, -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From tim.one at home.com Sat Feb 17 06:31:31 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 17 Feb 2001 00:31:31 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre_constants.py (etc) In-Reply-To: <14989.17894.829429.368417@cj42289-a.reston1.va.home.com> Message-ID: [Tim] > Oh, ya, "[" has to be excluded because the listcomp itself temporarily > creates an artificial name beginning with "[". [Fred L. Drake, Jr.] > Wow! Perhaps listcomps should use names like _[1] instead, just to > reduce the number of special cases. Well, it seems a terribly minor point ... so I dropped everything else and checked in a change to do just that . every-now-&-again-you-gotta-do-something-just-cuz-it's-right-ly y'rs - tim From skip at mojam.com Sat Feb 17 16:29:34 2001 From: skip at mojam.com (Skip Montanaro) Date: Sat, 17 Feb 2001 09:29:34 -0600 (CST) Subject: [Python-Dev] Re: __all__ for pickle In-Reply-To: References: <14989.41698.490018.793622@beluga.mojam.com> Message-ID: <14990.39134.483892.880071@beluga.mojam.com> Tim> I'm still not sure I've got any use for __all__, though . That may be true. I think the canonical case that is being defended against is a module-level symbol in one module obscuring a builtin, e.g.: # a.py def list(s): return s # b.py from a import * ... l = list(('a','b','c')) I suspect in the long-run there's a better way to accomplish this than adding __all__ to most Python modules, perhaps pylint. Which reminds me... I did write something once upon a time to catch symbols that hide builtins, only at more than the module level: http://musi-cal.mojam.com/~skip/python/hiding.py Skip From ping at lfw.org Sun Feb 18 11:43:45 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Sun, 18 Feb 2001 02:43:45 -0800 (PST) Subject: [Python-Dev] Join python-iter@yahoogroups.com to discuss PEP 234 Message-ID: Hello all, I just wanted to let you know that i'm trying to move the PEP 234 and iterator discussion over to Greg's mailing list, python-iter at yahoogroups.com. Greg set it up quite a while ago but i didn't have time to respond to anything then. 
Today i had time to send a few messages to the group and i'd like to start the discussion up again. If you're interested in talking about it, please join! http://groups.yahoo.com/group/python-iter Thanks! -- ?!ng From barry at scottb.demon.co.uk Sun Feb 18 14:01:06 2001 From: barry at scottb.demon.co.uk (Barry Scott) Date: Sun, 18 Feb 2001 13:01:06 -0000 Subject: [I18n-sig] Re: [Python-Dev] Pre-PEP: Python Character Model In-Reply-To: Message-ID: <001001c099aa$daebf240$060210ac@private> > Here's a thought. How about BinaryFile/BinarySocket/ByteArray which > do Files and sockets often contain a both string and binary data. Having StringFile and BinaryFile seems the wrong split. I'd think being able to write string and binary data is more useful for example having methods on file and socket like file.writetext, file.writebinary. NOw I can use the writetext to write the HTTP headers and writebinary to write the JPEG image say. BArry From zessin at decus.de Sun Feb 18 17:23:26 2001 From: zessin at decus.de (zessin at decus.de) Date: Sun, 18 Feb 2001 17:23:26 +0100 Subject: [Python-Dev] OpenVMS import (was Re: Windows/Cygwin/MacOSX import (was RE: python-dev summary, 2001-02-01 - 2001-02-15) Message-ID: <009F7D57.F76B21F7.2@decus.de> Cameron Laird wrote: >In article , >Tim Peters wrote: >>[Michael Hudson] >>> ... >>> * Imports on case-insensitive file systems * >>> >>> There was quite some discussion about how to handle imports on a >>> case-insensitive file system (eg. on Windows). I didn't follow the >>> details, but Tim Peters is on the case (sorry), so I'm confident it >>> will get sorted out. >> >>You can be sure the whitespace will be consistent, anyway . > . > . > . >>them is ugly. We're already supporting (and will continue to support) >>PYTHONCASEOK for their benefit, but they don't deserve multiple hacks in >>2001. >> >>Flame at will. >> >>or-flame-at-tim-your-choice-ly y'rs - tim > >1. Thanks. Along with all the other benefits, I find > this explanation FAR more entertaining than anything > network television broadcasts (although nearly as > tendentious as "The West Wing"). >2. I hope a few OS/400 and OpenVMS refugees convert and > walk through the door soon. *That* would make for a > nice dose of fun. Let's see if I can explain the OpenVMS part. I'll do so by walking over Tim's text. (I'll step carefully over it. I don't intend to destroy it, Tim ;-) ] Here's the scoop: file systems vary across platforms in whether or not they ] preserve the case of filenames, and in whether or not the platform C library ] file-opening functions do or don't insist on case-sensitive matches: ] ] ] case-preserving case-destroying ] +-------------------+------------------+ ] case-sensitive | most Unix flavors | brrrrrrrrrr | ] +-------------------+------------------+ ] case-insensitive | Windows | some unfortunate | ] | MacOSX HFS+ | network schemes | ] | Cygwin | | | | OpenVMS | ] +-------------------+------------------+ Phew. I'm glad we're only 'unfortunate' and not in the 'brrrrrrrrrr' section ;-) ] In the upper left box, if you create "fiLe" it's stored as "fiLe", and only ] open("fiLe") will open it (open("file") will not, nor will the 14 other ] variations on that theme). ] In the lower right box, if you create "fiLe", there's no telling what it's ] stored as-- but most likely as "FILE" --and any of the 16 obvious variations ] on open("FilE") will open it. 
>>> f = open ('fiLe', 'w') $ directory f* Directory DSA3:[PYTHON.PYTHON-2_1A2CVS.VMS] FILE.;1 >>> f = open ('filE', 'r') >>> f >>> This is on the default file system (ODS-2). Only very recent versions of OpenVMS Alpha (V7.2 and up) support the ODS-5 FS that has Windows-like behaviour (case-preserving,case-insensitive), but many sites don't use it (yet). Also, there are many older versions running in the field that don't get upgraded any time soon. ] The lower left box is a mix: creating "fiLe" stores "fiLe" in the platform ] directory, but you don't have to match case when opening it; any of the 16 ] obvious variations on open("FILe") work. Same here. ] What's proposed is to change the semantics of Python "import" statements, ] and there *only* in the lower left box. ] ] Support for MaxOSX HFS+, and for Cygwin, is new in 2.1, so nothing is ] changing there. What's changing is Windows behavior. Here are the current ] rules for import on Windows: ] ] 1. Despite that the filesystem is case-insensitive, Python insists on ] a case-sensitive match. But not in the way the upper left box works: ] if you have two files, FiLe.py and file.py on sys.path, and do ] ] import file ] ] then if Python finds FiLe.py first, it raises a NameError. It does ] *not* go on to find file.py; indeed, it's impossible to import any ] but the first case-insensitive match on sys.path, and then only if ] case matches exactly in the first case-insensitive match. For OpenVMS I have just changed 'import.c': MatchFilename() and some code around it is not executed. ] 2. An ugly exception: if the first case-insensitive match on sys.path ] is for a file whose name is entirely in upper case (FILE.PY or ] FILE.PYC or FILE.PYO), then the import silently grabs that, no matter ] what mixture of case was used in the import statement. This is ] apparently to cater to miserable old filesystems that really fit in ] the lower right box. But this exception is unique to Windows, for ] reasons that may or may not exist . I guess that is Windows-specific code? Something to do with 'allcaps8x3()'? ] 3. And another exception: if the envar PYTHONCASEOK exists, Python ] silently grabs the first case-insensitive match of any kind. The check is in 'check_case()', but there is no OpenVMS implementation (yet). ] So these Windows rules are pretty complicated, and neither match the Unix ] rules nor provide semantics natural for the native filesystem. That makes ] them hard to explain to Unix *or* Windows users. Nevertheless, they've ] worked fine for years, and in isolation there's no compelling reason to ] change them. ] However, that was before the MacOSX HFS+ and Cygwin ports arrived. They ] also have case-preserving case-insensitive filesystems, but the people doing ] the ports despised the Windows rules. Indeed, a patch to make HFS+ act like ] Unix for imports got past a reviewer and into the code base, which ] incidentally made Cygwin also act like Unix (but this met the unbounded ] approval of the Cygwin folks, so they sure didn't complain -- they had ] patches of their own pending to do this, but the reviewer for those balked). ] ] At a higher level, we want to keep Python consistent, and I in particular ] want Python to do the same thing on *all* platforms with case-preserving ] case-insensitive filesystems. Guido too, but he's so sick of this argument ] don't ask him to confirm that <0.9 wink>. What are you thinking about the 'unfortunate / OpenVMS' group ? Hey, it could be worse, could be 'brrrrrrrrrr'... 
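For readers without import.c at hand, the rule being discussed works out to roughly the following Python-level sketch. This is only an illustration of the semantics, not the real check_case() code, and the helper name is invented:

    import os

    def case_ok(directory, wanted):
        # 'wanted' is the exact spelling the import statement asked for,
        # e.g. "file.py"; the platform may have matched it more loosely.
        if os.environ.has_key('PYTHONCASEOK'):
            return 1                  # accept any case-insensitive match
        for name in os.listdir(directory):
            if name == wanted:        # otherwise insist on an exact-case match
                return 1
        return 0

On a case-destroying volume like ODS-2 the exact-case test can never succeed for a mixed-case name, which is what the ALLCAPS exception (rule 2 above) and PYTHONCASEOK were catering to.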
] The proposed new semantics for the lower left box: ] ] A. If the PYTHONCASEOK envar exists, same as before: silently accept ] the first case-insensitive match of any kind; raise ImportError if ] none found. ] ] B. Else search sys.path for the first case-sensitive match; raise ] ImportError if none found. ] ] #B is the same rule as is used on Unix, so this will improve cross-platform ] portability. That's good. #B is also the rule the Mac and Cygwin folks ] want (and wanted enough to implement themselves, multiple times, which is a ] powerful argument in PythonLand). It can't cause any existing ] non-exceptional Windows import to fail, because any existing non-exceptional ] Windows import finds a case-sensitive match first in the path -- and it ] still will. An exceptional Windows import currently blows up with a ] NameError or ImportError, in which latter case it still will, or in which ] former case will continue searching, and either succeed or blow up with an ] ImportError. ] ] #A is needed to cater to case-destroying filesystems mounted on Windows, and ] *may* also be used by people so enamored of "natural" Windows behavior that ] they're willing to set an envar to get it. That's their problem . I ] don't intend to implement #A for Unix too, but that's just because I'm not ] clear on how I *could* do so efficiently (I'm not going to slow imports ] under Unix just for theoretical purity). ] ] The potential damage is here: #2 (matching on ALLCAPS.PY) is proposed to be ] dropped. Case-destroying filesystems are a vanishing breed, and support for ] them is ugly. We're already supporting (and will continue to support) ] PYTHONCASEOK for their benefit, but they don't deserve multiple hacks in ] 2001. Would using unique names be an acceptable workaround? ] Flame at will. ] ] or-flame-at-tim-your-choice-ly y'rs - tim No flame intended. Not at will and not at tim. >-- > >Cameron Laird >Business: http://www.Phaseit.net >Personal: http://starbase.neosoft.com/~claird/home.html -- Uwe Zessin From skip at mojam.com Sun Feb 18 19:07:40 2001 From: skip at mojam.com (Skip Montanaro) Date: Sun, 18 Feb 2001 12:07:40 -0600 (CST) Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib sre.py,1.29,1.30 sre_compile.py,1.35,1.36 sre_parse.py,1.43,1.44 sre_constants.py,1.26,1.27 In-Reply-To: References: Message-ID: <14992.3948.171057.408517@beluga.mojam.com> Fredrik> - removed __all__ cruft from internal modules (sorry, skip) No need to apologize to me. __all__ was proposed and nobody started implementing it, so I took it on. As I get further into it I'm less convinced that it's the right way to go. It buys you a fairly small increase in "comfort level" with a fairly large cost. Skip From mal at lemburg.com Sun Feb 18 20:30:30 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Sun, 18 Feb 2001 20:30:30 +0100 Subject: [Python-Dev] Activating SF mailing lists for PEP discussion. Message-ID: <3A9022D6.D60BE01@lemburg.com> Ping just recently posted a request here to discuss the iterator PEP on a yahoogroups mailing list. Since the move of eGroups under the Yahoo umbrella, joining those lists requires signing up with Yahoo -- with all strings attached. I don't know when they started this feature, but SourceForge now offers Mailman lists for the various projects. Wouldn't it be much simpler to open a mailing list for each PEP (possible on request only) ? 
That way, the archives would be kept in a cenral place and also in reach for other developers who are interested in the background discussions about the PEPs. Also, the PEPs could reference the mailing list archives to enhance the information availability. Thoughts ? I would appreciate if one of the Python SF admins would enable the feature and set up a mailing list for PEP 234 (iterators). Thanks, -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From fdrake at acm.org Sun Feb 18 20:29:58 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Sun, 18 Feb 2001 14:29:58 -0500 (EST) Subject: [Python-Dev] Activating SF mailing lists for PEP discussion. In-Reply-To: <3A9022D6.D60BE01@lemburg.com> References: <3A9022D6.D60BE01@lemburg.com> Message-ID: <14992.8886.425297.148106@cj42289-a.reston1.va.home.com> M.-A. Lemburg writes: > Ping just recently posted a request here to discuss the iterator > PEP on a yahoogroups mailing list. Since the move of eGroups under ... > Thoughts ? > > I would appreciate if one of the Python SF admins would enable the > feature and set up a mailing list for PEP 234 (iterators). I'd be glad to set up such a list, esp. if Ping and the members of the existing list opt to migrate to it. If people don't want to migrate, there's no need to set up a new list. Ping? -Fred -- Fred L. Drake, Jr. PythonLabs at Digital Creations From ping at lfw.org Sun Feb 18 20:39:30 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Sun, 18 Feb 2001 11:39:30 -0800 (PST) Subject: [Python-Dev] Activating SF mailing lists for PEP discussion. In-Reply-To: <14992.8886.425297.148106@cj42289-a.reston1.va.home.com> Message-ID: On Sun, 18 Feb 2001, Fred L. Drake, Jr. wrote: > M.-A. Lemburg writes: > > I would appreciate if one of the Python SF admins would enable the > > feature and set up a mailing list for PEP 234 (iterators). > > I'd be glad to set up such a list, esp. if Ping and the members of > the existing list opt to migrate to it. If people don't want to > migrate, there's no need to set up a new list. > Ping? Sure, that's fine. I had my reservations about using yahoogroups too, but since Greg had already established a list there i didn't want to duplicate his work. But i definitely agree that mailman is a better option. I've already forwarded copies of everyone's messages to yahoogroups, but after the new list is up i can do it again. -- ?!ng From martin at loewis.home.cs.tu-berlin.de Sun Feb 18 21:57:29 2001 From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis) Date: Sun, 18 Feb 2001 21:57:29 +0100 Subject: [Python-Dev] Activating SF mailing lists for PEP discussion. Message-ID: <200102182057.f1IKvTB00992@mira.informatik.hu-berlin.de> > Wouldn't it be much simpler to open a mailing list for each PEP > (possible on request only) ? That was my first thought as well. The Python SF project does not currently use mailing lists because there was no need, but PEP discussion seems to be exactly the right usage of the feature. Regards, Martin From fdrake at acm.org Mon Feb 19 07:06:05 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Mon, 19 Feb 2001 01:06:05 -0500 (EST) Subject: [Python-Dev] Activating SF mailing lists for PEP discussion. In-Reply-To: References: <14992.8886.425297.148106@cj42289-a.reston1.va.home.com> Message-ID: <14992.47053.305380.752501@cj42289-a.reston1.va.home.com> Ka-Ping Yee writes: > Sure, that's fine. 
I had my reservations about using yahoogroups > too, but since Greg had already established a list there i didn't > want to duplicate his work. But i definitely agree that mailman > is a better option. I've just submitted the list-creation form for python-iterators at lists.sourceforge.net; I'll set you up as admin for the list once it exists (they say it takes 6-24 hours). -Fred -- Fred L. Drake, Jr. PythonLabs at Digital Creations From MarkH at ActiveState.com Mon Feb 19 10:38:24 2001 From: MarkH at ActiveState.com (Mark Hammond) Date: Mon, 19 Feb 2001 20:38:24 +1100 Subject: [Python-Dev] Modulefinder? In-Reply-To: <02be01c09803$23fbc400$e000a8c0@thomasnotebook> Message-ID: [Thomas] > Who is maintaining freeze/Modulefinder? > > I have some issues I would like to discuss... [long silence] I guess this make it you then ;-) I wouldn't mind seeing this move into distutils as a module others could draw on. For example, "freeze" could be modifed by the next person game enough to touch it to reference the module directly in the distutils package? It keeps the highly useful module alive, and makes "ownership" more obvious - whoever owns distutils also gets this baggage Mark. From jack at oratrix.nl Mon Feb 19 12:20:21 2001 From: jack at oratrix.nl (Jack Jansen) Date: Mon, 19 Feb 2001 12:20:21 +0100 Subject: [Python-Dev] Demo/embed/import.c Message-ID: <20010219112022.9721F371690@snelboot.oratrix.nl> Can I request that the new file Demo/embed/import.c be renamed? The name clashes with the import.c we all know and love, and the setup of things under CodeWarrior on the Mac is such that it will search for sourcefiles recursively from the root of the Python sourcefolder. I can fix this, of course, but it's a lot of work... -- Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++ Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++ www.oratrix.nl/~jack | ++++ see http://www.xs4all.nl/~tank/ ++++ From thomas.heller at ion-tof.com Mon Feb 19 14:46:54 2001 From: thomas.heller at ion-tof.com (Thomas Heller) Date: Mon, 19 Feb 2001 14:46:54 +0100 Subject: [Python-Dev] Modulefinder? References: Message-ID: <00a401c09a7a$6d2060e0$e000a8c0@thomasnotebook> > [Thomas] > > Who is maintaining freeze/Modulefinder? > > > > I have some issues I would like to discuss... > > [long silence] > > I guess this make it you then ;-) > That's not what I wanted to hear ;-), but anyway, since you answered, I assume you have something to do with it. > I wouldn't mind seeing this move into distutils as a module others could > draw on. For example, "freeze" could be modifed by the next person game > enough to touch it to reference the module directly in the distutils > package? > > It keeps the highly useful module alive, and makes "ownership" more > obvious - whoever owns distutils also gets this baggage Sounds good, but currently I would like to concentrate an technical rather than administrative details. The following are the ideas: 1. Modulefinder does not handle cases where packages export names referring to functions or variables, rather than modules. Maybe the scan_code method, which looks for IMPORT opcode, could be extended to handle STORE_NAME opcodes which are not preceeded by IMPORT opcodes. 2. Modulefinder uses imp.find_module to find modules, and partly catches ImportErrors. imp.find_module can also raise NameErrors on windows, if the case does not fit. They should be catched. 3. 
Weird idea (?): Modulefinder could try instead of only scanning the opcodes to actually _import_ modules (at least extension modules, otherwise it will not find _any_ dependencies). Thomas From fdrake at users.sourceforge.net Mon Feb 19 17:50:52 2001 From: fdrake at users.sourceforge.net (Fred L. Drake) Date: Mon, 19 Feb 2001 08:50:52 -0800 Subject: [Python-Dev] [development doc updates] Message-ID: The development version of the documentation has been updated: http://python.sourceforge.net/devel-docs/ From jeremy at alum.mit.edu Mon Feb 19 21:18:03 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Mon, 19 Feb 2001 15:18:03 -0500 (EST) Subject: [Python-Dev] Windows/Cygwin/MacOSX import (was RE: python-dev summary, 2001-02-01 - 2001-02-15) In-Reply-To: References: Message-ID: <14993.32635.85544.343209@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "TP" == Tim Peters writes: TP> [Michael Hudson] >> ... >> * Imports on case-insensitive file systems * >> >> There was quite some discussion about how to handle imports on a >> case-insensitive file system (eg. on Windows). I didn't follow >> the details, but Tim Peters is on the case (sorry), so I'm >> confident it will get sorted out. TP> You can be sure the whitespace will be consistent, anyway TP> . TP> OK, this one sucks. It should really have gotten a PEP, but it TP> cropped up too late in the release cycle and it can't be delayed TP> (see below). It would be good to capture this in an informational PEP that just describes what the policy is and why. If nothing else, it could be a copy of Tim's message immortalized with a PEP number. Jeremy From moshez at zadka.site.co.il Tue Feb 20 06:43:41 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Tue, 20 Feb 2001 07:43:41 +0200 (IST) Subject: [Python-Dev] Demos are out of Data: Requesting Permission to Change Message-ID: <20010220054341.C4A93A840@darjeeling.zadka.site.co.il> Random example: Demo/scripts/pi.py: # Use int(d) to avoid a trailing L after each digit Would anyone have a problem if I just went and checked in updates to the demos? Putting it as a patch on SF seems like needless beuracracy. -- "I'll be ex-DPL soon anyway so I'm |LUKE: Is Perl better than Python? looking for someplace else to grab power."|YODA: No...no... no. Quicker, -- Wichert Akkerman (on debian-private)| easier, more seductive. For public key, finger moshez at debian.org |http://www.{python,debian,gnu}.org From MarkH at ActiveState.com Tue Feb 20 13:12:23 2001 From: MarkH at ActiveState.com (Mark Hammond) Date: Tue, 20 Feb 2001 23:12:23 +1100 Subject: [Python-Dev] Those import related syntax errors again... Message-ID: Hi all, I'm a little confused by the following exception: File "f:\src\python-cvs\xpcom\server\policy.py", line 18, in ? from xpcom import xpcom_consts, _xpcom, client, nsError, ServerException, COMException exceptions.SyntaxError: BuildInterfaceInfo: exec or 'import *' makes names ambiguous in nested scope (__init__.py, line 71) This sounds alot like Tim's question on this a while ago, and from all accounts this had been resolved (http://mail.python.org/pipermail/python-dev/2001-February/012456.html) In that mail, Jeremy writes: -- quote -- > from Percolator import Percolator > > in a class definition. That smells like a bug, not a debatable design > choice. Percolator has "from x import *" code. This is what is causing the exception. I think it has already been fixed in CVS though, so should work again. -- end quote -- However, Tim replied saying that it still didn't work for him. 
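(For reference, the pattern the new compiler objects to can be reduced to a few lines. This is a hedged, self-contained sketch -- the module and function names are stand-ins, not anything from xpcom -- and the exact wording of the SyntaxError varies between builds:

    def outer():
        from string import *       # wildcard import at function scope
        def inner(s):
            return upper(s)        # free name: from the import *, or a global?
        return inner('spam')

    def outer2():
        exec "x = 1"               # exec can rebind arbitrary local names
        def inner2():
            return x               # so the compiler cannot tell where x lives
        return inner2()

With nested scopes enabled, both definitions are rejected at compile time with an error along the lines of the one quoted above.)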
There was never a followup saying "it does now". In this case, the module being imported from does _not_ use "from module import *" at all, but is a parent package. The only name referenced by the __init__ function is "ServerException", and that is a simple class. The only "import *" I can track is via the name "client", which is itself a package and does the "import *" about 3 modules deep. Any clues? Thanks, Mark. From thomas at xs4all.net Tue Feb 20 13:30:45 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Tue, 20 Feb 2001 13:30:45 +0100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: ; from MarkH@ActiveState.com on Tue, Feb 20, 2001 at 11:12:23PM +1100 References: Message-ID: <20010220133045.C13911@xs4all.nl> On Tue, Feb 20, 2001 at 11:12:23PM +1100, Mark Hammond wrote: > Hi all, > I'm a little confused by the following exception: > File "f:\src\python-cvs\xpcom\server\policy.py", line 18, in ? > from xpcom import xpcom_consts, _xpcom, client, nsError, > ServerException, COMException > exceptions.SyntaxError: BuildInterfaceInfo: exec or 'import *' makes names > ambiguous in nested scope (__init__.py, line 71) [ However, no 'from foo import *' to be found, except at module level ] > Any clues? I don't have the xpcom package, so I can't check myself, but have you considered 'exec' as well as 'from foo import *' ? -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From MarkH at ActiveState.com Tue Feb 20 13:42:09 2001 From: MarkH at ActiveState.com (Mark Hammond) Date: Tue, 20 Feb 2001 23:42:09 +1100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <20010220133045.C13911@xs4all.nl> Message-ID: [Thomas] > I don't have the xpcom package, so I can't check myself, As of the last 24 hours, it sits in the Mozilla CVS tree - extensions/python/xpcom :) > but have you considered 'exec' as well as 'from foo import *' ? exec appears exactly once, in a function in the "client" sub-package. Mark. From jeremy at alum.mit.edu Tue Feb 20 15:48:41 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 20 Feb 2001 09:48:41 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: References: <20010220133045.C13911@xs4all.nl> Message-ID: <14994.33737.132255.466570@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "MH" == Mark Hammond writes: MH> [Thomas] >> I don't have the xpcom package, so I can't check myself, MH> As of the last 24 hours, it sits in the Mozilla CVS tree - MH> extensions/python/xpcom :) Don't know where to find that :-) >> but have you considered 'exec' as well as 'from foo import *' ? MH> exec appears exactly once, in a function in the "client" MH> sub-package. Does the function that contains the exec also contain another function or lambda? If it does and the contained function has references to non-local variables, the compiler will complain. The exception should include the line number of the first line of the function body that contains the import * or exec. Jeremy From guido at digicool.com Tue Feb 20 16:03:59 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 20 Feb 2001 10:03:59 -0500 Subject: [Python-Dev] Demos are out of Date: Requesting Permission to Change In-Reply-To: Your message of "Tue, 20 Feb 2001 07:43:41 +0200." 
<20010220054341.C4A93A840@darjeeling.zadka.site.co.il> References: <20010220054341.C4A93A840@darjeeling.zadka.site.co.il> Message-ID: <200102201503.KAA28281@cj20424-a.reston1.va.home.com> > Random example: > > Demo/scripts/pi.py: > # Use int(d) to avoid a trailing L after each digit > > Would anyone have a problem if I just went and checked in updates > to the demos? Putting it as a patch on SF seems like needless beuracracy. Sure, go ahead. I've fixed your subject: I stared puzzledly at "Demos are out of Data" for quite a while before I realized you meant out of date! --Guido van Rossum (home page: http://www.python.org/~guido/) From barry at digicool.com Tue Feb 20 17:05:15 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Tue, 20 Feb 2001 11:05:15 -0500 Subject: [Python-Dev] Demo/embed/import.c References: <20010219112022.9721F371690@snelboot.oratrix.nl> Message-ID: <14994.38331.347106.734329@anthem.wooz.org> >>>>> "JJ" == Jack Jansen writes: JJ> Can I request that the new file Demo/embed/import.c be JJ> renamed? The name clashes with the import.c we all know and JJ> love, and the setup of things under CodeWarrior on the Mac is JJ> such that it will search for sourcefiles recursively from the JJ> root of the Python sourcefolder. JJ> I can fix this, of course, but it's a lot of work... I'll fix this, but I'm not going to preserve the CVS history. 1) the file is too new to have any significant history, 2) doing the repository dance on SF sucks. I'll call the file importexc.c since it imports exceptions. -Barry From barry at digicool.com Tue Feb 20 18:49:49 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Tue, 20 Feb 2001 12:49:49 -0500 Subject: [Python-Dev] Demo/embed/import.c References: <20010219112022.9721F371690@snelboot.oratrix.nl> <14994.38331.347106.734329@anthem.wooz.org> Message-ID: <14994.44605.599157.471020@anthem.wooz.org> >>>>> "BAW" == Barry A Warsaw writes: BAW> I'll fix this, but I'm not going to preserve the CVS history. BAW> 1) the file is too new to have any significant history, 2) BAW> doing the repository dance on SF sucks. BAW> I'll call the file importexc.c since it imports exceptions. I fixed this, but some of the programs now core dump. I need to cvs update and rebuild everything and then figure out why it's coring. Then I'll check things in. -Barry From barry at digicool.com Tue Feb 20 21:22:32 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Tue, 20 Feb 2001 15:22:32 -0500 Subject: [Python-Dev] Update to PEP 232 Message-ID: <14994.53768.767065.272158@anthem.wooz.org> After some internal discussions amongst the Pythonlabbers, we've had to make some updates to PEP 232, Function Attributes. Attached is the complete current PEP draft, also available at http://python.sourceforge.net/peps/pep-0232.html The PEP has been moved back to Draft status, but will be Accepted and Finalized for Python 2.1. It will also be propagated forward for Python 2.2 for the next step in implementation. -Barry -------------------- snip snip -------------------- PEP: 232 Title: Function Attributes Version: $Revision: 1.6 $ Author: barry at digicool.com (Barry A. Warsaw) Status: Draft Type: Standards Track Created: 02-Dec-2000 Python-Version: 2.1 / 2.2 Post-History: 20-Feb-2001 Introduction This PEP describes an extension to Python, adding attribute dictionaries to functions and methods. This PEP tracks the status and ownership of this feature. It contains a description of the feature and outlines changes necessary to support the feature. 
This PEP summarizes discussions held in mailing list forums, and provides URLs for further information, where appropriate. The CVS revision history of this file contains the definitive historical record. Background Functions already have a number of attributes, some of which are writable, e.g. func_doc, a.k.a. func.__doc__. func_doc has the interesting property that there is special syntax in function (and method) definitions for implicitly setting the attribute. This convenience has been exploited over and over again, overloading docstrings with additional semantics. For example, John Aycock has written a system where docstrings are used to define parsing rules[1]. Zope's ZPublisher ORB[2] uses docstrings to signal "publishable" methods, i.e. methods that can be called through the web. And Tim Peters has developed a system called doctest[3], where docstrings actually contain unit tests. The problem with this approach is that the overloaded semantics may conflict with each other. For example, if we wanted to add a doctest unit test to a Zope method that should not be publishable through the web. Proposal This proposal adds a new dictionary to function objects, called func_dict (a.k.a. __dict__). This dictionary can be set and get using ordinary attribute set and get syntax. Methods also gain `getter' syntax, and they currently access the attribute through the dictionary of the underlying function object. It is not possible to set attributes on bound or unbound methods, except by doing so explicitly on the underlying function object. See the `Future Directions' discussion below for approaches in subsequent versions of Python. A function object's __dict__ can also be set, but only to a dictionary object (i.e. setting __dict__ to UserDict raises a TypeError). Examples Here are some examples of what you can do with this feature. def a(): pass a.publish = 1 a.unittest = '''...''' if a.publish: print a() if hasattr(a, 'unittest'): testframework.execute(a.unittest) class C: def a(self): 'just a docstring' a.publish = 1 c = C() if c.a.publish: publish(c.a()) Other Uses Paul Prescod enumerated a bunch of other uses: http://mail.python.org/pipermail/python-dev/2000-April/003364.html Future Directions - A previous version of this PEP (and the accompanying implementation) allowed for both setter and getter of attributes on unbound methods, and only getter on bound methods. A number of problems were discovered with this policy. Because method attributes were stored in the underlying function, this caused several potentially surprising results: class C: def a(self): pass c1 = C() c2 = C() c1.a.publish = 1 # c2.a.publish would now be == 1 also! Because a change to `a' bound c1 also caused a change to `a' bound to c2, setting of attributes on bound methods was disallowed. However, even allowing setting of attributes on unbound methods has its ambiguities: class D(C): pass class E(C): pass D.a.publish = 1 # E.a.publish would now be == 1 also! For this reason, the current PEP disallows setting attributes on either bound or unbound methods, but does allow for getting attributes on either -- both return the attribute value on the underlying function object. The proposal for Python 2.2 is to implement setting (bound or unbound) method attributes by setting attributes on the instance or class, using special naming conventions. I.e. 
class C: def a(self): pass C.a.publish = 1 C.__a_publish__ == 1 # true c = C() c.a.publish = 2 c.__a_publish__ == 2 # true d = C() d.__a_publish__ == 1 # true Here, a lookup on the instance would look to the instance's dictionary first, followed by a lookup on the class's dictionary, and finally a lookup on the function object's dictionary. - Currently, Python supports function attributes only on Python functions (i.e. those that are written in Python, not those that are built-in). Should it be worthwhile, a separate patch can be crafted that will add function attributes to built-ins. - __doc__ is the only function attribute that currently has syntactic support for conveniently setting. It may be worthwhile to eventually enhance the language for supporting easy function attribute setting. Here are some syntaxes suggested by PEP reviewers: def a { 'publish' : 1, 'unittest': '''...''', } (args): # ... def a(args): """The usual docstring.""" {'publish' : 1, 'unittest': '''...''', # etc. } It isn't currently clear if special syntax is necessary or desirable. Dissenting Opinion When this was discussed on the python-dev mailing list in April 2000, a number of dissenting opinions were voiced. For completeness, the discussion thread starts here: http://mail.python.org/pipermail/python-dev/2000-April/003361.html The dissenting arguments appear to fall under the following categories: - no clear purpose (what does it buy you?) - other ways to do it (e.g. mappings as class attributes) - useless until syntactic support is included Countering some of these arguments is the observation that with vanilla Python 2.0, __doc__ can in fact be set to any type of object, so some semblance of writable function attributes are already feasible. But that approach is yet another corruption of __doc__. And while it is of course possible to add mappings to class objects (or in the case of function attributes, to the function's module), it is more difficult and less obvious how to extract the attribute values for inspection. Finally, it may be desirable to add syntactic support, much the same way that __doc__ syntactic support exists. This can be considered separately from the ability to actually set and get function attributes. Reference Implementation The reference implementation is available on SourceForge as a patch against the Python CVS tree (patch #103123). This patch doesn't include the regrtest module and output file. Those are available upon request. http://sourceforge.net/patch/?func=detailpatch&patch_id=103123&group_id=5470 This patch has been applied and will become part of Python 2.1. References [1] Aycock, "Compiling Little Languages in Python", http://www.foretec.com/python/workshops/1998-11/proceedings/papers/aycock-little/aycock-little.html [2] http://classic.zope.org:8080/Documentation/Reference/ORB [3] ftp://ftp.python.org/pub/python/contrib-09-Dec-1999/System/doctest.py Copyright This document has been placed in the Public Domain. Local Variables: mode: indented-text indent-tabs-mode: nil End: From barry at digicool.com Tue Feb 20 21:58:43 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Tue, 20 Feb 2001 15:58:43 -0500 Subject: [Python-Dev] Embedding demos are broken Message-ID: <14994.55939.514084.356997@anthem.wooz.org> Something changed recently, and now the Demo/embed programs are broken, e.g. 
% ./loop pass 2 Could not find platform independent libraries Could not find platform dependent libraries Consider setting $PYTHONHOME to [: ] 'import site' failed; use -v for traceback Segmentation fault (core dumped) The crash is happening in the second call to init_exceptions() (gdb) where #0 PyModule_GetDict (m=0x0) at Objects/moduleobject.c:40 #1 0x8075ea8 in init_exceptions () at Python/exceptions.c:1058 #2 0x8051880 in Py_Initialize () at Python/pythonrun.c:147 #3 0x80516db in main (argc=3, argv=0xbffffa34) at loop.c:28 because the attempt to import __builtin__ returns NULL. I don't have time right now to look any deeper, but I suspect that the crash may be due to changes in the semantics of PyImport_ImportModule() which now goes through __import__. I'm posting this in case someone with spare cycles can look at it. -Barry From guido at digicool.com Tue Feb 20 22:40:07 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 20 Feb 2001 16:40:07 -0500 Subject: [Python-Dev] Embedding demos are broken In-Reply-To: Your message of "Tue, 20 Feb 2001 15:58:43 EST." <14994.55939.514084.356997@anthem.wooz.org> References: <14994.55939.514084.356997@anthem.wooz.org> Message-ID: <200102202140.QAA06446@cj20424-a.reston1.va.home.com> > Something changed recently, and now the Demo/embed programs are > broken, e.g. > > % ./loop pass 2 > Could not find platform independent libraries > Could not find platform dependent libraries > Consider setting $PYTHONHOME to [: ] > 'import site' failed; use -v for traceback > Segmentation fault (core dumped) > > The crash is happening in the second call to init_exceptions() > > (gdb) where > #0 PyModule_GetDict (m=0x0) at Objects/moduleobject.c:40 > #1 0x8075ea8 in init_exceptions () at Python/exceptions.c:1058 > #2 0x8051880 in Py_Initialize () at Python/pythonrun.c:147 > #3 0x80516db in main (argc=3, argv=0xbffffa34) at loop.c:28 > > because the attempt to import __builtin__ returns NULL. I don't have > time right now to look any deeper, but I suspect that the crash may be > due to changes in the semantics of PyImport_ImportModule() which now > goes through __import__. > > I'm posting this in case someone with spare cycles can look at it. > > -Barry This was probably broken since PyImport_Import() was introduced in 1997! The code in PyImport_Import() tried to save itself a bit of work and save the __builtin__ module in a static variable. But this doesn't work across Py_Finalise()/Py_Initialize()! It also doesn't work when using multiple interpreter states created with PyInterpreterState_New(). So I'm ripping out this code. Looks like it's passing the test suite so I'm checking in the patch. It looks like we need a much more serious test suite for multiple interpreters and repeatedly initializing! --Guido van Rossum (home page: http://www.python.org/~guido/) From barry at digicool.com Tue Feb 20 22:55:58 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Tue, 20 Feb 2001 16:55:58 -0500 Subject: [Python-Dev] Embedding demos are broken References: <14994.55939.514084.356997@anthem.wooz.org> <200102202140.QAA06446@cj20424-a.reston1.va.home.com> Message-ID: <14994.59374.979694.249817@anthem.wooz.org> >>>>> "GvR" == Guido van Rossum writes: GvR> This was probably broken since PyImport_Import() was GvR> introduced in 1997! Odd. It all worked the last time I touched those files a couple of weeks ago (when I was testing those progs against Insure). I'll do a CVS update and check again. Thanks. 
-Barry From guido at digicool.com Tue Feb 20 23:03:46 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 20 Feb 2001 17:03:46 -0500 Subject: [Python-Dev] Embedding demos are broken In-Reply-To: Your message of "Tue, 20 Feb 2001 16:55:58 EST." <14994.59374.979694.249817@anthem.wooz.org> References: <14994.55939.514084.356997@anthem.wooz.org> <200102202140.QAA06446@cj20424-a.reston1.va.home.com> <14994.59374.979694.249817@anthem.wooz.org> Message-ID: <200102202203.RAA06667@cj20424-a.reston1.va.home.com> > >>>>> "GvR" == Guido van Rossum writes: > > GvR> This was probably broken since PyImport_Import() was > GvR> introduced in 1997! > > Odd. It all worked the last time I touched those files a couple of > weeks ago (when I was testing those progs against Insure). That's because then PyImport_ImportModule() wasn't synonymous with PyImport_Import(). > I'll do a CVS update and check again. Thanks. I'm sure it'll work. --Guido van Rossum (home page: http://www.python.org/~guido/) From barry at digicool.com Tue Feb 20 23:11:57 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Tue, 20 Feb 2001 17:11:57 -0500 Subject: [Python-Dev] Embedding demos are broken References: <14994.55939.514084.356997@anthem.wooz.org> <200102202140.QAA06446@cj20424-a.reston1.va.home.com> <14994.59374.979694.249817@anthem.wooz.org> Message-ID: <14994.60333.915783.456876@anthem.wooz.org> >>>>> "BAW" == Barry A Warsaw writes: BAW> I'll do a CVS update and check again. Thanks. Works now, thanks. From MarkH at ActiveState.com Tue Feb 20 23:44:28 2001 From: MarkH at ActiveState.com (Mark Hammond) Date: Wed, 21 Feb 2001 09:44:28 +1100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <14994.33737.132255.466570@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: > MH> As of the last 24 hours, it sits in the Mozilla CVS tree - > MH> extensions/python/xpcom :) > > Don't know where to find that :-) I could tell you if you like :) > >> but have you considered 'exec' as well as 'from foo import *' ? > > MH> exec appears exactly once, in a function in the "client" > MH> sub-package. > > Does the function that contains the exec also contain another function > or lambda? If it does and the contained function has references to > non-local variables, the compiler will complain. It appears this is the problem. The fact that only "__init__.py" was listed threw me - I have a few of them :) *sigh* - this is a real shame. IMO, we can't continue to break existing code, even if it is good for me! People are going to get mighty annoyed - I am. And if people on python-dev struggle with some of the new errors, the poor normal users are going to feel even more alienated. Mark. From guido at digicool.com Tue Feb 20 23:54:54 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 20 Feb 2001 17:54:54 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: Your message of "Wed, 21 Feb 2001 09:44:28 +1100." References: Message-ID: <200102202254.RAA07487@cj20424-a.reston1.va.home.com> > > Does the function that contains the exec also contain another function > > or lambda? If it does and the contained function has references to > > non-local variables, the compiler will complain. > > It appears this is the problem. The fact that only "__init__.py" was listed > threw me - I have a few of them :) > > *sigh* - this is a real shame. IMO, we can't continue to break existing > code, even if it is good for me! People are going to get mighty annoyed - I > am. 
And if people on python-dev struggle with some of the new errors, the > poor normal users are going to feel even more alienated. Sigh indeed. We could narrow it down to only raise the error if there are nested functions or lambdas that don't reference free variables, but unfortunately most of them will reference at least some builtin e.g. str()... How about the old fallback to using straight dict lookups when this combination of features is detected? --Guido van Rossum (home page: http://www.python.org/~guido/) From pedroni at inf.ethz.ch Wed Feb 21 02:22:38 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Wed, 21 Feb 2001 02:22:38 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: <200102202254.RAA07487@cj20424-a.reston1.va.home.com> Message-ID: <006501c09ba4$c84857e0$605821c0@newmexico> Hello. > > > Does the function that contains the exec also contain another function > > > or lambda? If it does and the contained function has references to > > > non-local variables, the compiler will complain. > > > > It appears this is the problem. The fact that only "__init__.py" was listed > > threw me - I have a few of them :) > > > > *sigh* - this is a real shame. IMO, we can't continue to break existing > > code, even if it is good for me! People are going to get mighty annoyed - I > > am. And if people on python-dev struggle with some of the new errors, the > > poor normal users are going to feel even more alienated. > > Sigh indeed. We could narrow it down to only raise the error if there > are nested functions or lambdas that don't reference free variables, > but unfortunately most of them will reference at least some builtin > e.g. str()... > > How about the old fallback to using straight dict lookups when this > combination of features is detected? I'm posting an opinion on this subject because I'm implementing nested scopes in jython. It seems that we really should avoid breaking code using import * and exec, and to obtain this - I agree - the only way is to fall back to some straight dictionary lookup, when both import or exec and nested scopes are there But doing this AFAIK related to actual python nested scope impl and what I'm doing on jython side is quite messy, because we will need to keep around "chained" closures as entire dictionaries, because we don't know if an exec or import will hide some variable from an outer level, or add a new variable that then cannot be interpreted as a global one in nested scopes. This is IMO too much heavyweight. Another way is to use special rules (similar to those for class defs), e.g. having y=3 def f(): exec "y=2" def g(): return y return g() print f() # print 3. Is that confusing for users? maybe they will more naturally expect 2 as outcome (given nested scopes). The last possibility (but I know this one has been somehow discarded) is to have scoping only if explicitly declared; I imagine something like y=3 def f(): let y exec "y=2" def g(): return y return g() print f() # print 2. Issues with this: - with implicit scoping we naturally obtain that nested func defs can call themself recursively: * we can require a let for this too * we can introduce "horrible" things like 'defrec' or 'deflet' * we can have def imply a let: this breaks def get_str(): def str(v): return "str: "+str(v) return str but nested scopes as actually implemented already break that. - with this approach inner scopes can change the value of outer scope vars: this was considered a non-feature... - what's the gain with this approach? 
if we consider code like this: def f(str): # eg str = "y=z" from foo import * def g(): exec str return y return g without explicit 'let' decls if we want to compile this and not just say "you can't do that" the closure of g should be constructed out of the entire runtime namespace of f. With explicit 'let's in this case we would produce just the old code and semantic. If some 'let' would be added to f, we would know what part of the namespace of f should be used to construct the closure of g. In absence of import* and exec we could use the current fast approach to implement nested scopes, if they are there we would know what vars should be stored in cells and passed down to inner scopes. [We could have special locals dicts that can contain direct values or cells, and that would do the right indirect get and set for the cell-case. These dict could also be possibly returned by "locals()" and that would be the way to implement exec "spam", just equivalently as exec "spam" in globals(),locals(). import * would have just the assignement semantic. ] Very likely I'm missing something, but from my "external" viewpoint I would have preferred such solution. IMO maybe it would be good to think about this, because differently as expected implicit scoping has consequences that we would better avoid. Is too late for that (having feature freeze)? regards, Samuele Pedroni. From skip at mojam.com Wed Feb 21 03:00:42 2001 From: skip at mojam.com (Skip Montanaro) Date: Tue, 20 Feb 2001 20:00:42 -0600 (CST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <200102202254.RAA07487@cj20424-a.reston1.va.home.com> References: <200102202254.RAA07487@cj20424-a.reston1.va.home.com> Message-ID: <14995.8522.253084.230222@beluga.mojam.com> Guido> Sigh indeed.... Guido> How about the old fallback to using straight dict lookups when Guido> this combination of features is detected? This probably won't be a very popular suggestion, but how about pulling nested scopes (I assume they are at the root of the problem) until this can be solved cleanly? Skip From guido at digicool.com Wed Feb 21 03:53:03 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 20 Feb 2001 21:53:03 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: Your message of "Wed, 21 Feb 2001 02:22:38 +0100." <006501c09ba4$c84857e0$605821c0@newmexico> References: <200102202254.RAA07487@cj20424-a.reston1.va.home.com> <006501c09ba4$c84857e0$605821c0@newmexico> Message-ID: <200102210253.VAA08462@cj20424-a.reston1.va.home.com> > > How about the old fallback to using straight dict lookups when this > > combination of features is detected? > > I'm posting an opinion on this subject because I'm implementing > nested scopes in jython. > > It seems that we really should avoid breaking code using import * > and exec, and to obtain this - I agree - the only way is to fall > back to some straight dictionary lookup, when both import or exec > and nested scopes are there > > But doing this AFAIK related to actual python nested scope impl and > what I'm doing on jython side is quite messy, because we will need > to keep around "chained" closures as entire dictionaries, because we > don't know if an exec or import will hide some variable from an > outer level, or add a new variable that then cannot be interpreted > as a global one in nested scopes. This is IMO too much heavyweight. > > Another way is to use special rules > (similar to those for class defs), e.g. 
having > > > y=3 > def f(): > exec "y=2" > def g(): > return y > return g() > > print f() > > > # print 3. > > Is that confusing for users? maybe they will more naturally expect 2 > as outcome (given nested scopes). This seems the best compromise to me. It will lead to the least broken code, because this is the behavior that we had before nested scopes! It is also quite easy to implement given the current implementation, I believe. Maybe we could introduce a warning rather than an error for this situation though, because even if this behavior is clearly documented, it will still be confusing to some, so it is better if we outlaw it in some future version. --Guido van Rossum (home page: http://www.python.org/~guido/) From MarkH at ActiveState.com Wed Feb 21 03:58:18 2001 From: MarkH at ActiveState.com (Mark Hammond) Date: Wed, 21 Feb 2001 13:58:18 +1100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <14995.8522.253084.230222@beluga.mojam.com> Message-ID: > This probably won't be a very popular suggestion, but how about pulling > nested scopes (I assume they are at the root of the problem) > until this can be solved cleanly? Agreed. While I think nested scopes are kinda cool, I have lived without them, and really without missing them, for years. At the moment the cure appears worse then the symptoms in at least a few cases. If nothing else, it compromises the elegant simplicity of Python that drew me here in the first place! Assuming that people really _do_ want this feature, IMO the bar should be raised so there are _zero_ backward compatibility issues. Mark. From MarkH at ActiveState.com Wed Feb 21 04:08:01 2001 From: MarkH at ActiveState.com (Mark Hammond) Date: Wed, 21 Feb 2001 14:08:01 +1100 Subject: [Python-Dev] Modulefinder? In-Reply-To: <00a401c09a7a$6d2060e0$e000a8c0@thomasnotebook> Message-ID: [Thomas H] > That's not what I wanted to hear ;-), but anyway, since you > answered, I assume you have something to do with it. I stuck my finger in it once :) > 1. Modulefinder does not handle cases where packages export names > referring to functions or variables, rather than modules. > Maybe the scan_code method, which looks for IMPORT opcode, > could be extended to handle STORE_NAME opcodes which are not > preceeded by IMPORT opcodes. > > 2. Modulefinder uses imp.find_module to find modules, and > partly catches ImportErrors. imp.find_module can also > raise NameErrors on windows, if the case does not fit. > They should be catched. They both sound fine to me. > 3. Weird idea (?): Modulefinder could try instead of only > scanning the opcodes to actually _import_ modules (at least > extension modules, otherwise it will not find _any_ dependencies). There was some reluctance to do this for freeze, and hence Modulefinder was born. I agree it may make sense in some cases to do this, but it shouldn't be a default action. Mark. From akuchlin at cnri.reston.va.us Wed Feb 21 04:29:36 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Tue, 20 Feb 2001 22:29:36 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: ; from MarkH@ActiveState.com on Wed, Feb 21, 2001 at 01:58:18PM +1100 References: <14995.8522.253084.230222@beluga.mojam.com> Message-ID: <20010220222936.A2477@newcnri.cnri.reston.va.us> On Wed, Feb 21, 2001 at 01:58:18PM +1100, Mark Hammond wrote: >Assuming that people really _do_ want this feature, IMO the bar should be >raised so there are _zero_ backward compatibility issues. 
Even at the cost of additional implementation complexity? At the cost of having to learn "scopes are nested, unless you do these two things in which case they're not"? Let's not waffle. If nested scopes are worth doing, they're worth breaking code. Either leave exec and from..import illegal, or back out nested scopes, or think of some better solution, but let's not introduce complicated backward compatibility hacks. --amk From MarkH at ActiveState.com Wed Feb 21 05:11:46 2001 From: MarkH at ActiveState.com (Mark Hammond) Date: Wed, 21 Feb 2001 15:11:46 +1100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <20010220222936.A2477@newcnri.cnri.reston.va.us> Message-ID: > Even at the cost of additional implementation complexity? I can only assume you are serious. IMO, absolutely! > Let's not waffle. Agreed. IMO we are starting to waffle the minute we ignore backwards compatibility. If a new feature can't be added without breaking code that was not previously documented as illegal, then IMO it is simply a non-starter until Py3k. Indeed, I seem to recall many worthwhile features being added to the Py3k bit-bucket for exactly that reason. Mark. From jeremy at alum.mit.edu Wed Feb 21 05:22:16 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 20 Feb 2001 23:22:16 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <14995.8522.253084.230222@beluga.mojam.com> References: <200102202254.RAA07487@cj20424-a.reston1.va.home.com> <14995.8522.253084.230222@beluga.mojam.com> Message-ID: <14995.17016.98294.378337@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "SM" == Skip Montanaro writes: Guido> Sigh indeed.... It sounds like the real source of frusteration was the confusing error message. I'd rather fix the error message. Guido> How about the old fallback to using straight dict lookups Guido> when this combination of features is detected? Straight dict lookups isn't sufficient for most cases, because the question is one of whether to build a closure or not. def f(): from module import * def g(l): len(l) If len is not defined in f, then the compiler generates a LOAD_GLOBAL for len. If it is defined in f, then it creates a closure for g (MAKE_CLOSURE instead of MAKE_FUNCTION) generator a LOAD_DEREF for len. As far as I can tell, there's no trivial change that will make this work. SM> This probably won't be a very popular suggestion, but how about SM> pulling nested scopes (I assume they are at the root of the SM> problem) until this can be solved cleanly? Not popular with me <0.5 wink>, but only because I don't there this is a problem that can be "solved" cleanly. I think it's far from obvious what the code example above should do in the case where module defines the name len. Posters of c.l.py have suggested both alternatives as the logical choice: (1) import * is dynamic so the static scoping rule ignores the names it introduces, (2) Python is a late binding language so the name binding introduced by import * is used. Jeremy From jeremy at alum.mit.edu Wed Feb 21 05:24:40 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 20 Feb 2001 23:24:40 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... 
In-Reply-To: <20010220222936.A2477@newcnri.cnri.reston.va.us> References: <14995.8522.253084.230222@beluga.mojam.com> <20010220222936.A2477@newcnri.cnri.reston.va.us> Message-ID: <14995.17160.411136.109911@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "AMK" == Andrew Kuchling writes: AMK> On Wed, Feb 21, 2001 at 01:58:18PM +1100, Mark Hammond wrote: >> Assuming that people really _do_ want this feature, IMO the bar >> should be raised so there are _zero_ backward compatibility >> issues. AMK> Even at the cost of additional implementation complexity? At AMK> the cost of having to learn "scopes are nested, unless you do AMK> these two things in which case they're not"? AMK> Let's not waffle. If nested scopes are worth doing, they're AMK> worth breaking code. Either leave exec and from..import AMK> illegal, or back out nested scopes, or think of some better AMK> solution, but let's not introduce complicated backward AMK> compatibility hacks. Well said. Jeremy From jeremy at alum.mit.edu Wed Feb 21 05:28:20 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 20 Feb 2001 23:28:20 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: References: <14995.8522.253084.230222@beluga.mojam.com> Message-ID: <14995.17380.172705.843973@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "MH" == Mark Hammond writes: >> This probably won't be a very popular suggestion, but how about >> pulling nested scopes (I assume they are at the root of the >> problem) until this can be solved cleanly? MH> Agreed. While I think nested scopes are kinda cool, I have MH> lived without them, and really without missing them, for years. MH> At the moment the cure appears worse then the symptoms in at MH> least a few cases. If nothing else, it compromises the elegant MH> simplicity of Python that drew me here in the first place! Mark, I'll buy that you're suffering at the moment, but I'm not sure why. You have a lot of code that uses 'from ... import *' inside functions. If so, that's the source of the compatibility problem. If you had a tool that listed all the code that needed to be fixed and/or you got tracebacks that highlighted the offending line rather than some import, would you still be suffering? It sounds like the problem wouldn't be much harder then multi-argument append at that point. I also disagree strongly with the argument that nested scopes compromise the elegent simplicity of Python. Did you really look at Python and say, "None of those stinking scoping rules. Let me at it." I think the new rules are different, but no more or less complex than the old ones. Jeremy From MarkH at ActiveState.com Wed Feb 21 06:27:44 2001 From: MarkH at ActiveState.com (Mark Hammond) Date: Wed, 21 Feb 2001 16:27:44 +1100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <14995.17380.172705.843973@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: [Jeremy] > I'll buy that you're suffering at the moment, but I'm not sure why. I apologize if I sounded antagonistic. > You have a lot of code that uses 'from ... import *' inside > functions. If so, that's the source of the compatibility problem. > If you had a tool that listed all the code that needed to be fixed > and/or you got tracebacks that highlighted the offending line rather > than some import, would you still be suffering? The point isn't about my suffering as such. The point is more that python-dev owns a tiny amount of the code out there, and I don't believe we should put Python's users through this. 
Sure - I would be happy to "upgrade" all the win32all code, no problem. I am also happy to live in the bleeding edge and take some pain that will cause. The issue is simply the user base, and giving Python a reputation of not being able to painlessly upgrade even dot revisions. > It sounds like the > problem wouldn't be much harder then multi-argument append at that > point. Yup. I changed my code in relative silence on that issue, but believe we should not have been so hasty. Now we have warnings, I believe that would have been handled slightly differently if done today. It also had existing documentation to back it. Further, I believe that issue has contributed to a "no painless upgrade" perception already existing in some people's minds. > I also disagree strongly with the argument that nested scopes > compromise the elegent simplicity of Python. Did you really look at > Python and say, "None of those stinking scoping rules. Let me at it." > I think the new rules are different, but no more or less > complex than the old ones. exec and eval take 2 dicts - there were 2 namespaces. I certainly have missed nested scopes, but instead of "let me at it", I smiled at the elegance and simplicity it buys me. I also didn't have to worry about "namespace clashes", and obscure rules. I wrote code the way I saw fit at the time, and didn't have to think about scoping rules. Even if we ignore existing code breaking, it is almost certain I would have coded the function the same way, got the syntax error, tried to work out exactly what it was complaining about, and adjust my code accordingly. Python is generally bereft of such rules, and all the more attractive for it. So I am afraid my perception remains. That said, I am not against nested scopes as Itrust the judgement of people smarter than I. However, I am against code breakage that is somehow "good for me", and suspect many other Python users are too. Just-one-more-reason-why-I-aint-the-BDFL- ly, Mark. Mark. From thomas at xs4all.net Wed Feb 21 07:47:10 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Wed, 21 Feb 2001 07:47:10 +0100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <20010220222936.A2477@newcnri.cnri.reston.va.us>; from akuchlin@cnri.reston.va.us on Tue, Feb 20, 2001 at 10:29:36PM -0500 References: <14995.8522.253084.230222@beluga.mojam.com> <20010220222936.A2477@newcnri.cnri.reston.va.us> Message-ID: <20010221074710.E13911@xs4all.nl> On Tue, Feb 20, 2001 at 10:29:36PM -0500, Andrew Kuchling wrote: > Let's not waffle. If nested scopes are worth doing, they're worth > breaking code. I'm sorry, but that's bull -- I mean, I disagree completely. Nested scopes *are* a nice feature, but if we can't do them without breaking code in weird ways, we shouldn't, or at least *not yet*. I am still uneasy by the restrictions seemingly created just to facilitate the implementation issues of nested scopes, but I could live with them if they had been generating warnings at least one release, preferably more. I'm probably more conservative than most people here, in that aspect, but I believe I am right in it ;) Consider the average Joe User attempting to upgrade. He has to decide whether any of his scripts suffer from the upgrade, and then has to figure out how to fix them. In a case like Mark had, he is very likely to just give up and not upgrade, cursing Python while he's doing it. 
Now consider a site admin (which I happen to be,) who has to make that decision for all the people on the site -- which can be tens of thousands of people. There is no way he is going to test all scripts, he is lucky to know who even *uses* Python. He can probably live with a clean error that is an obvious fix; that's part of upgrading. Some weird error that doesn't point to a fix, and a weird, inconsequential fix in the first place isn't going to make him confident in upgrading. Now consider a distribution maintainer, who has to make that decision for potentially millions, many of which are site maintainers. He is not a happy camper. I was annoyed by the socket.socket() change in 2.0, but at least we could pretend 1.6 was a real release and that there was a lot of advance warning. In this case, however, we had several instances of the 'bug' in the standard library itself, which a lot of people use as code examples. I have yet to see a book or tutorial that lists from-foo-import-* in a local scope as illegal, and I have yet to see *anything* that lists 'exec' (not 'in' something) in a local scope as illegal. Nevertheless, those two will seem to be breaking the code now. > Either leave exec and from..import illegal, or back > out nested scopes, or think of some better solution, but let's not > introduce complicated backward compatibility hacks. We already *have* complicated backward compatibility hacks, though they are masked as optimizations now. from-foo-import-* and exec are legal in a function scope as long as you don't have a nested scope that references a non-local name. -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From pedroni at inf.ethz.ch Wed Feb 21 15:46:40 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Wed, 21 Feb 2001 15:46:40 +0100 (MET) Subject: [Python-Dev] Those import related syntax errors again... Message-ID: <200102211446.PAA07183@core.inf.ethz.ch> Hi. [Mark Hammond] > The point isn't about my suffering as such. The point is more that > python-dev owns a tiny amount of the code out there, and I don't believe we > should put Python's users through this. > > Sure - I would be happy to "upgrade" all the win32all code, no problem. I > am also happy to live in the bleeding edge and take some pain that will > cause. > > The issue is simply the user base, and giving Python a reputation of not > being able to painlessly upgrade even dot revisions. I agree with all this. [As I imagined explicit syntax did not catch up and would require lot of discussions.] [GvR] > > Another way is to use special rules > > (similar to those for class defs), e.g. having > > > > > > y=3 > > def f(): > > exec "y=2" > > def g(): > > return y > > return g() > > > > print f() > > > > > > # print 3. > > > > Is that confusing for users? maybe they will more naturally expect 2 > > as outcome (given nested scopes). > > This seems the best compromise to me. It will lead to the least > broken code, because this is the behavior that we had before nested > scopes! It is also quite easy to implement given the current > implementation, I believe. > > Maybe we could introduce a warning rather than an error for this > situation though, because even if this behavior is clearly documented, > it will still be confusing to some, so it is better if we outlaw it in > some future version. > Yes this can be easy to implement but more confusing situations can arise: y=3 def f(): y=9 exec "y=2" def g(): return y return y,g() print f() What should this print? 
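Laid out, with the competing readings marked (a sketch; the annotations are the readings being argued over, not settled behavior):

    y = 3
    def f():
        y = 9
        exec "y=2"        # rebinds f's local y at run time
        def g():
            return y      # old rules: the global y (3)
                          # nested scopes: f's y -- but the 9 the compiler saw,
                          # or the 2 the exec wrote?
        return y, g()
    print f()             # Python 2.0 prints (2, 3); under nested scopes there is no obvious right answer
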
the situation leads not to a canonical solution as class def scopes. or def f(): from foo import * def g(): return y return g() print f() [Mark Hammond] > > This probably won't be a very popular suggestion, but how about pulling > > nested scopes (I assume they are at the root of the problem) > > until this can be solved cleanly? > > Agreed. While I think nested scopes are kinda cool, I have lived without > them, and really without missing them, for years. At the moment the cure > appears worse then the symptoms in at least a few cases. If nothing else, > it compromises the elegant simplicity of Python that drew me here in the > first place! > > Assuming that people really _do_ want this feature, IMO the bar should be > raised so there are _zero_ backward compatibility issues. I don't say anything about pulling nested scopes (I don't think my opinion can change things in this respect) but I should insist that without explicit syntax IMO raising the bar has a too high impl cost (both performance and complexity) or creates confusion. [Andrew Kuchling] > >Assuming that people really _do_ want this feature, IMO the bar should be > >raised so there are _zero_ backward compatibility issues. > > Even at the cost of additional implementation complexity? At the cost > of having to learn "scopes are nested, unless you do these two things > in which case they're not"? > > Let's not waffle. If nested scopes are worth doing, they're worth > breaking code. Either leave exec and from..import illegal, or back > out nested scopes, or think of some better solution, but let's not > introduce complicated backward compatibility hacks. IMO breaking code would be ok if we issue warnings today and implement nested scopes issuing errors tomorrow. But this is simply a statement about principles and raised impression. IMO import * in an inner scope should end up being an error, not sure about 'exec's. We will need a final BDFL statement. regards, Samuele Pedroni. From fredrik at pythonware.com Wed Feb 21 08:48:51 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Wed, 21 Feb 2001 08:48:51 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: Message-ID: <019001c09bda$ffb6f4d0$e46940d5@hagrid> mark wrote: > Agreed. While I think nested scopes are kinda cool, I have lived without > them, and really without missing them, for years. in addition, it breaks existing code, all existing books, and several tools. doesn't sound like it really belongs in a X.1 release... maybe it should be ifdef'ed out, and not switched on by default until we reach 3.0? Cheers /F From jeremy at alum.mit.edu Wed Feb 21 15:56:40 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 09:56:40 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <20010221074710.E13911@xs4all.nl> References: <14995.8522.253084.230222@beluga.mojam.com> <20010220222936.A2477@newcnri.cnri.reston.va.us> <20010221074710.E13911@xs4all.nl> Message-ID: <14995.55080.928806.56317@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "TW" == Thomas Wouters writes: TW> On Tue, Feb 20, 2001 at 10:29:36PM -0500, Andrew Kuchling wrote: >> Let's not waffle. If nested scopes are worth doing, they're >> worth breaking code. TW> I'm sorry, but that's bull -- I mean, I disagree TW> completely. Nested scopes *are* a nice feature, but if we can't TW> do them without breaking code in weird ways, we shouldn't, or at TW> least *not yet*. 
I am still uneasy by the restrictions seemingly TW> created just to facilitate the implementation issues of nested TW> scopes, but I could live with them if they had been generating TW> warnings at least one release, preferably more. A note of clarification seems important here: The restrictions are not being introduced to simplify the implementation. They're being introduced because there is no sensible meaning for code that uses import * and nested scopes with free variables. There are two possible meanings, each plausible and neither satisfying. Jeremy From jeremy at alum.mit.edu Wed Feb 21 16:01:07 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 10:01:07 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <019001c09bda$ffb6f4d0$e46940d5@hagrid> References: <019001c09bda$ffb6f4d0$e46940d5@hagrid> Message-ID: <14995.55347.92892.762336@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "FL" == Fredrik Lundh writes: FL> doesn't sound like it really belongs in a X.1 release... So if we called the next release Python 3.0, it would be okay? it's-only-for-marketing-reasons-that-we-have-2.0-ly y'rs, Jeremy From jack at oratrix.nl Wed Feb 21 16:06:34 2001 From: jack at oratrix.nl (Jack Jansen) Date: Wed, 21 Feb 2001 16:06:34 +0100 Subject: [Python-Dev] Strange import behaviour, recently introduced Message-ID: <20010221150634.AB6ED371690@snelboot.oratrix.nl> I'm running into strange problems with import in frozen Mac programs. On the Mac a program is frozen in a rather different way from how it happens on Unix/Windows: basically all .pyc files are stuffed into resources, and if the import code comes across a file on sys.path it will look for PYC resources in that file. So, you freeze a program by stuffing all your modules into the interpreter executable as PYC resources and setting sys.path to contain only the executable file, basically. This week I noticed that these resource imports have suddenly become very very slow. Whereas startup time of my application used to be around 2 seconds (where the non-frozen version took 6 seconds) it now takes almost 20 times as long. The non-frozen version still takes 6 seconds. I suspect this may have something to do with recent mods to the import code, but attempts to pinpoint the problem have failed so far (somehow the profiler crashes my app). I've put a breakpoint at import.c:check_case(), and it isn't hit (as is to be expected), so that isn't the problem. Does anyone have a hint for where I could start looking? -- Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++ Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++ www.oratrix.nl/~jack | ++++ see http://www.xs4all.nl/~tank/ ++++ From pedroni at inf.ethz.ch Wed Feb 21 16:10:26 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Wed, 21 Feb 2001 16:10:26 +0100 (MET) Subject: [Python-Dev] Those import related syntax errors again... Message-ID: <200102211510.QAA07814@core.inf.ethz.ch> This is becoming too much politics. > > >>>>> "TW" == Thomas Wouters writes: > > TW> On Tue, Feb 20, 2001 at 10:29:36PM -0500, Andrew Kuchling wrote: > >> Let's not waffle. If nested scopes are worth doing, they're > >> worth breaking code. > > TW> I'm sorry, but that's bull -- I mean, I disagree > TW> completely. Nested scopes *are* a nice feature, but if we can't > TW> do them without breaking code in weird ways, we shouldn't, or at > TW> least *not yet*. 
I am still uneasy by the restrictions seemingly > TW> created just to facilitate the implementation issues of nested > TW> scopes, but I could live with them if they had been generating > TW> warnings at least one release, preferably more. > > A note of clarification seems important here: The restrictions are > not being introduced to simplify the implementation. They're being > introduced because there is no sensible meaning for code that uses > import * and nested scopes with free variables. There are two > possible meanings, each plausible and neither satisfying. > I think that y=3 def f(): exec "y=2" def g() return y return g() with f() returning 2 would make sense (given python dynamic nature). But it is not clear if we can reach consensus on the this or another semantic. (Implementing this would be ugly, but this is not the point). On the other hand just saying that new feature X make code Y (previously valid) meaningless and so the unique solution is to discard Y as garbage, is something that cannot be sold for cheap. I have the feeling that this is the *point*. regards, Samuele Pedroni. From tony at lsl.co.uk Wed Feb 21 11:06:34 2001 From: tony at lsl.co.uk (Tony J Ibbs (Tibs)) Date: Wed, 21 Feb 2001 10:06:34 -0000 Subject: [Python-Dev] RE: Update to PEP 232 In-Reply-To: <14994.53768.767065.272158@anthem.wooz.org> Message-ID: <000901c09bed$f861d750$f05aa8c0@lslp7o.int.lsl.co.uk> Small pedantry (there's another sort?) I note that: > - __doc__ is the only function attribute that currently has > syntactic support for conveniently setting. It may be > worthwhile to eventually enhance the language for supporting > easy function attribute setting. Here are some syntaxes > suggested by PEP reviewers: [...elided to save space!...] > It isn't currently clear if special syntax is necessary or > desirable. has not been changed since the last version of the PEP. I suggest that it be updated in two ways: 1. Clarify the final statement - I seem to have the impression (sorry, can't find a message to back it up) that either the BDFL or Tim Peters is very against anything other than the "simple" #f.a = 1# sort of thing - unless I'm mischannelling (?) again. 2. Reference the thread/idea a little while back that ended with #def f(a,b) having (publish=1)# - it's certainly no *worse* than the proposals in the PEP! (Michael Hudson got as far as a patch, I think). Tibs -- Tony J Ibbs (Tibs) http://www.tibsnjoan.co.uk/ then-again-i-confuse-easily -ly y'rs - tim That's true -- I usually feel confused after reading one of your posts. - Aahz My views! Mine! Mine! (Unless Laser-Scan ask nicely to borrow them.) From pedroni at inf.ethz.ch Wed Feb 21 14:04:26 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Wed, 21 Feb 2001 14:04:26 +0100 (MET) Subject: [Python-Dev] Those import related syntax errors again... Message-ID: <200102211304.OAA29179@core.inf.ethz.ch> Hi. [As I imagined explicit syntax did not catch up and would require lot of discussions.] [GvR] > > Another way is to use special rules > > (similar to those for class defs), e.g. having > > > > > > y=3 > > def f(): > > exec "y=2" > > def g(): > > return y > > return g() > > > > print f() > > > > > > # print 3. > > > > Is that confusing for users? maybe they will more naturally expect 2 > > as outcome (given nested scopes). > > This seems the best compromise to me. It will lead to the least > broken code, because this is the behavior that we had before nested > scopes! 
It is also quite easy to implement given the current > implementation, I believe. > > Maybe we could introduce a warning rather than an error for this > situation though, because even if this behavior is clearly documented, > it will still be confusing to some, so it is better if we outlaw it in > some future version. > Yes this can be easy to implement but more confusing situations can arise: y=3 def f(): y=9 exec "y=2" def g(): return y return y,g() print f() What should this print? the situation leads not to a canonical solution as class def scopes. or def f(): from foo import * def g(): return y return g() print f() [Mark Hammond] > > This probably won't be a very popular suggestion, but how about pulling > > nested scopes (I assume they are at the root of the problem) > > until this can be solved cleanly? > > Agreed. While I think nested scopes are kinda cool, I have lived without > them, and really without missing them, for years. At the moment the cure > appears worse then the symptoms in at least a few cases. If nothing else, > it compromises the elegant simplicity of Python that drew me here in the > first place! > > Assuming that people really _do_ want this feature, IMO the bar should be > raised so there are _zero_ backward compatibility issues. I don't say anything about pulling nested scopes (I don't think my opinion can change things in this respect) but I should insist that without explicit syntax IMO raising the bar has a too high impl cost (both performance and complexity) or creates confusion. [Andrew Kuchling] > >Assuming that people really _do_ want this feature, IMO the bar should be > >raised so there are _zero_ backward compatibility issues. > > Even at the cost of additional implementation complexity? At the cost > of having to learn "scopes are nested, unless you do these two things > in which case they're not"? > > Let's not waffle. If nested scopes are worth doing, they're worth > breaking code. Either leave exec and from..import illegal, or back > out nested scopes, or think of some better solution, but let's not > introduce complicated backward compatibility hacks. IMO breaking code would be ok if we issue warnings today and implement nested scopes issuing errors tomorrow. But this is simply a statement about principles and raised impression. IMO import * in an inner scope should end up being an error, not sure about 'exec's. We should hear Jeremy H. position and we will need a final BDFL statement. regards, Samuele Pedroni. From skip at mojam.com Wed Feb 21 14:46:27 2001 From: skip at mojam.com (Skip Montanaro) Date: Wed, 21 Feb 2001 07:46:27 -0600 (CST) Subject: [Python-Dev] I think it's time to give import * the heave ho Message-ID: <14995.50867.445071.218779@beluga.mojam.com> Jeremy> Posters of c.l.py have suggested both alternatives as the Jeremy> logical choice: (1) import * is dynamic so the static scoping Jeremy> rule ignores the names it introduces, Bad alternative. import * works just fine today and is very mature, well understood functionality. This would introduce a special case that is going to confuse people. Jeremy> (2) Python is a late binding language so the name binding Jeremy> introduced by import * is used. This has to be the only reasonable alternative. Nonetheless, as mature and well understood as import * is, the fact that it can import a variable number of unknown arguments into the current namespace creates problems. 
It interferes with attempts at optimization, it can introduce bugs by importing unwanted symbols, it forces programmers writing code that might be imported that way to work to keep their namespaces clean, and it encourages complications like __all__ to try and avoid namespace pollution. Now it interferes with nested scopes. There are probably more problems I haven't thought of and new ones will probably crop up in the future. The use of import * is generally discouraged in all but well-defined cases ("from Tkinter import *", "from types import *") where the code was specifically written to be imported that way. For notational brevity in interactive use you can use import as (e.g., "import Tkinter as tk"). For use in modules and scripts it's probably best to simply use import module or explicitly grab the names you need from the module you're importing ("from types import StringType, ListType"). Both would improve the readability of the importing code. The only place I can see its use being more than a notational convenience is in wrapper modules like os and re and even there, it can be avoided. I believe in the long haul the correct thing to do is to deprecate import *. Skip From skip at mojam.com Wed Feb 21 14:47:59 2001 From: skip at mojam.com (Skip Montanaro) Date: Wed, 21 Feb 2001 07:47:59 -0600 (CST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <019001c09bda$ffb6f4d0$e46940d5@hagrid> References: <019001c09bda$ffb6f4d0$e46940d5@hagrid> Message-ID: <14995.50959.711260.497189@beluga.mojam.com> Fredrik> maybe it should be ifdef'ed out, and not switched on by default Fredrik> until we reach 3.0? I think that's a very reasonable path to take. Skip From fredrik at pythonware.com Wed Feb 21 16:30:35 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Wed, 21 Feb 2001 16:30:35 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: <019001c09bda$ffb6f4d0$e46940d5@hagrid> <14995.55347.92892.762336@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <02a701c09c1b$40441e70$0900a8c0@SPIFF> > FL> doesn't sound like it really belongs in a X.1 release... > > So if we called the next release Python 3.0, it would be okay? yes. (but in case you do, I'm pretty sure someone else will release a 2.1 consisting of 2.0 plus all 2.0-compatible parts from 3.0) Cheers /F From fredrik at pythonware.com Wed Feb 21 16:42:35 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Wed, 21 Feb 2001 16:42:35 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: <200102211510.QAA07814@core.inf.ethz.ch> Message-ID: <02bc01c09c1c$e9eb1950$0900a8c0@SPIFF> Samuele wrote: > On the other hand just saying that new feature X make code Y (previously valid) > meaningless and so the unique solution is to discard Y as garbage, > is something that cannot be sold for cheap. I have the feeling that this > is the *point*. exactly. I don't mind new features if I can chose to ignore them... Cheers /F From akuchlin at mems-exchange.org Wed Feb 21 15:56:25 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Wed, 21 Feb 2001 09:56:25 -0500 Subject: [Python-Dev] Those import related syntax errors again... 
In-Reply-To: <200102211446.PAA07183@core.inf.ethz.ch>; from pedroni@inf.ethz.ch on Wed, Feb 21, 2001 at 03:46:40PM +0100 References: <200102211446.PAA07183@core.inf.ethz.ch> Message-ID: <20010221095625.A29605@ute.cnri.reston.va.us> On Wed, Feb 21, 2001 at 03:46:40PM +0100, Samuele Pedroni wrote: >IMO breaking code would be ok if we issue warnings today and implement >nested scopes issuing errors tomorrow. But this is simply a statement >about principles and raised impression. Agreed. So maybe that's the best solution: pull nested scopes from 2.1 and add a warning for from...import (and exec?) inside a function using nested scopes, and only add nested scopes in 2.2, after everyone has had 6 months or a year to fix their code. --amk

From jeremy at alum.mit.edu Wed Feb 21 17:22:35 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 11:22:35 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <02a701c09c1b$40441e70$0900a8c0@SPIFF> References: <019001c09bda$ffb6f4d0$e46940d5@hagrid> <14995.55347.92892.762336@w221.z064000254.bwi-md.dsl.cnc.net> <02a701c09c1b$40441e70$0900a8c0@SPIFF> Message-ID: <14995.60235.591304.213358@w221.z064000254.bwi-md.dsl.cnc.net> I did a brief review of three Python projects to see how they use import * and exec and to assess how much code will break in these projects.

    Project    Python files   Lines of       import *   exec      illegal
                              Python code    in func    in func   exec
    Python     1127           113443         4?         <57       0
    Zope2      469            71370          0          15        1
    PyXPCOM    26              2611          0          1         1
    (excluding comment lines)

The numbers are a little rough for Python, because I think I've fixed all the problems. As I recall, there were four instances of import * being used in a function. I think two of those would still be flagged as errors, while two would be allowed under the current rules (only barred when the current func contains another that has free variables). There is one illegal exec in Zope and one in PyXPCOM as Mark well knows. That makes a total of 4 fixes in almost 200,000 lines of code. These fixes should be pretty easy. The code won't compile until it's fixed. One could imagine many worse problems, like code that runs but has a different meaning. I should be able to fix the tracebacks so they indicate the source of the problem more clearly. I also realized that the exec rule is still too strong. If the exec statement passes an explicit namespace -- "exec ... in foo" -- then there shouldn't be any problem, because the executed code can't affect the current namespace. If this form is allowed, the exec errors in xpcom and Zope disappear. It would be instructive to hear if the data would look different if I chose different projects. Perhaps the particular examples I chose are simply examples of excellent coding style by master programmers. Jeremy

From pedroni at inf.ethz.ch Wed Feb 21 17:33:02 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Wed, 21 Feb 2001 17:33:02 +0100 (MET) Subject: [Python-Dev] Those import related syntax errors again... Message-ID: <200102211633.RAA10095@core.inf.ethz.ch> Hi. [Fredrik Lundh] > > Samuele wrote: > > On the other hand just saying that new feature X make code Y (previously valid) > > meaningless and so the unique solution is to discard Y as garbage, > > is something that cannot be sold for cheap. I have the feeling that this > > is the *point*. > > exactly. > > I don't mind new features if I can chose to ignore them...
Along this line of thought and summarizing: - import * (in an inner scope) is somehow a problem but on the long run it should be likely deprecated and become an error anyway. - mixing of inner defs or lambdas and exec is a real issue (Mark Hammond original posting was caused but such a situation): for that there is no clear workaround: I repeat y=3 def f(): exec "y=2" def g() return y return g() if we want 2 as return value it's a mess (the problem could end up being more perfomance than complexity, altough simple impl is a long-run win). Developing special rules is also not that simple: just put an y = 9 before the exec, what is expected then? This promises lot of confusion. - I'm not a partisan of this, but if we want to able to "choose to ignore" lexical scoping, we will need to make its activation explicit. but this has been discarded, so no story... Implicit scoping semantic has been changed and now we just have to convince ourself that this is a win, and there is no big code breakage (this is very likely, without irony) and that transforming working code (I'm referring to code using 'exec's not import *) in invalid code is just natural language evolution that users will understand . We can make the transition more smooth: [Andrew Kuchling] > >IMO breaking code would be ok if we issue warnings today and implement > >nested scopes issuing errors tomorrow. But this is simply a statement > >about principles and raised impression. > > Agreed. So maybe that's the best solution: pull nested scopes from > 2.1 and add a warning for from...import (and exec?) inside a function > using nested scopes, and only add nested scopes in 2.2, after everyone > has had 6 months or a year to fix their code. But the problem with exec will remain. PS: to be honest the actual impl of nested scope is fine for me from the viewpoint of the guy that should implement that for jython ;). From thomas.heller at ion-tof.com Wed Feb 21 17:39:09 2001 From: thomas.heller at ion-tof.com (Thomas Heller) Date: Wed, 21 Feb 2001 17:39:09 +0100 Subject: [Python-Dev] Strange import behaviour, recently introduced References: <20010221150634.AB6ED371690@snelboot.oratrix.nl> Message-ID: <036b01c09c24$d0aa20a0$e000a8c0@thomasnotebook> Jack Jansen wrote: > I'm running into strange problems with import in frozen Mac programs. > > On the Mac a program is frozen in a rather different way from how it happens > on Unix/Windows: basically all .pyc files are stuffed into resources, and if > the import code comes across a file on sys.path it will look for PYC resources > in that file. So, you freeze a program by stuffing all your modules into the > interpreter executable as PYC resources and setting sys.path to contain only > the executable file, basically. > > This week I noticed that these resource imports have suddenly become very very > slow. Whereas startup time of my application used to be around 2 seconds > (where the non-frozen version took 6 seconds) it now takes almost 20 times as > long. The non-frozen version still takes 6 seconds. > The most recent version calls PyImport_ImportModuleEx() for '__builtin__' for every import of __builtin__ without caching the result in a static variable. Can this be the cause? Thomas Heller From pedroni at inf.ethz.ch Wed Feb 21 17:40:24 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Wed, 21 Feb 2001 17:40:24 +0100 (MET) Subject: [Python-Dev] Those import related syntax errors again... Message-ID: <200102211640.RAA10296@core.inf.ethz.ch> Hi. So few code breakage is nice. 
[Jeremy Hilton] > I also realized that the exec rule is still too string. If the exec > statement passes an explicit namespace -- "exec in foo" -- then there > shouldn't be any problem, because the executed code can't affect the > current namespace. If this form is allowed, the exec errors in xpcom > and Zope disappear. My very personal feeling is that *any* rule on exec just sounds arbitrary (even if motived and acceptable). regards, Samuele Pedroni. From esr at thyrsus.com Wed Feb 21 17:42:18 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Wed, 21 Feb 2001 11:42:18 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <20010221095625.A29605@ute.cnri.reston.va.us>; from akuchlin@mems-exchange.org on Wed, Feb 21, 2001 at 09:56:25AM -0500 References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> Message-ID: <20010221114218.A24682@thyrsus.com> Andrew Kuchling : > On Wed, Feb 21, 2001 at 03:46:40PM +0100, Samuele Pedroni wrote: > >IMO breaking code would be ok if we issue warnings today and implement > >nested scopes issuing errors tomorrow. But this is simply a statement > >about principles and raised impression. > > Agreed. So maybe that's the best solution: pull nested scopes from > 2.1 and add a warning for from...import (and exec?) inside a function > using nested scopes, and only add nested scopes in 2.2, after everyone > has had 6 months or a year to fix their code. Aaargghh! I'm already using them. If we disable this facility temporarily, please do it with an ifdef I can set. -- Eric S. Raymond The prestige of government has undoubtedly been lowered considerably by the Prohibition law. For nothing is more destructive of respect for the government and the law of the land than passing laws which cannot be enforced. It is an open secret that the dangerous increase of crime in this country is closely connected with this. -- Albert Einstein, "My First Impression of the U.S.A.", 1921 From jeremy at alum.mit.edu Wed Feb 21 17:45:30 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 11:45:30 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <200102211640.RAA10296@core.inf.ethz.ch> References: <200102211640.RAA10296@core.inf.ethz.ch> Message-ID: <14995.61610.382858.122618@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "SP" == Samuele Pedroni writes: SP> My very personal feeling is that *any* rule on exec just sounds SP> arbitrary (even if motived and acceptable). My personal feeling is that exec is used rarely enough that a few restrictions on its use is not a problem. The restriction can be fairly minimal -- "exec" without "in" is not allowed in a function that contains nested blocks with free variables. Heck, we would just outlaw all uses of exec without in <0.5 wink>. I would argue for this rule in Python 3000, but it would break a lot more code than the restriction proposed above. Jeremy From pedroni at inf.ethz.ch Wed Feb 21 17:51:30 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Wed, 21 Feb 2001 17:51:30 +0100 (MET) Subject: [Python-Dev] Those import related syntax errors again... Message-ID: <200102211651.RAA10549@core.inf.ethz.ch> I should reformulate: I think a possible not arbitrary rule for exec is only exec ... in ... is valid, but this also something ok only on the long-run (like import * deprecation). Then it is necessary to agree on the semantic of locals(). What would happen right now mixing lexical scoping and exec ... in locals()? 
regards, Samuele Pedroni. From fredrik at pythonware.com Wed Feb 21 18:04:59 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Wed, 21 Feb 2001 18:04:59 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> Message-ID: <00ca01c09c28$70ea44c0$e46940d5@hagrid> Andrew Kuchling wrote: > >IMO breaking code would be ok if we issue warnings today and implement > >nested scopes issuing errors tomorrow. But this is simply a statement > >about principles and raised impression. > > Agreed. So maybe that's the best solution: pull nested scopes from > 2.1 and add a warning for from...import (and exec?) inside a function > using nested scopes, and only add nested scopes in 2.2, after everyone > has had 6 months or a year to fix their code. don't we have a standard procedure for this? http://python.sourceforge.net/peps/pep-0005.html Steps For Introducing Backwards-Incompatible Features 1. Propose backwards-incompatible behavior in a PEP. 2. Once the PEP is accepted as a productive direction, implement an alternate way to accomplish the task previously provided by the feature that is being removed or changed. 3. Formally deprecate the obsolete construct in the Python documentation. 4. Add an an optional warning mode to the parser that will inform users when the deprecated construct is used. 5. There must be at least a one-year transition period between the release of the transitional version of Python and the release of the backwards incompatible version. looks like we're somewhere around stage 3, which means that we're 12+ months away from deployment. Cheers /F From jeremy at alum.mit.edu Wed Feb 21 17:58:02 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 11:58:02 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <200102211651.RAA10549@core.inf.ethz.ch> References: <200102211651.RAA10549@core.inf.ethz.ch> Message-ID: <14995.62362.374756.796362@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "SP" == Samuele Pedroni writes: SP> I should reformulate: I think a possible not arbitrary rule for SP> exec is only exec ... in ... is valid, but this also something SP> ok only on the long-run (like import * deprecation). Yes. SP> Then it is necessary to agree on the semantic of locals(). That's easy. Make the warning in the current documentation a feature: locals() returns a dictionary representing the local symbol table. The effects of modifications to this dictionary is undefined. SP> What would happen right now mixing lexical scoping and exec SP> ... in locals()? Right now, the exec would get flagged as an error. If it were allowed to execute, the exec would operator on the frame's f_locals dict. The locals() builtin calls the following function. PyObject * PyEval_GetLocals(void) { PyFrameObject *current_frame = PyThreadState_Get()->frame; if (current_frame == NULL) return NULL; PyFrame_FastToLocals(current_frame); return current_frame->f_locals; } This copies all variables from the fast slots into the f_locals dictionary. When the exec statement is executed, it does the reverse copying from the locals dict back into the fast slots. The FastToLocals and LocalsToFast functions don't know anything about the closure, so those variables simply wouldn't affected. Assignments in the exec would be ignored by nested scopes. 
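The user-visible consequence of that machinery shows up with the explicitly-namespaced form -- the one the thread converges on keeping legal -- using a plain dict rather than locals(), since writes to the locals() snapshot are undefined as noted above. A small sketch in 2.1-era syntax:

    from __future__ import nested_scopes

    ns = {}

    def f():
        y = 1
        exec "y = 2" in ns    # the assignment lands in ns, not in f's locals
        def g():
            return y          # closes over f's y, which the exec never touched
        return y, g()

    print f(), ns['y']        # -> (1, 1) 2

Because the executed code cannot rebind names in f, there is nothing for the closure in g to disagree about.
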
Jeremy From jeremy at alum.mit.edu Wed Feb 21 18:02:34 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 12:02:34 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <00ca01c09c28$70ea44c0$e46940d5@hagrid> References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> Message-ID: <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> I don't recall seeing any substanital discussion of this PEP on python-dev or python-list, nor do I recall a BDFL decision on the PEP. There has been lots of discussion about backwards compatibility, but not much consensus. Jeremy From moshez at zadka.site.co.il Wed Feb 21 18:06:17 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Wed, 21 Feb 2001 19:06:17 +0200 (IST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <20010221114218.A24682@thyrsus.com> References: <20010221114218.A24682@thyrsus.com>, <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> Message-ID: <20010221170617.DAE72A840@darjeeling.zadka.site.co.il> On Wed, 21 Feb 2001 11:42:18 -0500, "Eric S. Raymond" wrote: [re: disabling nested scopes] > Aaargghh! I'm already using them. That's not a valid excuse. The official position of Python-Dev regarding alphas is "a feature is not in until it's a release candidate -- we reserve the right to pull features before" Whatever we do, ifdefing is not the answer -- two incompat. versions of Python with the same number? Are we insane? -- "I'll be ex-DPL soon anyway so I'm |LUKE: Is Perl better than Python? looking for someplace else to grab power."|YODA: No...no... no. Quicker, -- Wichert Akkerman (on debian-private)| easier, more seductive. For public key, finger moshez at debian.org |http://www.{python,debian,gnu}.org From fredrik at effbot.org Wed Feb 21 19:01:05 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Wed, 21 Feb 2001 19:01:05 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: <200102211446.PAA07183@core.inf.ethz.ch><20010221095625.A29605@ute.cnri.reston.va.us><00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <002301c09c30$46a89330$e46940d5@hagrid> Jeremy Hylton wrote: > I don't recall seeing any substanital discussion of this PEP on > python-dev or python-list, nor do I recall a BDFL decision on the > PEP. There has been lots of discussion about backwards compatibility, > but not much consensus. Really? If that's the case, maybe someone should move it to the "future" or "pie-in-the-sky" section, and mark it as "draft" instead of "active"? ::: ...and if stepwise deprecation isn't that important, why did a certain BDFL bother to implement a warning frame- work for 2.1? http://python.sourceforge.net/peps/pep-0230.html Looks like the perfect tool for this task. Why not use it? ::: Is it time to shut down python-dev? (yes, I'm serious) Annoyed /F From thomas at xs4all.net Wed Feb 21 19:13:17 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Wed, 21 Feb 2001 19:13:17 +0100 Subject: [Python-Dev] Those import related syntax errors again... 
In-Reply-To: <002301c09c30$46a89330$e46940d5@hagrid>; from fredrik@effbot.org on Wed, Feb 21, 2001 at 07:01:05PM +0100 References: <200102211446.PAA07183@core.inf.ethz.ch><20010221095625.A29605@ute.cnri.reston.va.us><00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <002301c09c30$46a89330$e46940d5@hagrid> Message-ID: <20010221191317.A26647@xs4all.nl> On Wed, Feb 21, 2001 at 07:01:05PM +0100, Fredrik Lundh wrote: > Is it time to shut down python-dev? (yes, I'm serious) Just in case it might not be obvious, I concur with Fredrik, and I usually try to have a bit less of a temper than him. I have to warn, though, I just came from a meeting with Ministry of Justice lawyers, so I'm not in that good a mood, though my mood does force me to drop my politeness and just say what I really mean: I keep running into the ugly sides of the principle of nested scopes in python, and the implementation in particular. Most of them could be fixed, but not *all* of them, and the impact of those that can't be fixed is entirely unclear. Will it break a lot of code ? Possibly. Will it annoy a lot of people ? Quite certainly, it already did. Will it force people to turn away in disgust ? Definately possibly, since it's nearly doing that for *me*. I'm not sure if I'd want to admit to people that I'm a Python developper if that means they'll ask me why in hell 2.1 was released with that deficiency. I have been able to argue my way out of the gripes I currently get, but I'm not sure if I can do that for 2.1. I think adding nested scopes like this is a very bad idea. Patching up the problems by adding more special cases in which the old syntax would work is not the right solution, even though I did initially think so. And I'd like to note that none of these issues were addressed in the PEP. The PEP doesn't even mention them, though 'from Tkinter import *' is used as an example code snippet. And it seems most people are either indifferent or against the whole thing. I personally think the old 'hack' is *way* clearer, and more obvious, than the nested scopes patch. But maybe my perception is flawed. Maybe all the pro-nested-scopes, pro-breakage people are keeping quiet, in which case I'll quietly sulk away in a corner ;P Mr.-Conservatively-Grumpy-ly y'rs, -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From esr at thyrsus.com Wed Feb 21 19:23:41 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Wed, 21 Feb 2001 13:23:41 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <20010221191317.A26647@xs4all.nl>; from thomas@xs4all.net on Wed, Feb 21, 2001 at 07:13:17PM +0100 References: <200102211446.PAA07183@core.inf.ethz.ch><20010221095625.A29605@ute.cnri.reston.va.us><00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <002301c09c30$46a89330$e46940d5@hagrid> <20010221191317.A26647@xs4all.nl> Message-ID: <20010221132341.B25139@thyrsus.com> Thomas Wouters : > But maybe my perception is flawed. Maybe all the pro-nested-scopes, > pro-breakage people are keeping quiet, in which case I'll quietly sulk away > in a corner ;P I am for nested scopes. I would like to see the problems fixed and this feature not abandoned. -- Eric S. Raymond Yes, the president should resign. He has lied to the American people, time and time again, and betrayed their trust. Since he has admitted guilt, there is no reason to put the American people through an impeachment. 
He will serve absolutely no purpose in finishing out his term, the only possible solution is for the president to save some dignity and resign. -- 12th Congressional District hopeful Bill Clinton, during Watergate From pedroni at inf.ethz.ch Wed Feb 21 19:54:06 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Wed, 21 Feb 2001 19:54:06 +0100 (MET) Subject: [Python-Dev] Those import related syntax errors again... Message-ID: <200102211854.TAA12664@core.inf.ethz.ch> I will try to be intellectually honest: [Thomas Wouters] > And I'd like to note that none of these issues were addressed in the PEP. This also a *point*. Few days ago I have scanned the pre-checkin archive on this topic, the fix-point was, under BDFL influence: - It will not do that much harm (but many issues were not raised) - Please no explicit syntax - Let's do it - Future newbies will be thankful because this was always a confusing point for them (if they come from pascal-like languages?) I should admit that I like the idea of nested scopes, because I like functional programming style, but I don't know whether this returning 3 is nice ;)? def f(): def g(): return y # put as many innoncent code lines as you like y=3 return g() The point is that nested scopes cause some harm, not that much but people are asking themself whether is that necessary. Maybe the request that old code should compile as it is, is a bit pedantic, and making it always work but with a new semantic is worse. But simply catching up as problem arise does not give a good impression. It really seems that there's not been enough discussion about the change, and I think that is also ok to honestely be worried about what user will feel about this? (and we can only think about this beacuse the feedback is not that much) Will this code breakage "scare" them and slow down migration to new versions of python? They are already afraid of going 2.0(?). It is maybe just PR matter but ... The *point* is that we are not going from version 0.8 to version 0.9 of our toy research lisp dialect, passing from dynamic scoping to lexical scoping. (Yes, I think, that changing semantic behind the scene is not a polite move.) We really need the BDFL proposing the right thing. regards, Samuele Pedroni. From pedroni at inf.ethz.ch Wed Feb 21 20:02:58 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Wed, 21 Feb 2001 20:02:58 +0100 (MET) Subject: [Python-Dev] Those import related syntax errors again... Message-ID: <200102211902.UAA12859@core.inf.ethz.ch> Sorry I forgot that a win is avoiding th old lambda default hack. Now things magically work ;). From jeremy at alum.mit.edu Wed Feb 21 20:09:43 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 14:09:43 -0500 (EST) Subject: [Python-Dev] Update to PEP 227 (static scoping) Message-ID: <14996.4727.604581.858363@w221.z064000254.bwi-md.dsl.cnc.net> There has been renewed discussion of backwards compatibility issues introduced by nested scopes. Following some discussion on python-dev, I have updated the discussion of these issues in the PEP. Of course, more comments are welcome. I am particularly interested in reports of actual compatibility issues with existing code, as opposed to hypotheticals. The particular concerns raised lately have to do with previously legal code that will fail with a SyntaxError with nested scopes. Early in the design process, there was discussion of code that will behave differently with nested scopes. 
At the time, the subtle behavior change was considered acceptable because it was believed to occur rarely in practice and was probably hard to understand to begin with. A related issue, already discussed on both lists, was the restrictions added in Python 2.1a2 on the use of import * in functions and exec with nested scope. The former restriction was always documented in the reference manual, but never enforced. Subsequently, we decided to allow import * and exec except in cases where the meaning was ambiguous with respect to nested scopes. This probably sounds a bit abstract; I hope the PEP (included below) spells out the issues more clearly. If you have code that currently depends on any of the three following behaviors, I'd like to hear about it: - A function is contained within another function. The outer function contains a local name that shadows a global name. The inner function uses the global. The one case of this I have seen in the wild was caused by a local variable named str in the outer function and a use of builtin str in the inner function. - A function that contains a nested function with free variables and also uses exec that does not specify a namespace, e.g. def f(): exec foo def g(): ... "exec foo in ns" should be legal, although the current CVS code base does not yet allow it. - A function like the one above, except that is uses import * instead of exec. Jeremy PEP: 227 Title: Statically Nested Scopes Version: $Revision: 1.6 $ Author: jeremy at digicool.com (Jeremy Hylton) Status: Draft Type: Standards Track Python-Version: 2.1 Created: 01-Nov-2000 Post-History: XXX what goes here? Abstract This PEP proposes the addition of statically nested scoping (lexical scoping) for Python 2.1. The current language definition defines exactly three namespaces that are used to resolve names -- the local, global, and built-in namespaces. The addition of nested scopes would allow resolution of unbound local names in enclosing functions' namespaces. One consequence of this change that will be most visible to Python programs is that lambda statements could reference variables in the namespaces where the lambda is defined. Currently, a lambda statement uses default arguments to explicitly creating bindings in the lambda's namespace. Introduction This proposal changes the rules for resolving free variables in Python functions. The Python 2.0 definition specifies exactly three namespaces to check for each name -- the local namespace, the global namespace, and the builtin namespace. According to this defintion, if a function A is defined within a function B, the names bound in B are not visible in A. The proposal changes the rules so that names bound in B are visible in A (unless A contains a name binding that hides the binding in B). The specification introduces rules for lexical scoping that are common in Algol-like languages. The combination of lexical scoping and existing support for first-class functions is reminiscent of Scheme. The changed scoping rules address two problems -- the limited utility of lambda statements and the frequent confusion of new users familiar with other languages that support lexical scoping, e.g. the inability to define recursive functions except at the module level. The lambda statement introduces an unnamed function that contains a single statement. It is often used for callback functions. In the example below (written using the Python 2.0 rules), any name used in the body of the lambda must be explicitly passed as a default argument to the lambda. 
from Tkinter import * root = Tk() Button(root, text="Click here", command=lambda root=root: root.test.configure(text="...")) This approach is cumbersome, particularly when there are several names used in the body of the lambda. The long list of default arguments obscure the purpose of the code. The proposed solution, in crude terms, implements the default argument approach automatically. The "root=root" argument can be omitted. Specification Python is a statically scoped language with block structure, in the traditional of Algol. A code block or region, such as a module, class defintion, or function body, is the basic unit of a program. Names refer to objects. Names are introduced by name binding operations. Each occurrence of a name in the program text refers to the binding of that name established in the innermost function block containing the use. The name binding operations are assignment, class and function definition, and import statements. Each assignment or import statement occurs within a block defined by a class or function definition or at the module level (the top-level code block). If a name binding operation occurs anywhere within a code block, all uses of the name within the block are treated as references to the current block. (Note: This can lead to errors when a name is used within a block before it is bound.) If the global statement occurs within a block, all uses of the name specified in the statement refer to the binding of that name in the top-level namespace. Names are resolved in the top-level namespace by searching the global namespace, the namespace of the module containing the code block, and the builtin namespace, the namespace of the module __builtin__. The global namespace is searched first. If the name is not found there, the builtin namespace is searched. If a name is used within a code block, but it is not bound there and is not declared global, the use is treated as a reference to the nearest enclosing function region. (Note: If a region is contained within a class definition, the name bindings that occur in the class block are not visible to enclosed functions.) A class definition is an executable statement that may uses and definitions of names. These references follow the normal rules for name resolution. The namespace of the class definition becomes the attribute dictionary of the class. The following operations are name binding operations. If they occur within a block, they introduce new local names in the current block unless there is also a global declaration. Function defintion: def name ... Class definition: class name ... Assignment statement: name = ... Import statement: import name, import module as name, from module import name Implicit assignment: names are bound by for statements and except clauses The arguments of a function are also local. There are several cases where Python statements are illegal when used in conjunction with nested scopes that contain free variables. If a variable is referenced in an enclosing scope, it is an error to delete the name. The compiler will raise a SyntaxError for 'del name'. If the wildcard form of import (import *) is used in a function and the function contains a nested block with free variables, the compiler will raise a SyntaxError. If exec is used in a function and the function contains a nested block with free variables, the compiler will raise a SyntaxError unless the exec explicit specifies the local namespace for the exec. 
(In other words, "exec obj" would be illegal, but "exec obj in ns" would be legal.) Discussion The specified rules allow names defined in a function to be referenced in any nested function defined with that function. The name resolution rules are typical for statically scoped languages, with three primary exceptions: - Names in class scope are not accessible. - The global statement short-circuits the normal rules. - Variables are not declared. Names in class scope are not accessible. Names are resolved in the innermost enclosing function scope. If a class defintion occurs in a chain of nested scopes, the resolution process skips class definitions. This rule prevents odd interactions between class attributes and local variable access. If a name binding operation occurs in a class defintion, it creates an attribute on the resulting class object. To access this variable in a method, or in a function nested within a method, an attribute reference must be used, either via self or via the class name. An alternative would have been to allow name binding in class scope to behave exactly like name binding in function scope. This rule would allow class attributes to be referenced either via attribute reference or simple name. This option was ruled out because it would have been inconsistent with all other forms of class and instance attribute access, which always use attribute references. Code that used simple names would have been obscure. The global statement short-circuits the normal rules. Under the proposal, the global statement has exactly the same effect that it does for Python 2.0. It's behavior is preserved for backwards compatibility. It is also noteworthy because it allows name binding operations performed in one block to change bindings in another block (the module). Variables are not declared. If a name binding operation occurs anywhere in a function, then that name is treated as local to the function and all references refer to the local binding. If a reference occurs before the name is bound, a NameError is raised. The only kind of declaration is the global statement, which allows programs to be written using mutable global variables. As a consequence, it is not possible to rebind a name defined in an enclosing scope. An assignment operation can only bind a name in the current scope or in the global scope. The lack of declarations and the inability to rebind names in enclosing scopes are unusual for lexically scoped languages; there is typically a mechanism to create name bindings (e.g. lambda and let in Scheme) and a mechanism to change the bindings (set! in Scheme). XXX Alex Martelli suggests comparison with Java, which does not allow name bindings to hide earlier bindings. Examples A few examples are included to illustrate the way the rules work. XXX Explain the examples >>> def make_adder(base): ... def adder(x): ... return base + x ... return adder >>> add5 = make_adder(5) >>> add5(6) 11 >>> def make_fact(): ... def fact(n): ... if n == 1: ... return 1L ... else: ... return n * fact(n - 1) ... return fact >>> fact = make_fact() >>> fact(7) 5040L >>> def make_wrapper(obj): ... class Wrapper: ... def __getattr__(self, attr): ... if attr[0] != '_': ... return getattr(obj, attr) ... else: ... raise AttributeError, attr ... return Wrapper() >>> class Test: ... public = 2 ... _private = 3 >>> w = make_wrapper(Test()) >>> w.public 2 >>> w._private Traceback (most recent call last): File " ", line 1, in ? 
AttributeError: _private An example from Tim Peters of the potential pitfalls of nested scopes in the absence of declarations: i = 6 def f(x): def g(): print i # ... # skip to the next page # ... for i in x: # ah, i *is* local to f, so this is what g sees pass g() The call to g() will refer to the variable i bound in f() by the for loop. If g() is called before the loop is executed, a NameError will be raised. XXX need some counterexamples Backwards compatibility There are two kinds of compatibility problems caused by nested scopes. In one case, code that behaved one way in earlier versions, behaves differently because of nested scopes. In the other cases, certain constructs interact badly with nested scopes and will trigger SyntaxErrors at compile time. The following example from Skip Montanaro illustrates the first kind of problem: x = 1 def f1(): x = 2 def inner(): print x inner() Under the Python 2.0 rules, the print statement inside inner() refers to the global variable x and will print 1 if f1() is called. Under the new rules, it refers to the f1()'s namespace, the nearest enclosing scope with a binding. The problem occurs only when a global variable and a local variable share the same name and a nested function uses that name to refer to the global variable. This is poor programming practice, because readers will easily confuse the two different variables. One example of this problem was found in the Python standard library during the implementation of nested scopes. To address this problem, which is unlikely to occur often, a static analysis tool that detects affected code will be written. The detection problem is straightfoward. The other compatibility problem is casued by the use of 'import *' and 'exec' in a function body, when that function contains a nested scope and the contained scope has free variables. For example: y = 1 def f(): exec "y = 'gotcha'" # or from module import * def g(): return y ... At compile-time, the compiler cannot tell whether an exec that operators on the local namespace or an import * will introduce name bindings that shadow the global y. Thus, it is not possible to tell whether the reference to y in g() should refer to the global or to a local name in f(). In discussion of the python-list, people argued for both possible interpretations. On the one hand, some thought that the reference in g() should be bound to a local y if one exists. One problem with this interpretation is that it is impossible for a human reader of the code to determine the binding of y by local inspection. It seems likely to introduce subtle bugs. The other interpretation is to treat exec and import * as dynamic features that do not effect static scoping. Under this interpretation, the exec and import * would introduce local names, but those names would never be visible to nested scopes. In the specific example above, the code would behave exactly as it did in earlier versions of Python. Since each interpretation is problemtatic and the exact meaning ambiguous, the compiler raises an exception. A brief review of three Python projects (the standard library, Zope, and a beta version of PyXPCOM) found four backwards compatibility issues in approximately 200,000 lines of code. There was one example of case #1 (subtle behavior change) and two examples of import * problems in the standard library. 
(The interpretation of the import * and exec restriction that was
implemented in Python 2.1a2 was much more restrictive, based on
language in the reference manual that had never been enforced.  These
restrictions were relaxed following the release.)

locals() / vars()

These functions return a dictionary containing the current scope's
local variables.  Modifications to the dictionary do not affect the
values of variables.  Under the current rules, the use of locals() and
globals() allows the program to gain access to all the namespaces in
which names are resolved.

An analogous function will not be provided for nested scopes.  Under
this proposal, it will not be possible to gain dictionary-style access
to all visible scopes.

Rebinding names in enclosing scopes

There are technical issues that make it difficult to support rebinding
of names in enclosing scopes, but the primary reason that it is not
allowed in the current proposal is that Guido is opposed to it.  It is
difficult to support, because it would require a new mechanism that
would allow the programmer to specify that an assignment in a block is
supposed to rebind the name in an enclosing block; presumably a keyword
or special syntax (x := 3) would make this possible.

The proposed rules allow programmers to achieve the effect of
rebinding, albeit awkwardly.  The name that will be effectively rebound
by enclosed functions is bound to a container object.  In place of
assignment, the program uses modification of the container to achieve
the desired effect:

    def bank_account(initial_balance):
        balance = [initial_balance]
        def deposit(amount):
            balance[0] = balance[0] + amount
            return balance
        def withdraw(amount):
            balance[0] = balance[0] - amount
            return balance
        return deposit, withdraw

Support for rebinding in nested scopes would make this code clearer.  A
class that defines deposit() and withdraw() methods and the balance as
an instance variable would be clearer still.  Since classes seem to
achieve the same effect in a more straightforward manner, they are
preferred.

Implementation

The implementation for C Python uses flat closures [1].  Each def or
lambda statement that is executed will create a closure if the body of
the function or any contained function has free variables.  Using flat
closures, the creation of closures is somewhat expensive but lookup is
cheap.

The implementation adds several new opcodes and two new kinds of names
in code objects.  A variable can be either a cell variable or a free
variable for a particular code object.  A cell variable is referenced
by containing scopes; as a result, the function where it is defined
must allocate separate storage for it on each invocation.  A free
variable is referenced via a function's closure.

XXX Much more to say here

References

[1] Luca Cardelli.  Compiling a functional language.  In Proc. of the
    1984 ACM Conference on Lisp and Functional Programming, pp. 208-217,
    Aug. 1984.  http://citeseer.nj.nec.com/cardelli84compiling.html

From akuchlin at mems-exchange.org  Wed Feb 21 20:33:23 2001
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Wed, 21 Feb 2001 14:33:23 -0500
Subject: [Python-Dev] Those import related syntax errors again...
In-Reply-To: <20010221191317.A26647@xs4all.nl>; from thomas@xs4all.net on Wed, Feb 21, 2001 at 07:13:17PM +0100 References: <200102211446.PAA07183@core.inf.ethz.ch><20010221095625.A29605@ute.cnri.reston.va.us><00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <002301c09c30$46a89330$e46940d5@hagrid> <20010221191317.A26647@xs4all.nl> Message-ID: <20010221143323.B1441@ute.cnri.reston.va.us> On Wed, Feb 21, 2001 at 07:13:17PM +0100, Thomas Wouters wrote: >But maybe my perception is flawed. Maybe all the pro-nested-scopes, >pro-breakage people are keeping quiet, in which case I'll quietly sulk away >in a corner ;P The scoping rules are, IMHO, the most serious problem listed on the Python Warts page, and adding nested scopes fixes them. So it's nice that this flaw could be cleaned up, though people will naturally differ in their perceptions of how serious the problem is, and how much pain it's worth to fix it. >On Wed, Feb 21, 2001 at 07:01:05PM +0100, Fredrik Lundh wrote: >> Is it time to shut down python-dev? (yes, I'm serious) I've previously stated my intention to unsubscribe from python-dev after 2.1 ships, mostly because hacking on the Python core has ceased to be fun any more, and because my non-core projects have suffered. Once that happens, the incentive to try out new Python versions will really ebb; if I wasn't on python-dev, I don't think upgrading to 2.1 would be a big priority because none of its new features solve any burning problems for me. It's hard to say what compelling new features would make me enthuastically adopt 2.2 as soon as it comes out, and I can't really think of any -- perhaps interfaces would be such a feature. You can take that as lukewarm agreement with Fredrik's rhetorical suggestion. --amk From jeremy at alum.mit.edu Wed Feb 21 20:35:02 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 14:35:02 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <20010221143323.B1441@ute.cnri.reston.va.us> References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <002301c09c30$46a89330$e46940d5@hagrid> <20010221191317.A26647@xs4all.nl> <20010221143323.B1441@ute.cnri.reston.va.us> Message-ID: <14996.6246.44518.351404@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "AMK" == Andrew Kuchling writes: >> On Wed, Feb 21, 2001 at 07:01:05PM +0100, Fredrik Lundh wrote: >>> Is it time to shut down python-dev? (yes, I'm serious) AMK> I've previously stated my intention to unsubscribe from AMK> python-dev after 2.1 ships, mostly because hacking on the AMK> Python core has ceased to be fun any more, and because my AMK> non-core projects have suffered. We're coming up on the second anniversary of python-dev. It began in April 1999 if the archives are correct. The biggest change to Python development since then has been the move to SourceForge, which happened nine months ago. (Curiously enough, the first python-dev message is on April 21, the SF announcement was on May 21, and today is Feb. 21.) Do you think Python development has changed in ways that make it no longer fun? Or do you think that you've changed in ways that make you no longer enjoy Python development? I'm sure that it's not as simple as one or the other, but I wonder if you think changes in the way we all interact is an important contributing factor. 
Jeremy

From akuchlin at mems-exchange.org  Wed Feb 21 20:50:16 2001
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Wed, 21 Feb 2001 14:50:16 -0500
Subject: [Python-Dev] Notice: Beta of wininst with uninstaller
Message-ID: 

Thomas Heller just sent a message to the Distutils SIG describing a
proposed uninstaller for the bdist_wininst command.  Windows-oriented
people who don't follow the SIG may want to take a look at his proposal
and offer comments.  His message is archived at:

http://mail.python.org/pipermail/distutils-sig/2001-February/001991.html

--amk

From akuchlin at mems-exchange.org  Wed Feb 21 21:02:33 2001
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Wed, 21 Feb 2001 15:02:33 -0500
Subject: [Python-Dev] Re: dl module
Message-ID: 

On 10 Feb, GvR quoted and wrote:
>> Skip Montanaro writes:
>> > MAL> The same could be done for e.g. soundex ...
>>
>> Fred Drake wrote:
>> Given that Skip has published this module and that the C version can
>> always be retrieved from CVS if anyone really wants it, and that
>> soundex has been listed in the "Obsolete Modules" section in the
>> documentation for quite some time, this is probably a good time to
>> remove it from the source distribution.
>
> Yes, go ahead.

Guido, did you mean go ahead and remove soundex, or the dl module, or
both?

--amk

From akuchlin at mems-exchange.org  Wed Feb 21 21:05:17 2001
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Wed, 21 Feb 2001 15:05:17 -0500
Subject: [Python-Dev] python-dev social climate
In-Reply-To: <14996.6246.44518.351404@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Wed, Feb 21, 2001 at 02:35:02PM -0500
References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <002301c09c30$46a89330$e46940d5@hagrid> <20010221191317.A26647@xs4all.nl> <20010221143323.B1441@ute.cnri.reston.va.us> <14996.6246.44518.351404@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <20010221150517.D1441@ute.cnri.reston.va.us>

On Wed, Feb 21, 2001 at 02:35:02PM -0500, Jeremy Hylton wrote:
>Do you think Python development has changed in ways that make it no
>longer fun?  Or do you think that you've changed in ways that make you
>no longer enjoy Python development?  I'm sure that it's not as simple

Mostly me; I'm trying to decrease my CPU load and have dropped a number
of activities.  I've mostly lost my taste for language hackery, and
find that the discussions are getting more trivial and less
interesting.  Adding Unicode support, for example, was a lengthy and at
times bloody discussion, but it resulted in a significant new
capability.  Debate about whether 'A in dict' is the same as 'A in
dict.keys()' or 'A in dict.values()' is IMHO quite dull.  The unit
testing debate was the last one I cared about to any significant
degree.

--amk

From thomas.heller at ion-tof.com  Wed Feb 21 21:17:56 2001
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Wed, 21 Feb 2001 21:17:56 +0100
Subject: [Python-Dev] Those import related syntax errors again...
References: <200102211446.PAA07183@core.inf.ethz.ch><20010221095625.A29605@ute.cnri.reston.va.us><00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <002301c09c30$46a89330$e46940d5@hagrid> <20010221191317.A26647@xs4all.nl> <20010221143323.B1441@ute.cnri.reston.va.us> Message-ID: <00cf01c09c43$60e360f0$e000a8c0@thomasnotebook> Andrew Kuchling wrote: > The scoping rules are, IMHO, the most serious problem listed on the > Python Warts page, and adding nested scopes fixes them. There is some truth in this, although most books I know try hard to explain this. Once you've understood it, it becomes a second nature to use this knowledge for lambda. I would consider the type/class split, making something like ExtensionClass neccessary, much more annoying for the advanced programmer. IMHO more efforts should go into this issue _even before_ p3000. Regards, Thomas From skip at mojam.com Wed Feb 21 21:52:48 2001 From: skip at mojam.com (Skip Montanaro) Date: Wed, 21 Feb 2001 14:52:48 -0600 (CST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <14995.60235.591304.213358@w221.z064000254.bwi-md.dsl.cnc.net> References: <019001c09bda$ffb6f4d0$e46940d5@hagrid> <14995.55347.92892.762336@w221.z064000254.bwi-md.dsl.cnc.net> <02a701c09c1b$40441e70$0900a8c0@SPIFF> <14995.60235.591304.213358@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <14996.10912.667104.603750@beluga.mojam.com> Jeremy> That makes a total of 4 fixes in almost 200,000 lines of code. Jeremy> These fixes should be pretty easy. Jeremy, Pardon my bluntness, but I think you're missing the point. The fact that it would be easy to make these changes for version N+1 of package XYZ ignores the fact that users of XYZ version N may want to upgrade to Python 2.1 for whatever reason, but can't easily upgrade to XYZ version N+1. Maybe they need to pay an upgrade fee. Maybe they include XYZ in another product and can't afford to run too far ahead of their clients. Maybe XYZ is available to them only as bytecode. Maybe there's just too darn much code to pore through and retest. Maybe ... I've rarely found it difficult to fix compatibility problems in isolation. It's the surrounding context that gets you. Skip From fredrik at effbot.org Wed Feb 21 22:12:03 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Wed, 21 Feb 2001 22:12:03 +0100 Subject: [Python-Dev] compile leaks memory. lots of memory. Message-ID: <009301c09c4a$f26cbf60$e46940d5@hagrid> while 1: compile("print 'hello'\n", " ", "exec") current CVS leaks just over 1k per call to compile. 1.5.2 and 2.0 doesn't leak a byte. make the script a little more complex, and it leaks even more (4k for a small function, 650k for Tkinter.py, etc). Cheers /F From jeremy at alum.mit.edu Wed Feb 21 22:07:25 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 16:07:25 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <14996.10912.667104.603750@beluga.mojam.com> References: <019001c09bda$ffb6f4d0$e46940d5@hagrid> <14995.55347.92892.762336@w221.z064000254.bwi-md.dsl.cnc.net> <02a701c09c1b$40441e70$0900a8c0@SPIFF> <14995.60235.591304.213358@w221.z064000254.bwi-md.dsl.cnc.net> <14996.10912.667104.603750@beluga.mojam.com> Message-ID: <14996.11789.246222.237752@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "SM" == Skip Montanaro writes: Jeremy> That makes a total of 4 fixes in almost 200,000 lines of Jeremy> code. These fixes should be pretty easy. 
SM> Jeremy,
SM> Pardon my bluntness, but I think you're missing the point.

I don't mind if you're blunt :-).

SM> I've rarely found it difficult to fix compatibility problems in
SM> isolation.  It's the surrounding context that gets you.

I appreciate that there are compatibility problems, although I'm hard
pressed to quantify them to any extent.  My employer still uses Python
1.5.2 because of perceived compatibility problems, although I use Zope
with 2.1 on my machine.

Any change we make to Python that introduces incompatibilities is going
to make it hard for some people to upgrade.  When we began work on the
2.1 alpha cycle, I had the impression that we decided that some amount
of incompatibility is acceptable.  I think PEP 227 is the chief
incompatibility, but there are other changes.  For example, the
warnings framework now spits out messages to stderr; I imagine this
could be unacceptable in some situations.  The __all__ change might
cause problems for some code, as we saw with the pickle module.  The
format of exceptions has changed in some cases, which makes trouble for
users of doctest.

I'll grant you that there are differences in degree among these various
changes.  Nonetheless, any of them could be a potential roadblock for
upgrading.  There were a bunch more in 2.0.  (Sidenote: If you haven't
upgraded to 2.0 yet, then you can jump right to 2.1 when you finally
do.)

The recent flurry of discussion was generated by a single complaint
about the exec problem.  It appeared to me that this was the last straw
for many people, and you, among others, suggested today that we delay
nested scopes.  This surprised me, because the problem was much
shallower than some of the other compatibility issues that had been
discussed earlier, including the one attributed to you in the PEP.  If
I understand correctly, though, you are objecting to any changes that
introduce backwards incompatibility.  The fact that recent discussion
prompted you to advocate this is coincidental.

The question, then, is whether some amount of incompatible change is
acceptable in the 2.1 release.  I don't think the specific import
*/exec issues have anything to do with it, because if they didn't exist
there would still be compatibility issues.

Jeremy

From barry at digicool.com  Wed Feb 21 22:19:47 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Wed, 21 Feb 2001 16:19:47 -0500
Subject: [Python-Dev] compile leaks memory. lots of memory.
References: <009301c09c4a$f26cbf60$e46940d5@hagrid>
Message-ID: <14996.12531.749097.806945@anthem.wooz.org>

>>>>> "FL" == Fredrik Lundh writes:

FL> while 1: compile("print 'hello'\n", " ", "exec")

FL> current CVS leaks just over 1k per call to compile.

FL> 1.5.2 and 2.0 doesn't leak a byte.

FL> make the script a little more complex, and it leaks even
FL> more (4k for a small function, 650k for Tkinter.py, etc).

I have plans to spend a fair bit of time running memory/leak analysis
over Python after the conference.  I'm kind of waiting until we enter
beta, i.e. feature freeze.

-Barry

From jeremy at alum.mit.edu  Wed Feb 21 22:10:15 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed, 21 Feb 2001 16:10:15 -0500 (EST)
Subject: [Python-Dev] compile leaks memory. lots of memory.
In-Reply-To: <14996.12531.749097.806945@anthem.wooz.org> References: <009301c09c4a$f26cbf60$e46940d5@hagrid> <14996.12531.749097.806945@anthem.wooz.org> Message-ID: <14996.11959.173739.282750@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "BAW" == Barry A Warsaw writes: >>>>> "FL" == Fredrik Lundh writes: FL> while 1: compile("print 'hello'\n", " ", "exec") FL> current CVS leaks just over 1k per call to compile. FL> 1.5.2 and 2.0 doesn't leak a byte. FL> make the script a little more complex, and it leaks even more FL> (4k for a small function, 650k for Tkinter.py, etc). BAW> I have plans to spend a fair bit of time running memory/leak BAW> analysis over Python after the conference. I'm kind of waiting BAW> until we enter beta, i.e. feature freeze. It would be helpful to get some analysis on this known problem before the beta release. Jeremy From paulp at ActiveState.com Wed Feb 21 22:48:28 2001 From: paulp at ActiveState.com (Paul Prescod) Date: Wed, 21 Feb 2001 13:48:28 -0800 Subject: [Python-Dev] Backwards Incompatibility References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <3A9437AC.4B2C77E7@ActiveState.com> Jeremy Hylton wrote: > > I don't recall seeing any substanital discussion of this PEP on > python-dev or python-list, nor do I recall a BDFL decision on the > PEP. There has been lots of discussion about backwards compatibility, > but not much consensus. We can have the discussion now, then. In my opinion it is irresponsible to knowingly unleash backwards incompatibilities on the world with no warning. If people think Python is unstable it will negatively impact its growth much more than the delay of some esoteric features. Let me put the ball back in your court: Is the benefit provided by having nested scopes this year rather than next year worth the pain of howls of outrage in Python-land. If we give people a year to upgrade (with warning messages) they will (rightly) grumble but not scream. -- Vote for Your Favorite Python & Perl Programming Accomplishments in the first Active Awards! http://www.ActiveState.com/Awards From jeremy at alum.mit.edu Wed Feb 21 22:53:21 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 16:53:21 -0500 (EST) Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: <3A9437AC.4B2C77E7@ActiveState.com> References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <3A9437AC.4B2C77E7@ActiveState.com> Message-ID: <14996.14545.932710.305181@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "PP" == Paul Prescod writes: PP> Jeremy Hylton wrote: >> >> I don't recall seeing any substanital discussion of this PEP on >> python-dev or python-list, nor do I recall a BDFL decision on the >> PEP. There has been lots of discussion about backwards >> compatibility, but not much consensus. PP> We can have the discussion now, then. In my opinion it is PP> irresponsible to knowingly unleash backwards incompatibilities PP> on the world with no warning. If people think Python is unstable PP> it will negatively impact its growth much more than the delay of PP> some esoteric features. You have a colorful way of writing :-). When we unleashed Python 2.1a1, there was a fair amount of discussion about nested scopes on python-dev and on python-list. 
The fact that code would break has been documented in the PEP since December, before the BDFL pronounced on it. Why didn't you say it was irresponsible then? <0.5 wink> If you're just repeating your earlier arguments, I apologize for the rhetoric :-). PP> Let me put the ball back in your court: PP> Is the benefit provided by having nested scopes this year rather PP> than next year worth the pain of howls of outrage in PP> Python-land. If we give people a year to upgrade (with warning PP> messages) they will (rightly) grumble but not scream. I've heard plenty of hypothetical howls and one real one, from Mark. The alpha testing hasn't resulted in a lot of other complaints. I just asked on c.l.py for problem reports and /F followed up with a script to help find problems. Let's see what the result is. I ran Fredrik's script over 4700 source files on my machine and found exactly four errors. Two were from old copies of the Python CVS tree; they've been fixed in the current tree. One was from Zope and another was an *old* jpython test case. Jeremy From thomas at xs4all.net Wed Feb 21 23:29:38 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Wed, 21 Feb 2001 23:29:38 +0100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <14995.55080.928806.56317@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Wed, Feb 21, 2001 at 09:56:40AM -0500 References: <14995.8522.253084.230222@beluga.mojam.com> <20010220222936.A2477@newcnri.cnri.reston.va.us> <20010221074710.E13911@xs4all.nl> <14995.55080.928806.56317@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010221232938.O26620@xs4all.nl> On Wed, Feb 21, 2001 at 09:56:40AM -0500, Jeremy Hylton wrote: > A note of clarification seems important here: The restrictions are > not being introduced to simplify the implementation. They're being > introduced because there is no sensible meaning for code that uses > import * and nested scopes with free variables. There are two > possible meanings, each plausible and neither satisfying. I disagree. There are several ways to work around them, or the BDFL could just make a decision on what it should mean. The decision between using a local vrbl in an upper scope or a possible global is about as arbritrary as what 'if key in dict:' and 'for key in dict' should do. I personally think it should behave exactly like: def outer(x, y): a = ... from module import * def inner(x, y, z=a): ... used to behave (before it became illegal.) That also makes it easy to explain to people who already know the rule. A possibly more practical solution would be to explicitly require a keyword to declare vrbls that should be taken from an upper scope rather than the global scope. Or a new keyword to define a closure. (def closure NAME(): comes to mind.) Lots of alternatives available if the implementation of PEP227 can't be done without introducing backwards incompatibility and strange special cases. Because you have to admit (even though it's another hypothetical howl) that it is odd that a function would *stop functioning* when you change a lambda (or nested function) to use a closure, rather than the old hack: def inner(x): exec ... myprint = sys.stderr.write spam = lambda x, myprint=myprint: myprint(x*100) I don't *just* object to the backwards incompatibility, but also to the added complexity and the strange special cases, most of which were introduced (at my urging, I'll readily admit and for which I should and do appologize) to reduce the impact of the incompatibility. 
I do not believe the ability to leave out the default-argument-hack (if you don't use import-*/exec in the same function) is worth all that. -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From thomas at xs4all.net Wed Feb 21 23:33:34 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Wed, 21 Feb 2001 23:33:34 +0100 Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: <14996.14545.932710.305181@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Wed, Feb 21, 2001 at 04:53:21PM -0500 References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <3A9437AC.4B2C77E7@ActiveState.com> <14996.14545.932710.305181@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010221233334.B26647@xs4all.nl> On Wed, Feb 21, 2001 at 04:53:21PM -0500, Jeremy Hylton wrote: > When we unleashed Python 2.1a1, there was a fair amount of discussion > about nested scopes on python-dev and on python-list. Nested scopes weren't in 2.1a1, they were added between 2.1a1 and 2.1a2. > The fact that code would break has been documented in the PEP since > December, before the BDFL pronounced on it. The PEP only mentions one type of breakage, a local vrbl in an upper scope shadowing a global. It doesn't mention exec or from-module-import-*. I don't recall seeing a BDFL pronouncement on this issue, though I did whine about the whole thing from the start ;-P > I've heard plenty of hypothetical howls and one real one, from Mark. Don't forget that the std. library itself had to be fixed in several places, because it violated the reference manual. Doesn't that hint that there is much more code out there that uses it ? I found two instances myself in old first-attempt GUI scripts of mine, which I never finished and thus aren't worth much more than the hypothetical howls. This is like spanking the dog/kid for doing something bad he had no way of knowing was bad. You can't expect the dog or the kid to read up on federal law to make sure he isn't doing anything bad by accident. Besides from any real problems we'll see, the added wartiness (which is what the hypothetical howls are all about) does really matter. What are we trying to solve with nested scopes ? Anything other than the default-argument hack wart ? Aren't we adding more warts to fix that one wart ? -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From akuchlin at mems-exchange.org Wed Feb 21 23:41:41 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Wed, 21 Feb 2001 17:41:41 -0500 Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: <20010221233334.B26647@xs4all.nl>; from thomas@xs4all.net on Wed, Feb 21, 2001 at 11:33:34PM +0100 References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <3A9437AC.4B2C77E7@ActiveState.com> <14996.14545.932710.305181@w221.z064000254.bwi-md.dsl.cnc.net> <20010221233334.B26647@xs4all.nl> Message-ID: <20010221174141.B25792@ute.cnri.reston.va.us> On Wed, Feb 21, 2001 at 11:33:34PM +0100, Thomas Wouters wrote: >Besides from any real problems we'll see, the added wartiness (which is what >the hypothetical howls are all about) does really matter. What are we trying >to solve with nested scopes ? 
Anything other than the default-argument hack >wart ? Aren't we adding more warts to fix that one wart ? I wouldn't consider either nested scopes or the additional restrictions really warts. 'from...import *' is already somewhat frowned upon, and often people use exec in situations where something else would be a better solution (storing variable names in a dictionary instead of exec'ing 'varname=expr'). If we were starting from a clean slate, I'd say accepting nested scopes would be a no-brainer. Compatibility... ay, there's the rub! --amk From thomas at xs4all.net Wed Feb 21 23:47:22 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Wed, 21 Feb 2001 23:47:22 +0100 Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: <20010221174141.B25792@ute.cnri.reston.va.us>; from akuchlin@mems-exchange.org on Wed, Feb 21, 2001 at 05:41:41PM -0500 References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <3A9437AC.4B2C77E7@ActiveState.com> <14996.14545.932710.305181@w221.z064000254.bwi-md.dsl.cnc.net> <20010221233334.B26647@xs4all.nl> <20010221174141.B25792@ute.cnri.reston.va.us> Message-ID: <20010221234722.C26647@xs4all.nl> On Wed, Feb 21, 2001 at 05:41:41PM -0500, Andrew Kuchling wrote: > Compatibility... ay, there's the rub! If you include 'ways of thinking' in 'compatibility', I'll agree. Many people are used to being able to use exec/from-foo-import-*, and consider it part of Python's wonderful flexibility and straightforwardness (I know I do, and all my python-proficient and python-learning colleagues do.) -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From MarkH at ActiveState.com Wed Feb 21 23:55:34 2001 From: MarkH at ActiveState.com (Mark Hammond) Date: Thu, 22 Feb 2001 09:55:34 +1100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <20010221232938.O26620@xs4all.nl> Message-ID: [Thomas W] > appologize) to reduce the impact of the incompatibility. I do not believe > the ability to leave out the default-argument-hack (if you don't use > import-*/exec in the same function) is worth all that. Ironically, I _fixed_ my original problem by _adding_ a default-argument-hack. This meant my lambda no longer used a global name but a local one. Well, I think it ironic anyway :) For the record, the only reason I had to use exec in that case was because the "new" module is not capable creating a new method. Trying to compile a block of code with a "return" statement but no function decl (to create a code object suitable for a method) fails at compile time. Like-sands-through-the-hourglass ly, Mark. From pedroni at inf.ethz.ch Thu Feb 22 00:25:15 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Thu, 22 Feb 2001 00:25:15 +0100 (MET) Subject: [Python-Dev] again on nested scopes and Backwards Incompatibility Message-ID: <200102212325.AAA20597@core.inf.ethz.ch> Hi. This my last effort for today ;). [Thomas Wouters] > On Wed, Feb 21, 2001 at 05:41:41PM -0500, Andrew Kuchling wrote: > > > Compatibility... ay, there's the rub! > > If you include 'ways of thinking' in 'compatibility', I'll agree. Many > people are used to being able to use exec/from-foo-import-*, and consider it > part of Python's wonderful flexibility and straightforwardness (I know I do, > and all my python-proficient and python-learning colleagues do.) 
>

1) I'm convinced that in the long run both:

   - import *
   - exec without in

   should be deprecated, so we could start issuing warnings with 2.1 or
   2.2 and make them errors when people get annoyed by the warnings
   enough ;)  This has nothing to do with nested scopes.  So people
   have time to change their mind.

2) The actual implementation of nested scopes (with or without
   compatibility hacks) is based on the assumption that one can detect
   lexically scoped variables just as Python up to 2.0 was able to
   detect local vars (without the need for explicit declarations), and
   that this is pythonic and neat, so let's do it.

   But this thread, and the fact that with the implementation some old
   code is no longer valid or behaves in a different way, show that
   maybe (I say maybe) this assumption is not completely valid.  It is
   clear too that this difference between reality and theory does not
   have big predictable consequences; it's just annoying for some among
   us.  But a survey among users to detect the extent of this has
   started.  And from the theoretical (and maybe PR?) viewpoint the
   difference exists.

   On the other hand, the (potential) solution of using some kind of
   explicit declarations (which, I'm aware, opens some other subtle
   issues to discuss, but keeps old code working as it was) is a no-go,
   no-story.  Yes, it is not that pythonic...

Isn't it possible for everybody to be happy?  I'm wondering if we have
not transformed into a holy war a problem that offers at least some
space for a technical discussion.

regards, Samuele Pedroni.

PS: sorry for my abuse of 'we', given that I'm a Jython developer, not
a Python one, but it is already difficult enough...  I feel I'm missing
something about this group's dynamics.

From paulp at ActiveState.com  Thu Feb 22 00:40:12 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Wed, 21 Feb 2001 15:40:12 -0800
Subject: [Python-Dev] Backwards Incompatibility
References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <3A9437AC.4B2C77E7@ActiveState.com> <14996.14545.932710.305181@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <3A9451DC.143C5FCC@ActiveState.com>

Jeremy Hylton wrote:
>
>...
>
> Why didn't you say it was irresponsible then?  <0.5 wink>  If you're
> just repeating your earlier arguments, I apologize for the rhetoric
> :-).

I haven't followed this PEP at all.  I think the feature is neat and I
would like it.  But to the average person, this is a pretty esoteric
issue.

But I do think that we should have a general principle that we do not
knowingly break code without warning.  It doesn't matter what the
particular PEP is.  It doesn't matter whether I like it.

The reason I wrote the backwards compatibility PEP was not to restrict
change but to enable it.  If people trust us (they do not yet) then we
can discuss long-term migration paths that may break code, but they
will be comfortable that they will have plenty of opportunity to move
into the new world.  So we could decide to change the keyword "def" to
"define" and people would know that the changeover would take a couple
of years and they would be able to get from here to there.

-- 
Vote for Your Favorite Python & Perl Programming
Accomplishments in the first Active Awards!
http://www.ActiveState.com/Awards

From skip at mojam.com  Thu Feb 22 00:13:46 2001
From: skip at mojam.com (Skip Montanaro)
Date: Wed, 21 Feb 2001 17:13:46 -0600 (CST)
Subject: [Python-Dev] Those import related syntax errors again...
In-Reply-To: <14996.11789.246222.237752@w221.z064000254.bwi-md.dsl.cnc.net>
References: <019001c09bda$ffb6f4d0$e46940d5@hagrid> <14995.55347.92892.762336@w221.z064000254.bwi-md.dsl.cnc.net> <02a701c09c1b$40441e70$0900a8c0@SPIFF> <14995.60235.591304.213358@w221.z064000254.bwi-md.dsl.cnc.net> <14996.10912.667104.603750@beluga.mojam.com> <14996.11789.246222.237752@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <14996.19370.133024.802787@beluga.mojam.com>

Jeremy> The question, then, is whether some amount of incompatible
Jeremy> change is acceptable in the 2.1 release.

I think of 2.1 as a minor release.  Minor releases generally equate in
my mind with bug fixes, not significant functionality changes or
potential compatibility problems.  I think many other people feel the
same way.

Earlier this month I suggested that adopting a release numbering scheme
similar to that used for the Linux kernel would be appropriate.
Perhaps it's not so much the details of the numbering as the up-front
statement of something like, "version numbers like x.y where y is even
represent stable releases" or, "backwards incompatibility will only be
introduced when the major version number is incremented".  It's more
that there is a statement about stability vs. new features that serves
as a published commitment the user community can rely on.

After all the changes that made it into 2.0, I don't think anyone
expects to have to address compatibility problems with 2.1.

Skip

From greg at cosc.canterbury.ac.nz  Thu Feb 22 01:04:53 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 22 Feb 2001 13:04:53 +1300 (NZDT)
Subject: [Python-Dev] Those import related syntax errors again...
In-Reply-To: 
Message-ID: <200102220004.NAA01374@s454.cosc.canterbury.ac.nz>

> Trying to compile a
> block of code with a "return" statement but no function decl (to create a
> code object suitable for a method) fails at compile time.

Maybe you could add a dummy function header, compile that, and extract
the code object from the resulting function object?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,          | A citizen of NewZealandCorp, a       |
Christchurch, New Zealand          | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz      +--------------------------------------+

From guido at digicool.com  Thu Feb 22 01:11:07 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 21 Feb 2001 19:11:07 -0500
Subject: [Python-Dev] Those import related syntax errors again...
In-Reply-To: Your message of "Wed, 21 Feb 2001 19:01:05 +0100." <002301c09c30$46a89330$e46940d5@hagrid>
References: <200102211446.PAA07183@core.inf.ethz.ch><20010221095625.A29605@ute.cnri.reston.va.us><00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <002301c09c30$46a89330$e46940d5@hagrid>
Message-ID: <200102220011.TAA12030@cj20424-a.reston1.va.home.com>

> Is it time to shut down python-dev? (yes, I'm serious)

I've been out in meetings all day, and am just now checking my email.
I'm a bit surprised by this sudden uprising.  From the complaints so
far, I don't really believe it's so bad.

The embargo on not breaking code has never been absolute in my view.
I do want to minimize breakage, but in the end my goal is to make
people happy -- trying not to break code is only a means to that goal.
It so happens that nested scopes will make many people happy too (if
only because it allows references to surrounding locals from nested
lambdas).
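For instance (a minimal sketch, not from the original message; the
names are made up), the difference for a nested lambda looks like this:

    # today: the default-argument hack is needed to capture the local
    def make_adder_old(n):
        return lambda x, n=n: x + n

    # with nested scopes: the lambda simply sees the enclosing local n
    def make_adder_new(n):
        return lambda x: x + n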
I also don't mind as much breaking code that I consider ugly. I find import * inside a function very ugly (because I happen to know how much time it wastes). I find exec (without the ``in dict1, dict2'' clause) also pretty ugly, and usually being misused. I don't want to roll back nested scopes unless there's a lot more evidence that they are evil. Go through the PythonWare code base and look for code that would break -- and report back in the same style that Jeremy used. (Jeremy, it would help if you provided the tool you used for this analysis.) I remember you complained loudly about requiring list.append((x, y)) and socket.connect((host, port)) too -- but once you had fixed your code I didn't hear from you again, and I haven't had much feedback that this is a problem for the general population either. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Thu Feb 22 01:12:11 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 21 Feb 2001 19:12:11 -0500 Subject: [Python-Dev] RE: Update to PEP 232 In-Reply-To: Your message of "Wed, 21 Feb 2001 10:06:34 GMT." <000901c09bed$f861d750$f05aa8c0@lslp7o.int.lsl.co.uk> References: <000901c09bed$f861d750$f05aa8c0@lslp7o.int.lsl.co.uk> Message-ID: <200102220012.TAA12047@cj20424-a.reston1.va.home.com> > Small pedantry (there's another sort?) > > I note that: > > > - __doc__ is the only function attribute that currently has > > syntactic support for conveniently setting. It may be > > worthwhile to eventually enhance the language for supporting > > easy function attribute setting. Here are some syntaxes > > suggested by PEP reviewers: > [...elided to save space!...] > > It isn't currently clear if special syntax is necessary or > > desirable. > > has not been changed since the last version of the PEP. I suggest that > it be updated in two ways: > > 1. Clarify the final statement - I seem to have the impression (sorry, > can't find a message to back it up) that either the BDFL or Tim Peters > is very against anything other than the "simple" #f.a = 1# sort of > thing - unless I'm mischannelling (?) again. Agreed. > 2. Reference the thread/idea a little while back that ended with #def > f(a,b) having (publish=1)# - it's certainly no *worse* than the > proposals in the PEP! (Michael Hudson got as far as a patch, I think). Sure, reference it. It will never be added while I'm in charge though. --Guido van Rossum (home page: http://www.python.org/~guido/) From jeremy at alum.mit.edu Wed Feb 21 23:30:54 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 17:30:54 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: References: <20010221232938.O26620@xs4all.nl> Message-ID: <14996.16798.393875.480264@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "MH" == Mark Hammond writes: MH> [Thomas W] >> appologize) to reduce the impact of the incompatibility. I do not >> believe the ability to leave out the default-argument-hack (if >> you don't use import-*/exec in the same function) is worth all >> that. MH> Ironically, I _fixed_ my original problem by _adding_ a MH> default-argument-hack. This meant my lambda no longer used a MH> global name but a local one. MH> Well, I think it ironic anyway :) I think it's ironic, too! I laughed when I read your message. MH> For the record, the only reason I had to use exec in that case MH> was because the "new" module is not capable creating a new MH> method. 
Trying to compile a block of code with a "return" MH> statement but no function decl (to create a code object suitable MH> for a method) fails at compile time. For the record, I realize that there is no reason for the compiler to complain about the code you wrote. If exec supplies an explicit namespace, then everything is hunky-dory. Assuming Guido agrees, I'll fix this ASAP. Jeremy From jeremy at alum.mit.edu Wed Feb 21 23:32:59 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 21 Feb 2001 17:32:59 -0500 (EST) Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <14996.19370.133024.802787@beluga.mojam.com> References: <019001c09bda$ffb6f4d0$e46940d5@hagrid> <14995.55347.92892.762336@w221.z064000254.bwi-md.dsl.cnc.net> <02a701c09c1b$40441e70$0900a8c0@SPIFF> <14995.60235.591304.213358@w221.z064000254.bwi-md.dsl.cnc.net> <14996.10912.667104.603750@beluga.mojam.com> <14996.11789.246222.237752@w221.z064000254.bwi-md.dsl.cnc.net> <14996.19370.133024.802787@beluga.mojam.com> Message-ID: <14996.16923.805683.428420@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "SM" == Skip Montanaro writes: Jeremy> The question, then, is whether some amount of incompatible Jeremy> change is acceptable in the 2.1 release. SM> I think of 2.1 as a minor release. Minor releases generally SM> equate in my mind with bug fixes, not significant functionality SM> changes or potential compatibility problems. I think many other SM> people feel the same way. Fair enough. It sounds like you are concerned, on general grounds, about incompatible changes and the specific exec/import issues aren't any more or less important than the other compatibility issues. I don't think I agree with you, but I'll sit on it for a few days and see what real problem reports there are. thinking-there-will-be-lots-to-talk-about-at-the-conference-ly y'rs, Jeremy From tim.one at home.com Thu Feb 22 01:58:34 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 21 Feb 2001 19:58:34 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <002301c09c30$46a89330$e46940d5@hagrid> Message-ID: [/F] > Is it time to shut down python-dev? (yes, I'm serious) I can't imagine that it would be possible to have such a vigorous and focused debate about Python development in the absence of Python-Dev. That is, this is exactly the kind of thing for which Python-Dev is *most* needed! People disagreeing isn't exactly a new phenomenon ... From tim.one at home.com Thu Feb 22 02:02:37 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 21 Feb 2001 20:02:37 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <200102211854.TAA12664@core.inf.ethz.ch> Message-ID: BTW, are people similarly opposed to that comparisons can now raise exceptions? It's been mentioned a few times on c.l.py this week, but apparently not (yet) by people who bumped into it in practice. From guido at digicool.com Thu Feb 22 02:28:31 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 21 Feb 2001 20:28:31 -0500 Subject: [Python-Dev] Re: dl module In-Reply-To: Your message of "Wed, 21 Feb 2001 15:02:33 EST." References: Message-ID: <200102220128.UAA12546@cj20424-a.reston1.va.home.com> > On 10 Feb, GvR quoted and wrote: > >> Skip Montanaro writes: > >> > MAL> The same could be done for e.g. soundex ... 
> >> > >> Fred Drake wrote: > >> Given that Skip has published this module and that the C version can > >> always be retrieved from CVS if anyone really wants it, and that > >> soundex has been listed in the "Obsolete Modules" section in the > >> documentation for quite some time, this is probably a good time to > >> remove it from the source distribution. > > > >Yes, go ahead. > > Guido, did you mean go ahead and remove soundex, or the dl module, or > both? Soundex. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Thu Feb 22 02:30:37 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 21 Feb 2001 20:30:37 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: Your message of "Wed, 21 Feb 2001 21:17:56 +0100." <00cf01c09c43$60e360f0$e000a8c0@thomasnotebook> References: <200102211446.PAA07183@core.inf.ethz.ch><20010221095625.A29605@ute.cnri.reston.va.us><00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <002301c09c30$46a89330$e46940d5@hagrid> <20010221191317.A26647@xs4all.nl> <20010221143323.B1441@ute.cnri.reston.va.us> <00cf01c09c43$60e360f0$e000a8c0@thomasnotebook> Message-ID: <200102220130.UAA12562@cj20424-a.reston1.va.home.com> > I would consider the type/class split, making something > like ExtensionClass neccessary, much more annoying for > the advanced programmer. IMHO more efforts should go > into this issue _even before_ p3000. Yes, indeed. This will be on the agenda for Python 2.2. Digital Creations really wants PythonLabs to work on this issue! --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Thu Feb 22 02:36:29 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 21 Feb 2001 20:36:29 -0500 Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: Your message of "Wed, 21 Feb 2001 13:48:28 PST." <3A9437AC.4B2C77E7@ActiveState.com> References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <3A9437AC.4B2C77E7@ActiveState.com> Message-ID: <200102220136.UAA12628@cj20424-a.reston1.va.home.com> > We can have the discussion now, then. In my opinion it is irresponsible > to knowingly unleash backwards incompatibilities on the world with no > warning. If people think Python is unstable it will negatively impact > its growth much more than the delay of some esoteric features. Let me > put the ball back in your court: You should be talking, Mr. 8-bit-strings-should-always-be-considered- Latin-1. ;-) > Is the benefit provided by having nested scopes this year rather than > next year worth the pain of howls of outrage in Python-land. If we give > people a year to upgrade (with warning messages) they will (rightly) > grumble but not scream. But people *do* have a year's warning. Most people probably wait that much before they upgrade. (Half jokingly, half annoyed. :-) --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Thu Feb 22 02:42:11 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 21 Feb 2001 20:42:11 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: Your message of "Wed, 21 Feb 2001 23:29:38 +0100." 
<20010221232938.O26620@xs4all.nl> References: <14995.8522.253084.230222@beluga.mojam.com> <20010220222936.A2477@newcnri.cnri.reston.va.us> <20010221074710.E13911@xs4all.nl> <14995.55080.928806.56317@w221.z064000254.bwi-md.dsl.cnc.net> <20010221232938.O26620@xs4all.nl> Message-ID: <200102220142.UAA12670@cj20424-a.reston1.va.home.com> > On Wed, Feb 21, 2001 at 09:56:40AM -0500, Jeremy Hylton wrote: > > > A note of clarification seems important here: The restrictions are > > not being introduced to simplify the implementation. They're being > > introduced because there is no sensible meaning for code that uses > > import * and nested scopes with free variables. There are two > > possible meanings, each plausible and neither satisfying. > > I disagree. There are several ways to work around them, or the BDFL could > just make a decision on what it should mean. Since import * is already illegal according to the reference manual, that's an easy call: I pronounce that it's illegal. For b/w compatibility we'll try to allow it in as many situations as possible where it's not ambiguous. > I don't *just* object to the backwards incompatibility, but also to the > added complexity and the strange special cases, most of which were > introduced (at my urging, I'll readily admit and for which I should and do > appologize) to reduce the impact of the incompatibility. I do not believe > the ability to leave out the default-argument-hack (if you don't use > import-*/exec in the same function) is worth all that. The strange special cases should not remain a permanent wart in the language; rather, import * in functions should be considered deprecated. In 2.2 we should issue a warning for this in most cases. (Is there as much as a hassle with exec? IMO exec without an in-clause should also be deprecated.) --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Thu Feb 22 02:45:10 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 21 Feb 2001 20:45:10 -0500 Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: Your message of "Wed, 21 Feb 2001 23:47:22 +0100." <20010221234722.C26647@xs4all.nl> References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <3A9437AC.4B2C77E7@ActiveState.com> <14996.14545.932710.305181@w221.z064000254.bwi-md.dsl.cnc.net> <20010221233334.B26647@xs4all.nl> <20010221174141.B25792@ute.cnri.reston.va.us> <20010221234722.C26647@xs4all.nl> Message-ID: <200102220145.UAA12690@cj20424-a.reston1.va.home.com> > On Wed, Feb 21, 2001 at 05:41:41PM -0500, Andrew Kuchling wrote: > > > Compatibility... ay, there's the rub! > > If you include 'ways of thinking' in 'compatibility', I'll agree. Many > people are used to being able to use exec/from-foo-import-*, and consider it > part of Python's wonderful flexibility and straightforwardness (I know I do, > and all my python-proficient and python-learning colleagues do.) Actually, I've always considered 'exec' mostly one of those must-have- because-the-competition-has-it features. Language theorists love it. In practice, bare exec not that useful; a more restricted form (e.g. one that always requires the caller to explicitly pass in an environment) makes much more sense. As for import *, we all know that it's an abomination... 
--Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Thu Feb 22 02:46:35 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 21 Feb 2001 20:46:35 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: Your message of "Thu, 22 Feb 2001 09:55:34 +1100." References: Message-ID: <200102220146.UAA12705@cj20424-a.reston1.va.home.com> > For the record, the only reason I had to use exec in that case was because > the "new" module is not capable creating a new method. Trying to compile a > block of code with a "return" statement but no function decl (to create a > code object suitable for a method) fails at compile time. I don't understand. Methods do have a function declaration: class C: def meth(self): pass Or am I misunderstanding? --Guido van Rossum (home page: http://www.python.org/~guido/) From MarkH at ActiveState.com Thu Feb 22 03:02:28 2001 From: MarkH at ActiveState.com (Mark Hammond) Date: Thu, 22 Feb 2001 13:02:28 +1100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <200102220146.UAA12705@cj20424-a.reston1.va.home.com> Message-ID: [Guido] > I don't understand. Methods do have a function declaration: > > class C: > > def meth(self): > pass > > Or am I misunderstanding? The problem is I have a class object, and the source-code for the method body as a string, generated at runtime based on runtime info from the reflection capabilities of the system we are interfacing to. The simplest example is for method code of "return None". I dont know how to get a code object for this snippet so I can use the new module to get a new method object. Attempting to compile this string gives a syntax error. There was some discussion a few years ago that adding "function" as a "compile type" may be an option, but I never progressed it. So my solution is to create a larger string that includes the method declaration, like: """def foo(self): return None """ exec that, get the function object out of the exec'd namespace and inject it into the class. Mark. From guido at digicool.com Thu Feb 22 03:07:49 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 21 Feb 2001 21:07:49 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: Your message of "Thu, 22 Feb 2001 13:02:28 +1100." References: Message-ID: <200102220207.VAA12996@cj20424-a.reston1.va.home.com> > [Guido] > > > I don't understand. Methods do have a function declaration: > > > > class C: > > > > def meth(self): > > pass > > > > Or am I misunderstanding? [Mark] > The problem is I have a class object, and the source-code for the method > body as a string, generated at runtime based on runtime info from the > reflection capabilities of the system we are interfacing to. The simplest > example is for method code of "return None". > > I dont know how to get a code object for this snippet so I can use the new > module to get a new method object. Attempting to compile this string gives > a syntax error. There was some discussion a few years ago that adding > "function" as a "compile type" may be an option, but I never progressed it. > > So my solution is to create a larger string that includes the method > declaration, like: > > """def foo(self): > return None > """ > > exec that, get the function object out of the exec'd namespace and inject it > into the class. Aha, I see. That's how I would have done it too. 
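For concreteness, a minimal sketch of that approach (the names C and
foo are just illustrative, and it uses the 'exec ... in' form mentioned
below):

    class C:
        pass

    src = "def foo(self):\n    return None\n"
    ns = {}
    exec src in ns       # compile and run the def in an explicit namespace
    C.foo = ns['foo']    # inject the function; it becomes a method of C

    print C().foo()      # prints None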
I admit that it's attractive to exec this in the local namespace and then simply use the local variable 'foo', but that doesn't quite work, so 'exec...in...' is the right thing to do anyway. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Thu Feb 22 03:11:51 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 21 Feb 2001 21:11:51 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: Your message of "Wed, 21 Feb 2001 19:54:06 +0100." <200102211854.TAA12664@core.inf.ethz.ch> References: <200102211854.TAA12664@core.inf.ethz.ch> Message-ID: <200102220211.VAA13014@cj20424-a.reston1.va.home.com> > I should admit that I like the idea of nested scopes, because I like functional > programming style, but I don't know whether this returning 3 is nice ;)? > > def f(): > def g(): > return y > # put as many innoncent code lines as you like > y=3 > return g() This is a red herring; I don't see how this differs from the confusion in def f(): print y # lots of code y = 3 and I don't see how nested scopes add a new twist to this known issue. > It really seems that there's not been enough discussion about the change, Maybe, > and I think that is also ok to honestely be worried about what user > will feel about this? (and we can only think about this beacuse > the feedback is not that much) FUD. > Will this code breakage "scare" them and slow down migration to new versions > of python? They are already afraid of going 2.0(?). It is maybe just PR matter > but ... More FUD. > The *point* is that we are not going from version 0.8 to version 0.9 > of our toy research lisp dialect, passing from dynamic scoping to lexical > scoping. (Yes, I think, that changing semantic behind the scene is not > a polite move.) Well, I'm actually glad to hear this -- Python now has such a large user base that language changes are deemed impractical. > We really need the BDFL proposing the right thing. We'll discuss this more at the PythonLabs group meeting. For now, I prefer to move forward with nested scopes, breaking code and all. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Thu Feb 22 03:24:31 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 21 Feb 2001 21:24:31 -0500 Subject: [Python-Dev] Strange import behaviour, recently introduced In-Reply-To: Your message of "Wed, 21 Feb 2001 17:39:09 +0100." <036b01c09c24$d0aa20a0$e000a8c0@thomasnotebook> References: <20010221150634.AB6ED371690@snelboot.oratrix.nl> <036b01c09c24$d0aa20a0$e000a8c0@thomasnotebook> Message-ID: <200102220224.VAA13210@cj20424-a.reston1.va.home.com> > Jack Jansen wrote: > > This week I noticed that these resource imports have suddenly > > become very very slow. Whereas startup time of my application used > > to be around 2 seconds (where the non-frozen version took 6 > > seconds) it now takes almost 20 times as long. The non-frozen > > version still takes 6 seconds. [Thomas Heller] > The most recent version calls PyImport_ImportModuleEx() for > '__builtin__' for every import of __builtin__ without caching the > result in a static variable. > > Can this be the cause? Would this help? 
*** import.c 2001/02/20 21:43:24 2.162 --- import.c 2001/02/22 02:24:55 *************** *** 1873,1878 **** --- 1873,1879 ---- { static PyObject *silly_list = NULL; static PyObject *builtins_str = NULL; + static PyObject *builtin_str = NULL; static PyObject *import_str = NULL; PyObject *globals = NULL; PyObject *import = NULL; *************** *** 1887,1892 **** --- 1888,1896 ---- builtins_str = PyString_InternFromString("__builtins__"); if (builtins_str == NULL) return NULL; + builtin_str = PyString_InternFromString("__builtin__"); + if (builtin_str == NULL) + return NULL; silly_list = Py_BuildValue("[s]", "__doc__"); if (silly_list == NULL) return NULL; *************** *** 1902,1913 **** } else { /* No globals -- use standard builtins, and fake globals */ PyErr_Clear(); ! builtins = PyImport_ImportModuleEx("__builtin__", ! NULL, NULL, NULL); if (builtins == NULL) return NULL; globals = Py_BuildValue("{OO}", builtins_str, builtins); if (globals == NULL) goto err; --- 1906,1918 ---- } else { /* No globals -- use standard builtins, and fake globals */ + PyInterpreterState *interp = PyThreadState_Get()->interp; PyErr_Clear(); ! builtins = PyDict_GetItem(interp->modules, builtin_str); if (builtins == NULL) return NULL; + Py_INCREF(builtins); globals = Py_BuildValue("{OO}", builtins_str, builtins); if (globals == NULL) goto err; --Guido van Rossum (home page: http://www.python.org/~guido/) From thomas at xs4all.net Thu Feb 22 09:00:47 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Thu, 22 Feb 2001 09:00:47 +0100 Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: <200102220145.UAA12690@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Wed, Feb 21, 2001 at 08:45:10PM -0500 References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <3A9437AC.4B2C77E7@ActiveState.com> <14996.14545.932710.305181@w221.z064000254.bwi-md.dsl.cnc.net> <20010221233334.B26647@xs4all.nl> <20010221174141.B25792@ute.cnri.reston.va.us> <20010221234722.C26647@xs4all.nl> <200102220145.UAA12690@cj20424-a.reston1.va.home.com> Message-ID: <20010222090047.P26620@xs4all.nl> On Wed, Feb 21, 2001 at 08:45:10PM -0500, Guido van Rossum wrote: > > On Wed, Feb 21, 2001 at 05:41:41PM -0500, Andrew Kuchling wrote: > Actually, I've always considered 'exec' mostly one of those must-have- > because-the-competition-has-it features. Language theorists love it. > In practice, bare exec not that useful; a more restricted form > (e.g. one that always requires the caller to explicitly pass in an > environment) makes much more sense. > As for import *, we all know that it's an abomination... Okay, I can live with that, but can we please have at least one release between "these are cool features and we use them in the std. library ourselves" and "no no you bad boy!" ? Or fork Python 3.0, move nested scopes to that, and release it parallel to 2.1 ? -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From tony at lsl.co.uk Thu Feb 22 10:02:51 2001 From: tony at lsl.co.uk (Tony J Ibbs (Tibs)) Date: Thu, 22 Feb 2001 09:02:51 -0000 Subject: [Python-Dev] RE: Update to PEP 232 In-Reply-To: <200102220012.TAA12047@cj20424-a.reston1.va.home.com> Message-ID: <001b01c09cae$3c3fa360$f05aa8c0@lslp7o.int.lsl.co.uk> Guido responded to my points thus: > > 1. 
Clarify the final statement - I seem to have the > > impression (sorry, can't find a message to back it up) > > that either the BDFL or Tim Peters is very against > > anything other than the "simple" #f.a = 1# sort of > > thing - unless I'm mischannelling (?) again. > > Agreed. That's a relief - I obviously had "heard" right! > > 2. Reference the thread/idea a little while back that ended > > with #def > f(a,b) having (publish=1)# ... > > Sure, reference it. It will never be added while I'm in charge > though. Well, I'd kind of assumed that, given my "memory" of the first point. But of the schemes that won't be adopted, that's the one *I* preferred. (my own sense of "locality" means that I would prefer to be placing function attributes near the declaration of the function, especially given my penchant for long docstrings which move the end of the function off-screen. But then I haven't *used* them yet, and I assume this sort of point has been taken into account. And anyway I definitely prefer your sense of language design to mine). Keep on trying not to get run over by buses, and thanks again for the neat language, Tibs -- Tony J Ibbs (Tibs) http://www.tibsnjoan.co.uk/ "Bounce with the bunny. Strut with the duck. Spin with the chickens now - CLUCK CLUCK CLUCK!" BARNYARD DANCE! by Sandra Boynton My views! Mine! Mine! (Unless Laser-Scan ask nicely to borrow them.) From fredrik at effbot.org Thu Feb 22 11:18:21 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Thu, 22 Feb 2001 11:18:21 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: Message-ID: <029c01c09cbb$a3e7f8c0$e46940d5@hagrid> Tim wrote: > [/F] > > Is it time to shut down python-dev? (yes, I'm serious) > > I can't imagine that it would be possible to have such a vigorous and > focused debate about Python development in the absence of Python-Dev. If a debate doesn't lead anywhere, it's just a waste of time. Code monkey contributions can be handled via sourceforge, and general whining works just as well on comp.lang.python. ::: Donning my devil's advocate suite, here are some recent observations: - Important decisions are made on internal PythonLabs meetings (unit testing, the scope issue, etc), not by an organized python- dev process. Does anyone care about -1 and +1's anymore? - The PEP process isn't working ("I updated the PEP and checked in the code", "but *that* PEP doesn't apply to *me*", etc). - Impressive hacks are more important than concerns from people who make their living selling Python technology (rather than a specific application). Codewise, nested scopes are amazing. From a marketing perspective, it's a disaster. (even more absurd allegations snipped) Am I entirely wrong? Cheers /F From fredrik at effbot.org Thu Feb 22 10:48:49 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Thu, 22 Feb 2001 10:48:49 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: Message-ID: <029901c09cbb$a31cb980$e46940d5@hagrid> > BTW, are people similarly opposed to that comparisons can now raise > exceptions? It's been mentioned a few times on c.l.py this week, but > apparently not (yet) by people who bumped into it in practice. but that's not a new thing in 2.1, is it? Python 1.5.2 (#0, May 9 2000, 14:04:03) [MSC 32 bit (Intel)] on win32 Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam >>> class spam: ... def __cmp__(self, other): ... raise "Hi tim!" ... >>> a = [spam(), spam(), spam()] >>> a.sort() Traceback (innermost last): File " ", line 1, in ? 
File " ", line 3, in __cmp__ Hi tim! Cheers /F From fredrik at effbot.org Thu Feb 22 11:38:45 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Thu, 22 Feb 2001 11:38:45 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: <200102211854.TAA12664@core.inf.ethz.ch> <200102220211.VAA13014@cj20424-a.reston1.va.home.com> Message-ID: <029d01c09cbb$a44fe250$e46940d5@hagrid> Guido van Rossum wrote: > > and I think that is also ok to honestely be worried about what user > > will feel about this? (and we can only think about this beacuse > > the feedback is not that much) > > FUD. > > > Will this code breakage "scare" them and slow down migration to new versions > > of python? They are already afraid of going 2.0(?). It is maybe just PR matter > > but ... > > More FUD. but FUD is what we have to deal with on the market. I know from my 2.0 experiences that lots of people are concerned about even small changes (more ways to do it isn't always what a large organization wants). Pointing out that "hey, it's a major release" or "you can ignore the new features, and pretend it's just a better 1.5.2" helps a little bit, but the scepticism is still there. And here we have something that breaks code, breaks tools, breaks training material, and breaks books. "Everything you know about Python scoping is wrong. Get over it". The more I think about it, the less I think it belongs in any version before 3.0. Cheers /F From fredrik at effbot.org Thu Feb 22 11:40:29 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Thu, 22 Feb 2001 11:40:29 +0100 Subject: [Python-Dev] Backwards Incompatibility References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <3A9437AC.4B2C77E7@ActiveState.com> <14996.14545.932710.305181@w221.z064000254.bwi-md.dsl.cnc.net> <20010221233334.B26647@xs4all.nl> <20010221174141.B25792@ute.cnri.reston.va.us> <20010221234722.C26647@xs4all.nl> <200102220145.UAA12690@cj20424-a.reston1.va.home.com> <20010222090047.P26620@xs4all.nl> Message-ID: <02b201c09cbc$2a266d40$e46940d5@hagrid> Thomas wrote: > Okay, I can live with that, but can we please have at least one release > between "these are cool features and we use them in the std. library > ourselves" and "no no you bad boy!" ? Or fork Python 3.0, move nested > scopes to that, and release it parallel to 2.1 ? hey, that would mean that we can once again release two versions on the same day! (or why not three: 1.6.1, 2.1, and 3.0! ;-) Cheers /F From mal at lemburg.com Thu Feb 22 12:21:33 2001 From: mal at lemburg.com (M.-A. Lemburg) Date: Thu, 22 Feb 2001 12:21:33 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: <029c01c09cbb$a3e7f8c0$e46940d5@hagrid> Message-ID: <3A94F63D.25FF8595@lemburg.com> Fredrik Lundh wrote: > > Tim wrote: > > > [/F] > > > Is it time to shut down python-dev? (yes, I'm serious) > > > > I can't imagine that it would be possible to have such a vigorous and > > focused debate about Python development in the absence of Python-Dev. > > If a debate doesn't lead anywhere, it's just a waste of time. > > Code monkey contributions can be handled via sourceforge, > and general whining works just as well on comp.lang.python. 
Na, Fredrik, we wouldn't want to lose our nice little chat room -- it's way too much fun around here :-) > ::: > > Donning my devil's advocate suite, here are some recent observations: > > - Important decisions are made on internal PythonLabs meetings > (unit testing, the scope issue, etc), not by an organized python- > dev process. Does anyone care about -1 and +1's anymore? Well, being one of the first opponents of nested scopes (nobody else seemed to care back then...) and seeing how many of those other obscure PEPs made their way into the core, I have similar feelings. Still, I see the voting system as being a democratic method of reaching consensus: if there only one -1 and half a dozen +1s then I am overruled. > - The PEP process isn't working ("I updated the PEP and checked > in the code", "but *that* PEP doesn't apply to *me*", etc). Aren't PEPs meant to store information gathered in ongoing discussions rather than being an official statement of consent ? > - Impressive hacks are more important than concerns from people > who make their living selling Python technology (rather than a > specific application). Codewise, nested scopes are amazing. > From a marketing perspective, it's a disaster. Agreed and I have never understood why getting lambdas to work without keyword hacks is motivation enough to break code in all kinds of places. The nested scopes thingie started out as simple idea, but has in time grown so many special cases that I think the idea has already proven all by itself that it is the wrong approach to the problem (if there ever was a problem -- lambdas are certainly not newbie style gadgets). -- Marc-Andre Lemburg ______________________________________________________________________ Company & Consulting: http://www.egenix.com/ Python Pages: http://www.lemburg.com/python/ From guido at digicool.com Thu Feb 22 14:13:00 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 22 Feb 2001 08:13:00 -0500 Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: Your message of "Thu, 22 Feb 2001 09:00:47 +0100." <20010222090047.P26620@xs4all.nl> References: <200102211446.PAA07183@core.inf.ethz.ch> <20010221095625.A29605@ute.cnri.reston.va.us> <00ca01c09c28$70ea44c0$e46940d5@hagrid> <14995.62634.894979.83805@w221.z064000254.bwi-md.dsl.cnc.net> <3A9437AC.4B2C77E7@ActiveState.com> <14996.14545.932710.305181@w221.z064000254.bwi-md.dsl.cnc.net> <20010221233334.B26647@xs4all.nl> <20010221174141.B25792@ute.cnri.reston.va.us> <20010221234722.C26647@xs4all.nl> <200102220145.UAA12690@cj20424-a.reston1.va.home.com> <20010222090047.P26620@xs4all.nl> Message-ID: <200102221313.IAA15384@cj20424-a.reston1.va.home.com> > > As for import *, we all know that it's an abomination... > > Okay, I can live with that, but can we please have at least one release > between "these are cool features and we use them in the std. library > ourselves" and "no no you bad boy!" ? Or fork Python 3.0, move nested scopes > to that, and release it parallel to 2.1 ? Of course. We're not making it illegal yet, except in some highly specific circumstances where IMO the goal justifies the means. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Thu Feb 22 14:15:36 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 22 Feb 2001 08:15:36 -0500 Subject: [Python-Dev] again on nested scopes and Backwards Incompatibility In-Reply-To: Your message of "Thu, 22 Feb 2001 00:25:15 +0100." 
<200102212325.AAA20597@core.inf.ethz.ch> References: <200102212325.AAA20597@core.inf.ethz.ch> Message-ID: <200102221315.IAA15405@cj20424-a.reston1.va.home.com> > PS: sorry for my abuse of we given that I'm jython devel not a python one, > but it is already difficult so... I feel I'm missing something about > this group dynamics. Hey Samuele, don't worry about the group dynamics. You're doing fine, and the group will survive. We've had heated debates before, and we've always come out for the better. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Thu Feb 22 14:20:01 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 22 Feb 2001 08:20:01 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: Your message of "Wed, 21 Feb 2001 20:02:37 EST." References: Message-ID: <200102221320.IAA15469@cj20424-a.reston1.va.home.com> > BTW, are people similarly opposed to that comparisons can now raise > exceptions? It's been mentioned a few times on c.l.py this week, but > apparently not (yet) by people who bumped into it in practice. That's not exactly news though, is it? Comparisons have been raising exceptions since, oh, Python 1.4 at least. --Guido van Rossum (home page: http://www.python.org/~guido/) From pedroni at inf.ethz.ch Thu Feb 22 14:22:25 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Thu, 22 Feb 2001 14:22:25 +0100 (MET) Subject: [Python-Dev] Those import related syntax errors again... Message-ID: <200102221322.OAA07627@core.inf.ethz.ch> Hi. I have learned that I should not play diplomacy between people that make money out of software. I partecipated to the discussion for two reasons: - I want to avoid an ugly to implement solution (I'm the guy that should code nested scopes in jython) - I got annoyed by Jeremy using his "position" and (your) BDFL decisions and the fact that code is already in, in order to avoid to be completely intellectually honest wrt to his creature. (But CLEARLY this was just my feeling, and getting annoyed is a feeling too) > > > I should admit that I like the idea of nested scopes, because I like functional > > programming style, but I don't know whether this returning 3 is nice ;)? > > > > def f(): > > def g(): > > return y > > # put as many innoncent code lines as you like > > y=3 > > return g() > This works. > This is a red herring; I don't see how this differs from the confusion > in > > def f(): > print y > # lots of code > y = 3 > > and I don't see how nested scopes add a new twist to this known issue. > This raises an error (at least at runtime). But yes it is just matter of taste and readability, mostly personal stuff. And on the long run maybe the second should raise a compile-time error (your choice). > > and I think that is also ok to honestely be worried about what user > > will feel about this? (and we can only think about this beacuse > > the feedback is not that much) > > FUD. > > > Will this code breakage "scare" them and slow down migration to new versions > > of python? They are already afraid of going 2.0(?). It is maybe just PR matter > > but ... > > More FUD. > Hey, I don't make money out of python or jython. I not invoked FUD, I was just pointing out what - I thought - was behind the discussion. FUD is already among us but you and the others make money with python, this is not the case for me. > > The *point* is that we are not going from version 0.8 to version 0.9 > > of our toy research lisp dialect, passing from dynamic scoping to lexical > > scoping. 
(Yes, I think, that changing semantic behind the scene is not > > a polite move.) > > Well, I'm actually glad to hear this -- Python now has such a large > user base that language changes are deemed impractical. > I'm just a newbie, I always read in books and e-articles: "python is a simple, elegant, consistent language, developed (slowly) with extremal care". It's all about being intellectually honest (yes this is my personal holy war): e.g. [GvR] > > I would consider the type/class split, making something > > like ExtensionClass neccessary, much more annoying for > > the advanced programmer. IMHO more efforts should go > > into this issue _even before_ p3000. > > Yes, indeed. This will be on the agenda for Python 2.2. Digital > Creations really wants PythonLabs to work on this issue! this is an honest statement. Things has changed (people are getting aware of this). With nested scope there were two possibilities: given the code: (I) y=1 def f(): y=666 def g(): return y one could go the way we are going and breaks this unless people fix it (II) y=1 def f(): y=666 def g(): global y return y or need some explicit syntax for the new behaviour: (III) y=1 def f(): nest y y=666 def g(): return y I agree designing solution (III) could be not simpler, and on the long run is just inelegant lossage (I can agree with this) up to other orthogonal issues (see above). Python is not closed source, it's your language, your user-base and you make money indirectly out of it: you are the BDFL and you can choose (if you would make money directly out of python maybe you must choose (III) or you are MS or Sun...) But I think it's clear that you should accept people (for their biz reason) saying "please can we go slower". And you can reply FUD... regards, Samuele Pedroni. PS: Yes I will not play this anymore. Lesson learned ;) From guido at digicool.com Thu Feb 22 14:28:27 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 22 Feb 2001 08:28:27 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: Your message of "Wed, 21 Feb 2001 17:13:46 CST." <14996.19370.133024.802787@beluga.mojam.com> References: <019001c09bda$ffb6f4d0$e46940d5@hagrid> <14995.55347.92892.762336@w221.z064000254.bwi-md.dsl.cnc.net> <02a701c09c1b$40441e70$0900a8c0@SPIFF> <14995.60235.591304.213358@w221.z064000254.bwi-md.dsl.cnc.net> <14996.10912.667104.603750@beluga.mojam.com> <14996.11789.246222.237752@w221.z064000254.bwi-md.dsl.cnc.net> <14996.19370.133024.802787@beluga.mojam.com> Message-ID: <200102221328.IAA15503@cj20424-a.reston1.va.home.com> > Jeremy> The question, then, is whether some amount of incompatible > Jeremy> change is acceptable in the 2.1 release. > > I think of 2.1 as a minor release. Minor releases generally equate in my > mind with bug fixes, not significant functionality changes or potential > compatibility problems. I think many other people feel the same way. Hm, I disagree. Remember, back in the days of Python 1.x, we introduced new stuff even with micro releases (1.5.2 had a lot of stuff that 1.5.1 didn't). My "feel" for Python version numbers these days is that the major number only needs to be bumped for very serious reasons. We switched to 2.0 mostly for PR reasons, and I hope we can stay at 2.x for a while. Pure bugfix releases will have a 3rd numbering level; in fact there will eventually be a 2.0.1 release that fixes bugs only (including the GPL incompatibility bug in the license!). 2.x versions can introduce new things. 
We'll do our best to keep old code from breaking unnecessarily, but I don't want our success to stand in the way of progress, and I will allow some things to break occasionally if it serves a valid purpose. You may consider this a break with tradition -- so be it. If 2.1 really breaks too much code, we will fix the policy for 2.2, and do our darndest to fix the code in 2.1.1. > Earlier this month I suggested that adopting a release numbering scheme > similar to that used for the Linux kernel would be appropriate. Please no! Unless you make a living hacking Linux kernels, it's too hard to remember which is odd and which is even, because it's too arbitrary. > Perhaps it's not so much the details of the numbering as the > up-front statement of something like, "version numbers like x.y > where y is even represent stable releases" or, "backwards > incompatibility will only be introduced when the major version > number is incremented". It's more that there is a statement about > stability vs new features that serves as a published committment the > user community can rely on. After all the changes that made it into > 2.0, I don't think anyone to have to address compatibility problems > with 2.1. I don't want to slide into version number inflation. There's not enough new in 2.1 to call it 3.0. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Thu Feb 22 14:51:03 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 22 Feb 2001 08:51:03 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: Your message of "Thu, 22 Feb 2001 11:18:21 +0100." <029c01c09cbb$a3e7f8c0$e46940d5@hagrid> References: <029c01c09cbb$a3e7f8c0$e46940d5@hagrid> Message-ID: <200102221351.IAA15568@cj20424-a.reston1.va.home.com> > Donning my devil's advocate suite, here are some recent observations: > > - Important decisions are made on internal PythonLabs meetings > (unit testing, the scope issue, etc), not by an organized python- > dev process. Does anyone care about -1 and +1's anymore? Python-dev is as organized as its participants want it to be. It appeared that very few people (apart from you) were interested in unit testing, so we looked elsewhere. We found that others inside Digital Creations had lots of experience with PyUnit and really liked it. Without arguments, +1 and -1's indeed don't have that much weight. With the right argument, a single +1 or -1 can be sufficient. Python is (still) not a democracy. > - The PEP process isn't working ("I updated the PEP and checked > in the code", "but *that* PEP doesn't apply to *me*", etc). I wouldn't say it isn't working. I believe it's very helpful to have a working document checked in somewhere to augment the discussion, and the PEPs make progress possible where in the past we went around in circles in the list without ever coming to a conclusion. Forcing the proposer of a new feature to write a PEP is a good way to think through more of the consequences of a new idea. Referring to a PEP when arguments are repeated can cut short discussion. Note that the PEP work flow document (PEP 1) explicitly states that the BDFL has the final word. But of course sometimes the realities of software development catch up with us -- we can't possibly hope to do all design ahead of all implementation, and during testing we may discover important new facts that must affect the design. 
> - Impressive hacks are more important than concerns from people > who make their living selling Python technology (rather than a > specific application). Codewise, nested scopes are amazing. > From a marketing perspective, it's a disaster. Aha, now we're talking. Python is growing up, and more and more people are making money by supporting it. Obviously, businesspeople have to be more conservative than software developers. But do you *really* think that breaking the occasional exec-without-in-clause or from-import-* will affect a large enough portion of the user population to make a difference? People with a lot at stake tend to be slow in upgrading anyway. So we're releasing 2.1 mostly for the bleeding edge consumers -- e.g. Paul Barret recently announced that his institute is upgrading to 2.0 and doesn't plan to switch to 2.1 any time soon. That's fine with me. Hey, here's an idea. We could add the warning API to 2.0.1 (it's backwards compatible AFAIK), and you can release PY201 with warnings added for things that your customers will need to change before they switch to PY21. --Guido van Rossum (home page: http://www.python.org/~guido/) From fredrik at effbot.org Thu Feb 22 15:55:33 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Thu, 22 Feb 2001 15:55:33 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: <019001c09bda$ffb6f4d0$e46940d5@hagrid> <14995.55347.92892.762336@w221.z064000254.bwi-md.dsl.cnc.net> <02a701c09c1b$40441e70$0900a8c0@SPIFF> <14995.60235.591304.213358@w221.z064000254.bwi-md.dsl.cnc.net> <14996.10912.667104.603750@beluga.mojam.com> <14996.11789.246222.237752@w221.z064000254.bwi-md.dsl.cnc.net> <14996.19370.133024.802787@beluga.mojam.com> <200102221328.IAA15503@cj20424-a.reston1.va.home.com> Message-ID: <04bb01c09cdf$85152750$e46940d5@hagrid> Guido wrote: > Hm, I disagree. Remember, back in the days of Python 1.x, we > introduced new stuff even with micro releases (1.5.2 had a lot of > stuff that 1.5.1 didn't). Last year, we upgraded a complex system from 1.2 to 1.5.2. Two modules broke; one didn't expect exceptions to be instances, and one messed up under the improved module cleanup model. We recently upgraded another major system from 1.5.2 to 2.0. It was a much larger undertaking; about 50 modules were affected. And six months after 2.0, we'll end up with yet another incompatible version... As a result, we end up with a lot more versions in active use, more support overhead, maintenance hell for extension writers (tried shipping a binary extension lately?), training headaches ("it works this way in 1.5.2 and 2.0 but this way in 2.1, but this works this way in 1.5.2 but this way in 2.0 and 2.1, and this works..."), and all our base are belong to cats. > 2.x versions can introduce new things. We'll do our best to keep > old code from breaking unnecessarily, but I don't want our success > to stand in the way of progress, and I will allow some things to > break occasionally if it serves a valid purpose. But nested scopes breaks everything: books (2.1 will appear at about the same time as the first batch of 2.0 books), training materials, gurus, tools, and as we've seen, some code. > I don't want to slide into version number inflation. There's not > enough new in 2.1 to call it 3.0. Besides nested scopes, that is. I'm just an FL, but I'd leave them out of a release that follows only 6 months after a major release, no matter what version number we're talking about. 
Leave the new compiler in, and use it to warn about import/exec (can it detect shadowing too?), but don't make the switch until everyone's ready. Cheers /F From nas at arctrix.com Thu Feb 22 16:14:37 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Thu, 22 Feb 2001 07:14:37 -0800 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <200102221351.IAA15568@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Thu, Feb 22, 2001 at 08:51:03AM -0500 References: <029c01c09cbb$a3e7f8c0$e46940d5@hagrid> <200102221351.IAA15568@cj20424-a.reston1.va.home.com> Message-ID: <20010222071437.A21075@glacier.fnational.com> On Thu, Feb 22, 2001 at 08:51:03AM -0500, Guido van Rossum wrote: > Hey, here's an idea. We could add the warning API to 2.0.1 (it's > backwards compatible AFAIK), and you can release PY201 with warnings > added for things that your customers will need to change before they > switch to PY21. This is a wonderful idea. Neil From thomas at xs4all.net Thu Feb 22 16:27:25 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Thu, 22 Feb 2001 16:27:25 +0100 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <200102221351.IAA15568@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Thu, Feb 22, 2001 at 08:51:03AM -0500 References: <029c01c09cbb$a3e7f8c0$e46940d5@hagrid> <200102221351.IAA15568@cj20424-a.reston1.va.home.com> Message-ID: <20010222162725.A7486@xs4all.nl> On Thu, Feb 22, 2001 at 08:51:03AM -0500, Guido van Rossum wrote: > Hey, here's an idea. We could add the warning API to 2.0.1 (it's > backwards compatible AFAIK), and you can release PY201 with warnings > added for things that your customers will need to change before they > switch to PY21. Definately +1 on that. While on the subject: will all of 'from module import *' be deprecated, even at module level ? How should code like Mailman's mm_cfg.py/Defaults.py construct be rewritten to provide similar functionality ? Much as I dislike 'from module import *', it really does have its uses. -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From pedroni at inf.ethz.ch Thu Feb 22 17:57:44 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Thu, 22 Feb 2001 17:57:44 +0100 (MET) Subject: [Python-Dev] a doc bug Message-ID: <200102221657.RAA13265@core.inf.ethz.ch> I don't know if someone was still aware of this but the tutorial in the development version of the doc still refers to the old scoping rules and refers to the old hack trick: http://python.sourceforge.net/devel-docs/tut/node6.html#SECTION006740000000000000000 Something to fix, in the case. regards. From loewis at informatik.hu-berlin.de Thu Feb 22 18:57:49 2001 From: loewis at informatik.hu-berlin.de (Martin von Loewis) Date: Thu, 22 Feb 2001 18:57:49 +0100 (MET) Subject: [Python-Dev] compile leaks memory. lots of memory. Message-ID: <200102221757.SAA17087@pandora> > It would be helpful to get some analysis on this known problem > before the beta release. It looks like there is a leak of symtable entries. 
In particular, symtable_enter_scope has if (st->st_cur) { prev = st->st_cur; if (PyList_Append(st->st_stack, (PyObject *)st->st_cur) < 0) { Py_DECREF(st->st_cur); st->st_errors++; return; } } st->st_cur = (PySymtableEntryObject *)\ PySymtableEntry_New(st, name, type, lineno); if (strcmp(name, TOP) == 0) st->st_global = st->st_cur->ste_symbols; if (prev) if (PyList_Append(prev->ste_children, (PyObject *)st->st_cur) < 0) st->st_errors++; First, it seems that Py_XDECREF(prev); is missing. That alone does not fix the leak, though, since prev is always null in the test case. The real problem comes from st_cur never being released, AFAICT. There is a DECREF in symtable_exit_scope, but that function is not called in the test case - symtable_enter_scope is called. For symmetry reasons, it appears that there should be a call to symtable_exit_scope of the global scope somewhere (which apparently is build in symtable_build). I can't figure how what the correct place for that call would be, though. Regards, Martin From guido at digicool.com Thu Feb 22 21:46:03 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 22 Feb 2001 15:46:03 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: Your message of "Thu, 22 Feb 2001 16:27:25 +0100." <20010222162725.A7486@xs4all.nl> References: <029c01c09cbb$a3e7f8c0$e46940d5@hagrid> <200102221351.IAA15568@cj20424-a.reston1.va.home.com> <20010222162725.A7486@xs4all.nl> Message-ID: <200102222046.PAA16702@cj20424-a.reston1.va.home.com> > On Thu, Feb 22, 2001 at 08:51:03AM -0500, Guido van Rossum wrote: > > > Hey, here's an idea. We could add the warning API to 2.0.1 (it's > > backwards compatible AFAIK), and you can release PY201 with warnings > > added for things that your customers will need to change before they > > switch to PY21. > > Definately +1 on that. Hold on. Jeremy has an announcement to make. But he's probably still struggling home -- about 3-4 inches of snow (so far) were dumped on the DC area this afternoon. > While on the subject: will all of 'from module import *' be deprecated, even > at module level ? No, not at the module level. (There it is only frowned upon. :-) > How should code like Mailman's mm_cfg.py/Defaults.py > construct be rewritten to provide similar functionality ? Much as I dislike > 'from module import *', it really does have its uses. I have no idea what mm_cfg.py/Defaults.py is, but yes, import * has its uses! --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Thu Feb 22 22:01:02 2001 From: tim.one at home.com (Tim Peters) Date: Thu, 22 Feb 2001 16:01:02 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <029901c09cbb$a31cb980$e46940d5@hagrid> Message-ID: [tim] > BTW, are people similarly opposed to that comparisons can now raise > exceptions? It's been mentioned a few times on c.l.py this week, but > apparently not (yet) by people who bumped into it in practice. [/F] > but that's not a new thing in 2.1, is it? No, but each release raises cmp exceptions in cases it didn't the release before. If we were dead serious about "no backward incompatibility ever, no way no how", I'd expect arguments just as intense about that. So I conclude we're not dead serious about that. Which is good! But in a world without absolutes, there are no killer arguments either. From barry at digicool.com Thu Feb 22 22:24:32 2001 From: barry at digicool.com (Barry A. 
Warsaw) Date: Thu, 22 Feb 2001 16:24:32 -0500 Subject: [Python-Dev] Those import related syntax errors again... References: <029c01c09cbb$a3e7f8c0$e46940d5@hagrid> <200102221351.IAA15568@cj20424-a.reston1.va.home.com> <20010222162725.A7486@xs4all.nl> <200102222046.PAA16702@cj20424-a.reston1.va.home.com> Message-ID: <14997.33680.580927.514329@anthem.wooz.org> >>>>> "GvR" == Guido van Rossum writes: >> How should code like Mailman's mm_cfg.py/Defaults.py construct >> be rewritten to provide similar functionality ? Much as I >> dislike 'from module import *', it really does have its uses. GvR> I have no idea what mm_cfg.py/Defaults.py is, but yes, import GvR> * has its uses! Not that it's really that important to the discussion, but the way Mailman lets users override its defaults is by putting all the (autoconf and hardcoded) system defaults in Defaults.py, which the user is never supposed to touch. Then mm_cfg.py does a "from Defaults import *" -- at module level of course -- and users put any overridden values in mm_cfg.py. All Mailman modules that have to reference a system default do so by importing and using mm_cfg. This was Ken's idea, and a darn good one! It's got a wart or two, but they are quite minor. -Barry From fredrik at pythonware.com Thu Feb 22 22:40:09 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Thu, 22 Feb 2001 22:40:09 +0100 Subject: [Python-Dev] Those import related syntax errors again... References: <029c01c09cbb$a3e7f8c0$e46940d5@hagrid> <200102221351.IAA15568@cj20424-a.reston1.va.home.com> <20010222162725.A7486@xs4all.nl> Message-ID: <070101c09d18$093c5a20$e46940d5@hagrid> Thomas wrote: > While on the subject: will all of 'from module import *' be deprecated, even > at module level ? hopefully not -- that would break tons of code, instead of just some... > How should code like Mailman's mm_cfg.py/Defaults.py construct be > rewritten to provide similar functionality ? Much as I dislike 'from module > import *', it really does have its uses. how about: # # mm_config.py class config: # defaults goes here spam = "spam" egg = "egg" # load user overrides import mm_cfg config.update(vars(mm_cfg)) # # some_module.py from mm_config import config print "breakfast:", config.spam, config.egg Cheers /F From tim.one at home.com Thu Feb 22 22:45:00 2001 From: tim.one at home.com (Tim Peters) Date: Thu, 22 Feb 2001 16:45:00 -0500 Subject: [Python-Dev] Those import related syntax errors again... In-Reply-To: <029c01c09cbb$a3e7f8c0$e46940d5@hagrid> Message-ID: [/F] > If a debate doesn't lead anywhere, it's just a waste of time. If you end up being on the winning side, is it still a waste of time? If you end up being on the losing side of a debate, perhaps, sometimes. But I can't predict the future well enough to know the outcome in advance. > Donning my devil's advocate suite, here are some recent observations: > > - Important decisions are made on internal PythonLabs meetings > (unit testing, the scope issue, etc), not by an organized python- > dev process. Decisions are-- and were --made in Guido's head. Python-Dev was established to give him easier access to higher-quality input than was possible on c.l.py at the time, and to give Python developers a higher S/N place to hang out when discussing Python development. Internal PythonLabs meetings are really much the same, just on a smaller scale and with a higher-still S/N ratio. Both work for those purposes. It isn't-- and wasn't --the purpose of either to strip Guido of the last word. 
> Does anyone care about -1 and +1's anymore? Did anyone ever <0.5 wink>? A scattering of two-character arguments is interesting to get a quick feel, but even I wouldn't *decide* anything on that basis. If this were an ANSI/ISO committee, a single -1 would have absolute power -- and then we'd still be using Python 0.9.6 (ANSI/ISO committees need soul-crushingly boring and budget-bustingly expensive meetings regularly else consensus would never be reached on anything -- if people get to veto in their spare time while sitting at home, and without opponents blowing spit right in their face for the 18th time in 6 years, there's insufficient pressure *to* compromise). > - The PEP process isn't working ("I updated the PEP and checked > in the code", "but *that* PEP doesn't apply to *me*", etc). Need to define "working". I don't think it's what it should be yet, but is making progress. > - Impressive hacks are more important than concerns from people > who make their living selling Python technology (rather than a > specific application). Codewise, nested scopes are amazing. > From a marketing perspective, it's a disaster. Any marketing droid would believe that Python's current market is a fraction of its potential market, and so welcome any "new feature" that makes new sales easier. c.l.py is a microcosm of this battlefield, and the cry for nested scopes has continued unabated since the day lambda was introduced. I've never met a marketing type (and I've met more than my share ...) who wouldn't seize this as an opportunity to *expand* market share. Sales droids servicing existing accounts *may* grumble -- or the more inventive may take it as an opportunity to drive home the importance of their relationship to their customers ("it's us against them, and boy aren't you glad you've got Amalgamated Pythonistries on your side!"). > (even more absurd allegations snipped) With gratitude, and I'll skip even more absurd rationalizations . > Am I entirely wrong? Of course not. The world isn't that simple. indeed-the-world-is-heavily-nested -ly y'rs - tim PS: At the internal PythonLabs mtg today, I voted against nested scopes. But also for them. Leaving that to Jeremy to explain. From greg at cosc.canterbury.ac.nz Fri Feb 23 00:21:58 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Fri, 23 Feb 2001 12:21:58 +1300 (NZDT) Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: <200102220145.UAA12690@cj20424-a.reston1.va.home.com> Message-ID: <200102222321.MAA01483@s454.cosc.canterbury.ac.nz> Guido: > Language theorists love [exec]. Really? I'd have thought language theorists would be the ones who hate it, given all the problems it causes... Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg at cosc.canterbury.ac.nz +--------------------------------------+ From guido at digicool.com Fri Feb 23 00:26:05 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 22 Feb 2001 18:26:05 -0500 Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: Your message of "Fri, 23 Feb 2001 12:21:58 +1300." <200102222321.MAA01483@s454.cosc.canterbury.ac.nz> References: <200102222321.MAA01483@s454.cosc.canterbury.ac.nz> Message-ID: <200102222326.SAA18443@cj20424-a.reston1.va.home.com> > Guido: > > > Language theorists love [exec]. > > Really? I'd have thought language theorists would be the ones > who hate it, given all the problems it causes... 
Depends on where they're coming from. Or maybe I should have said Lisp folks... --Guido van Rossum (home page: http://www.python.org/~guido/) From esr at thyrsus.com Fri Feb 23 01:14:50 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Thu, 22 Feb 2001 19:14:50 -0500 Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: <200102222326.SAA18443@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Thu, Feb 22, 2001 at 06:26:05PM -0500 References: <200102222321.MAA01483@s454.cosc.canterbury.ac.nz> <200102222326.SAA18443@cj20424-a.reston1.va.home.com> Message-ID: <20010222191450.B15506@thyrsus.com> Guido van Rossum : > > > Language theorists love [exec]. > > > > Really? I'd have thought language theorists would be the ones > > who hate it, given all the problems it causes... > > Depends on where they're coming from. Or maybe I should have said > Lisp folks... You are *so* right, Guido! :-) I almost commented about this in reply to Greg's post earlier. Crusty old LISP hackers like me tend to be really attached to being able to (a) lash up S-expressions that happen to be LISP function calls on the fly, and then (b) hand them to eval. "No separation between code and data" is one of the central dogmas of our old-time religion. In languages like Python that are sufficiently benighted to have a distinction between expression and statement syntax, we demand exec as well as eval and are likely to get seriously snotty about the language's completeness if exec is missing. Awkwardly, in such languages exec turns out to be much less useful in practice than it is in theory. In fact, Python has rather forced me to question whether "No separation between code and data" was as important a component of LISP's supernal wonderfulness as I believed when I was a fully fervent member of the cult. Anonymous lambdas are still key, though. ;-) And much cooler now that we have real lexical scoping. -- Eric S. Raymond I cannot undertake to lay my finger on that article of the Constitution which grant[s] a right to Congress of expending, on objects of benevolence, the money of their constituents. -- James Madison, 1794 From ping at lfw.org Fri Feb 23 03:37:05 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Thu, 22 Feb 2001 18:37:05 -0800 (PST) Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? Message-ID: Hi all -- i've been reading the enormous thread on nested scopes with some concern, since i would very much like Python to support "proper" lexical scoping, yet i also care about compatibility. There is something missing from my understanding here: - The model is, each environment has a pointer to the enclosing environment, right? - Whenever you can't find what you're looking for, you go up to the next level and keep looking, right? - So what's the issue with not being able to determine which variable binds in which scope? With the model just described, it's perfectly clear. Is all this breakage only caused by the particular optimizations for lookup in the implementation (fast locals, etc.)? Or have i missed something obvious? I could probably go examine the source code of the nested scoping changes to find the answer to my own question, but in case others share this confusion with me, i thought it would be worth asking. * * * Consider for a moment the following simple model of lookup: 1. A scope maps names to objects. 2. Each scope except the topmost also points to a parent scope. 3. To look up a name, first ask the current scope. 4. 
When lookup fails, go up to the parent scope and keep looking. I believe the above rules are common among many languages and are commonly understood. The only Python-specific parts are then: 5. The current scope is determined by the nearest enclosing 'def'. 6. These statements put a binding into the current scope: assignment (=), def, class, for, except, import And that's all. * * * Given this model, all of the scoping questions that have been raised have completely clear answers: Example I >>> y = 3 >>> def f(): ... print y ... >>> f() 3 Example II >>> y = 3 >>> def f(): ... print y ... y = 1 ... print y ... >>> f() 3 1 >>> y 3 Example III >>> y = 3 >>> def f(): ... exec "y = 2" ... def g(): ... return y ... return g() ... >>> f() 2 Example IV >>> m = open('foo.py', 'w') >>> m.write('x = 1') >>> m.close() >>> def f(): ... x = 3 ... from foo import * ... def g(): ... print x ... g() ... >>> f() 1 In Example II, the model addresses even the current situation that sometimes surprises new users of Python. Examples III and IV are the current issues of contention about nested scopes. * * * It's good to start with a simple model for the user to understand; the implementation can then do funky optimizations under the covers so long as the model is preserved. So for example, if the compiler sees that there is no "import *" or "exec" in a particular scope it can short-circuit the lookup of local variables using fast locals. But the ability of the compiler to make this optimization should only affect performance, not affect the Python language model. The model described above is the approximately the one available in Scheme. It exactly reflects the environment-diagram model of scoping as taught to most Scheme students and i would argue that it is the easiest to explain. Some implementations of Scheme, such as STk, do what is described above. UMB scheme does what Python does now: the use-before-binding of 'y' in Example II would cause an error. I was surprised that these gave different behaviours; it turns out that the Scheme standard actually forbids the use of internal defines not at the beginning of a function body, thus sidestepping the issue. But we can't do this in Python; assignment must be allowed anywhere. Given that internal assignment has to have some meaning, the above meaning makes the most sense to me. -- ?!ng From guido at digicool.com Fri Feb 23 03:59:26 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 22 Feb 2001 21:59:26 -0500 Subject: [Python-Dev] Nested scopes resolution -- you can breathe again! In-Reply-To: Your message of "Thu, 22 Feb 2001 16:45:00 EST." References: Message-ID: <200102230259.VAA19238@cj20424-a.reston1.va.home.com> We (PythonLabs) have received a lot of flak over our plan to introduce nested scopes despite the fact that it appears to break a small but significant amount of working code. We discussed this at an PythonLabs group meeting today. After the meeting, Tim posted this teaser: > PS: At the internal PythonLabs mtg today, I voted against nested > scopes. But also for them. Leaving that to Jeremy to explain. After the meeting Jeremy had a four hour commute home due to bad weather, so let me do the honors for him. (Jeremy will update the PEP, implement the feature, and update the documentation, in that order.) We have clearly underestimated how much code the nested scopes would break, but more importantly we have underestimated how much value our community places on stability. 
At the same time we really like nested scopes, and we would like to see the feature introduced at some point. So here's the deal: we'll make nested scopes an optional feature in 2.1, default off, selectable on a per-module basis using a mechanism that's slightly hackish but is guaranteed to be safe. (See below.) At the same time, we'll augment the compiler to detect all situations that will break when nested scopes are introduced in the future, and issue warnings for those situations. The idea here is that warnings don't break code, but encourage folks to fix their code so we can introduce nested scopes in 2.2. Given our current pace of releases that should be about 6 months warning. These warnings are *not* optional -- they are issued regardless of whether you select to use nested scopes. However there is a command line option (crudest form: -Wi) to disable warnings; there are also ways to disable them programmatically. If you want to make sure that you don't ignore the warnings, there's also a way to turn warnings into errors (-We from the command line). How do you select nested scopes? Tim suggested a mechanism that is used by the ANSI C committee to enable language features that are backwards incompatible: they trigger on the import of a specific previously non-existant header file. (E.g. after #include , "imaginary" becomes a reserved word.) The Python equivalent of this is a magical import that is recognized by the compiler; this was also proposed by David Scherer for making integer division yield a float. (See http://mail.python.org/pipermail/edu-sig/2000-May/000499.html) You could say that Perl's "use" statement is similar. We haven't decided yet which magical import; two proposals are: import __nested_scopes__ from __future__ import nested_scopes The magical import only affects the source file in which it occurs. It is recognized by the compiler as it is scanning the source code. It must appear at the top-level (no "if" or "try" or "def" or anything else around it) and before any code that could be affected. We realize that PEP 5 specifies a one-year transition period. We believe that that is excessive in this case, and would like to change the PEP to be more flexible. (The PEP has questionable status -- it was never formally discussed.) We also believe that the magical import mechanism is useful enough to be reused for other situations like this; Tim will draft a PEP to describe in excruciating detail. I thank everybody who gave feedback on this issue. And thanks to Jeremy for implementing nested scopes! --Guido van Rossum (home page: http://www.python.org/~guido/) From ping at lfw.org Fri Feb 23 04:16:57 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Thu, 22 Feb 2001 19:16:57 -0800 (PST) Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: Message-ID: On Thu, 22 Feb 2001, Ka-Ping Yee wrote: > - So what's the issue with not being able to determine > which variable binds in which scope? With the model > just described, it's perfectly clear. Is all this > breakage only caused by the particular optimizations > for lookup in the implementation (fast locals, etc.)? > Or have i missed something obvious? That was poorly phrased. To clarify, i am making the assumption that the compiler wants each name to be associated with exactly one scope per block in which it appears. 1. Is the assumption true? 2. If so, is this constraint motivated only by lookup optimization? 3. 
Why enforce this constraint when it would be inconsistent with behaviour that we already have at the top level? If foo.py contains "x = 1", then this works at the top level: >>> if 1: # top level ... x = 3 ... print x ... from foo import * ... def g(): print x ... g() ... 3 1 I am suggesting that it should do exactly the same thing in a function: >>> def f(): # x = 3 inside, no g() ... x = 3 ... print x ... from foo import * ... print x ... >>> f() 3 1 >>> def f(): # x = 3 inside, nested g() ... x = 3 ... print x ... from foo import * ... def g(): print x ... g() ... >>> f() 3 1 >>> x = 3 >>> def f(): # x = 3 outside, nested g() ... print x ... from foo import * ... def g(): print x ... g() ... >>> f() 3 1 (Replacing "from foo import *" above with "x = 1" or "exec('x = 1')" should make no difference. So this isn't just about internal-import-* and exec-without-in, even if we do eventually deprecate internal-import-* and exec-without-in -- which i would tend to support.) Here is a summary of the behaviour i observe and propose. 1.5.2 2.1a1 suggested top level from foo import * 3,1 3,1 3,1 exec('x = 1') 3,1 3,1 3,1 x = 1 3,1 3,1 3,1 x = 3 outside, no g() from foo import * 3,1 3,1 3,1 exec('x = 1') 3,1 3,1 3,1 x = 1 x UnboundLocal 3,1 x = 3 inside, no g() from foo import * 3,1 3,1 3,1 exec('x = 1') 3,1 3,1 3,1 x = 1 x UnboundLocal 3,1 x = 3 outside, nested g() from foo import * 3,3 SyntaxError 3,1 exec('x = 1') 3,3 SyntaxError 3,1 x = 1 x UnboundLocal 3,1 x = 3 inside, nested g() from foo import * 3,x SyntaxError 3,1 exec('x = 1') 3,x SyntaxError 3,1 x = 1 3,x 3,1 3,1 (I don't know what the heck is going on in Python 1.5.2 in the cases where it prints 'x'.) My postulates are: 1. "exec('x = 1')" should behave exactly the same as "x = 1" 2. "from foo import *" should do the same as "x = 1" 3. "def g(): print x" should behave the same as "print x" The testing script is attached. -- ?!ng -------------- next part -------------- import sys file = open('foo.py', 'w') file.write('x = 1') file.close() toplevel = """ x = 3 print x %s def g(): print x g() """ outside = """ x = 3 def f(): print x %s print x f() """ inside = """ x = 3 def f(): print x %s print x f() """ nestedoutside = """ x = 3 def f(): print x %s def g(): print x g() f() """ nestedinside = """ def f(): x = 3 print x %s def g(): print x g() f() """ for template in [toplevel, outside, inside, nestedoutside, nestedinside]: for statement in ["from foo import *", "exec('x = 1')", "x = 1"]: code = template % statement try: exec code in {} except: print sys.exc_value print From tim.one at home.com Fri Feb 23 04:22:54 2001 From: tim.one at home.com (Tim Peters) Date: Thu, 22 Feb 2001 22:22:54 -0500 Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: Message-ID: [Ka-Ping Yee] > Hi all -- i've been reading the enormous thread on nested scopes > with some concern, since i would very much like Python to support > "proper" lexical scoping, yet i also care about compatibility. > > There is something missing from my understanding here: > > - The model is, each environment has a pointer to the > enclosing environment, right? The conceptual model, yes, but the implementation isn't like that. > - Whenever you can't find what you're looking for, you > go up to the next level and keep looking, right? Conceptually, yes. No such looping search occurs at runtime, though. > - So what's the issue with not being able to determine > which variable binds in which scope? 
That determination is done at compile-time, not runtime. In the presence of "exec" and "import *" in some contexts, compile-time determination is stymied and there is no runtime support for a "slow" lookup. Note that the restrictions are *not* against lexical nesting, they're against particular uses of "exec" and "import *" (the latter of which is so muddy the Ref Man said it was undefined a long, long time ago). > ... > It's good to start with a simple model for the user to understand; > the implementation can then do funky optimizations under the covers > so long as the model is preserved. Even locals used to be resolved by dict searches. The entire model there wasn't preserved by the old switch to fast locals either. For example, >>> def f(): ... global i ... exec "i=42\n" ... >>> i = 666 >>> f() >>> i 666 >>> IIRC, in the old days that would print 42. Who cares <0.1 wink>? This is nonsense either way. There are tradeoffs here among: conceptual clarity runtime efficiency implementation complexity rate of cyclic garbage creation Your message favors "conceptual clarity" over all else; the implementation doesn't. Python also limits strings to the size of a platform int <0.9 wink>. > ... > The model described above is the approximately the one available in > Scheme. But note that eval() didn't make it into the Scheme std: they couldn't agree on its semantics or implementation. eval() is *suggested* in the fifth Revised Report, but there has no access to its lexical environment; instead it acts "as if" its argument had appeared at top level "or in some other implementation-dependent environment" (Dybvig; "The Scheme Programming Language"). Dybvig gives an example of one of the competing Scheme eval() proposals gaining access to a local vrbl via using macros to interpolate the local's value into the argument's body before calling eval(). And that's where refusing to compromise leads. utterly-correct-and-virtually-useless-ly y'rs - tim From guido at digicool.com Fri Feb 23 04:31:36 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 22 Feb 2001 22:31:36 -0500 Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: Your message of "Thu, 22 Feb 2001 18:37:05 PST." References: Message-ID: <200102230331.WAA21467@cj20424-a.reston1.va.home.com> > Hi all -- i've been reading the enormous thread on nested scopes > with some concern, since i would very much like Python to support > "proper" lexical scoping, yet i also care about compatibility. Note that this is moot now -- see my previous post about how we've decided to resolve this using a magical import to enable nested scopes (in 2.1). > There is something missing from my understanding here: > > - The model is, each environment has a pointer to the > enclosing environment, right? Actually, no. > - Whenever you can't find what you're looking for, you > go up to the next level and keep looking, right? That depends. Our model is inspired by the semantics of locals in Python 2.0 and before, and this all happens at compile time. That means that we must be able to know which names are defined in each scope at compile time. > - So what's the issue with not being able to determine > which variable binds in which scope? With the model > just described, it's perfectly clear. Is all this > breakage only caused by the particular optimizations > for lookup in the implementation (fast locals, etc.)? > Or have i missed something obvious? You call it an optimization, and that's how it started. 
But since it clearly affects the semantics of the language, it's not really an optimization -- it's a particular semantics that lends itself to more and easy compile-time analysis and hence can be implemented more efficiently, but the corner cases are different, and the language semantics define what should happen, optimization or not. In particular: x = 1 def f(): print x x = 2 raises an UnboundLocalError error at the point of the print statement. Likewise, in the official semantics of nested scopes: x = 1 def f(): def g(): print x g() x = 2 also raises an UnboundLocalError at the print statement. > I could probably go examine the source code of the nested scoping > changes to find the answer to my own question, but in case others > share this confusion with me, i thought it would be worth asking. No need to go to the source -- this is all clearly explained in the PEP (http://python.sourceforge.net/peps/pep-0227.html). > * * * > > Consider for a moment the following simple model of lookup: > > 1. A scope maps names to objects. > > 2. Each scope except the topmost also points to a parent scope. > > 3. To look up a name, first ask the current scope. > > 4. When lookup fails, go up to the parent scope and keep looking. > > I believe the above rules are common among many languages and are > commonly understood. Actually, most languages do all this at compile time. Very early Python versions did do all this at run time, but by the time 1.0 was released, the "locals are locals" rule was firmly in place. You may like the purely dynamic version better, but it's been outlawed long ago. > The only Python-specific parts are then: > > 5. The current scope is determined by the nearest enclosing 'def'. For most purposes, 'class' also creates a scope. > 6. These statements put a binding into the current scope: > assignment (=), def, class, for, except, import > > And that's all. Sure. > * * * > > Given this model, all of the scoping questions that have been > raised have completely clear answers: > > Example I > > >>> y = 3 > >>> def f(): > ... print y > ... > >>> f() > 3 Sure. > Example II > > >>> y = 3 > >>> def f(): > ... print y > ... y = 1 > ... print y > ... > >>> f() > 3 > 1 > >>> y > 3 You didn't try this, did you? or do you intend to say that it "should" print this? In fact it raises UnboundLocalError: local variable 'y' referenced before assignment. (Before 2.0 it would raise NameError.) > Example III > > >>> y = 3 > >>> def f(): > ... exec "y = 2" > ... def g(): > ... return y > ... return g() > ... > >>> f() > 2 Wrong again. This prints 3, both without and with nested scopes as defined in 2.1a2. However it raises an exception with the current CVS version: SyntaxError: f: exec or 'import *' makes names ambiguous in nested scope. > Example IV > > >>> m = open('foo.py', 'w') > >>> m.write('x = 1') > >>> m.close() > >>> def f(): > ... x = 3 > ... from foo import * > ... def g(): > ... print x > ... g() > ... > >>> f() > 1 I didn't try this one, but I'm sure that it prints 3 in 2.1a1 and raises the same SyntaxError as above with the current CVS version. > In Example II, the model addresses even the current situation > that sometimes surprises new users of Python. Examples III and IV > are the current issues of contention about nested scopes. > > * * * > > It's good to start with a simple model for the user to understand; > the implementation can then do funky optimizations under the covers > so long as the model is preserved. 
So for example, if the compiler > sees that there is no "import *" or "exec" in a particular scope it > can short-circuit the lookup of local variables using fast locals. > But the ability of the compiler to make this optimization should only > affect performance, not affect the Python language model. Too late. The semantics have been bent since 1.0 or before. The flow analysis needed to optimize this in such a way that the user can't tell whether this is optimized or not is too hard for the current compiler. The fully dynamic model also allows the user to play all sorts of stupid tricks. And the unoptimized code is so much slower that it's well worth to hve the optimization. > The model described above is the approximately the one available in > Scheme. It exactly reflects the environment-diagram model of scoping > as taught to most Scheme students and i would argue that it is the > easiest to explain. I don't know Scheme, but isn't it supposed to be a compiled language? > Some implementations of Scheme, such as STk, do what is described > above. UMB scheme does what Python does now: the use-before-binding > of 'y' in Example II would cause an error. I was surprised that > these gave different behaviours; it turns out that the Scheme > standard actually forbids the use of internal defines not at the > beginning of a function body, thus sidestepping the issue. I'm not sure how you can say that Scheme sidesteps the issue when you just quote an example where Scheme implementations differ? > But we > can't do this in Python; assignment must be allowed anywhere. > > Given that internal assignment has to have some meaning, the above > meaning makes the most sense to me. Sorry. Sometimes, reality bites. :-) Note that I want to take more of the dynamicism out of function bodies. The reference manual has for a long time outlawed import * inside functions (but the implementation didn't enforce this). I see no good reason to allow this (it's causing a lot of work to happen each time the function is called), and the needs of being able to clearly define what happens with nested scopes make it necessary to outlaw it. I also want to eventually completely outlaw exec without an 'in' clause inside a class or function, and access to local variables through locals() or vars(). I'm not sure yet about exec without an 'in' clause at the global level, but I'm tempted to think that even there it's not much use. We'll start with warnings for some of these cases in 2.1. I see that Tim posted another rebuttal, explaining better than I do here *why* Ping's "simple" model is not good for Python, so I'll stop now. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Fri Feb 23 04:36:08 2001 From: guido at digicool.com (Guido van Rossum) Date: Thu, 22 Feb 2001 22:36:08 -0500 Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: Your message of "Thu, 22 Feb 2001 19:16:57 PST." References: Message-ID: <200102230336.WAA21493@cj20424-a.reston1.va.home.com> > 1. "exec('x = 1')" should behave exactly the same as "x = 1" Sorry, no go. This just isn't a useful feature. > 2. "from foo import *" should do the same as "x = 1" But it is limiting because it hides information from the compiler, and hence it is outlawed when it gets in the way of the compiler. > 3. "def g(): print x" should behave the same as "print x" Huh? again. Defining a function does't call it. 
Python has always adhered to the principle that the context where a function is defined determines its context, not where it is called. --Guido van Rossum (home page: http://www.python.org/~guido/) From jeremy at alum.mit.edu Fri Feb 23 04:00:07 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Thu, 22 Feb 2001 22:00:07 -0500 (EST) Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: References: Message-ID: <14997.53815.769191.239591@w221.z064000254.bwi-md.dsl.cnc.net> I think the issue that you didn't address is that lexical scoping is a compile-time issue, and that in most languages that variable names that a program uses are a static property of the code. Off the top of my head, I can't think of another lexically scoped language that allows an exec or eval to create a new variable binding that can later be used via a plain-old reference. One of the reasons I am strongly in favor of making import * and exec errors is that it stymies the efforts of a reader to understand the code. Lexical scoping is fairly clear because you can figure out what binding a reference will use by reading the text. (As opposed to dynamic scoping where you have to think about all the possible call stacks in which the function might end up.) With bare exec and import *, the reader of the code doesn't have any obvious indicator of what names are being bound. This is why I consider it bad form and presumably part of the reason that the language references outlaws it. (But not at the module scope, since practicality beats purity.) If we look at your examples: >>> def f(): # x = 3 inside, no g() ... x = 3 ... print x ... from foo import * ... print x ... >>> f() 3 1 >>> def f(): # x = 3 inside, nested g() ... x = 3 ... print x ... from foo import * ... def g(): print x ... g() ... >>> f() 3 1 >>> x = 3 >>> def f(): # x = 3 outside, nested g() ... print x ... from foo import * ... def g(): print x ... g() ... >>> f() 3 1 In these examples, it isn't at all obvious to the reader of the code whether the module foo contains a binding for x or whether the programmer intended to import that name and stomp on the exist definition. Another key difference between Scheme and Python is that in Scheme, each binding operation creates a new scope. The Scheme equivalent of this Python code -- def f(x): y = x + 1 ... y = x + 2 ... -- would presumably be something like this -- (define (f x) (let ((y (+ x 1))) ... (let (y (+ x 2))) ... )) Python is a whole different beast because it supports multiple assignments to a name within a single scope. In Scheme, every binding of a name via lambda introduces a new scope. This is the reason that the example -- x = 3 def f(): print x x = 2 print x -- raises an error rather than printing '3\n2\n'. Jeremy From jeremy at alum.mit.edu Fri Feb 23 04:15:39 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Thu, 22 Feb 2001 22:15:39 -0500 (EST) Subject: [Python-Dev] Nested scopes resolution -- you can breathe again! In-Reply-To: <200102230259.VAA19238@cj20424-a.reston1.va.home.com> References: <200102230259.VAA19238@cj20424-a.reston1.va.home.com> Message-ID: <14997.54747.56767.641188@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "GvR" == Guido van Rossum writes: GvR> The Python equivalent of this is a magical import that is GvR> recognized by the compiler; this was also proposed by David GvR> Scherer for making integer division yield a float. 
(See GvR> http://mail.python.org/pipermail/edu-sig/2000-May/000499.html) GvR> You could say that Perl's "use" statement is similar. GvR> We haven't decided yet which magical import; two proposals are: GvR> import __nested_scopes__ from __future__ import GvR> nested_scopes GvR> The magical import only affects the source file in which it GvR> occurs. It is recognized by the compiler as it is scanning the GvR> source code. It must appear at the top-level (no "if" or "try" GvR> or "def" or anything else around it) and before any code that GvR> could be affected. We'll need to write a short PEP describing this approach and offering some guidance about how frequently we intend to use it. I think few of us would be interested in making frequent use of it to add all sorts of variant language features. Rather, I imagine it would be used only -- or primarily -- to introduce new features that will become standard at some point. GvR> We also believe that the magical import mechanism is useful GvR> enough to be reused for other situations like this; Tim will GvR> draft a PEP to describe in excruciating detail. I'm happy to hear that Tim will draft this PEP. He didn't mention it at lunch today or I would have given him a big hug (or bought him a Coke). As Tim knows, I think the PEP needs to say something about whether these magic imports create name bindings and what objects are bound to the names. Will we need an __nested_scopes__.py in the Lib directory? Jeremy From barry at digicool.com Fri Feb 23 06:04:32 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 23 Feb 2001 00:04:32 -0500 Subject: [Python-Dev] compile leaks memory. lots of memory. References: <200102221757.SAA17087@pandora> Message-ID: <14997.61280.57003.582965@anthem.wooz.org> >>>>> "MvL" == Martin von Loewis writes: MvL> The real problem comes from st_cur never being released, MvL> AFAICT. There is a DECREF in symtable_exit_scope, but that MvL> function is not called in the test case - MvL> symtable_enter_scope is called. For symmetry reasons, it MvL> appears that there should be a call to symtable_exit_scope of MvL> the global scope somewhere (which apparently is build in MvL> symtable_build). I can't figure how what the correct place MvL> for that call would be, though. Martin, I believe you've hit the nail on the head. My latest Insure run backs this theory up. It even claims that st_cur is lost by the de-allocation of st in PySymtable_Free(). I'm betting that Jeremy will be able to quickly figure out where the missing frees are when I send him the Insure report. -Barry From tim.one at home.com Fri Feb 23 06:30:27 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 00:30:27 -0500 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: <14997.54747.56767.641188@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: [Guido] > We also believe that the magical import mechanism is useful > enough to be reused for other situations like this; Tim will > draft a PEP to describe in excruciating detail. [Jeremy Hylton] > ... > I'm happy to hear that Tim will draft this PEP. He didn't mention it > at lunch today or I would have given him a big hug (or bought him a > Coke). Guido's msg was the first I heard of it too. I think this is the same process by which I got assigned to change Windows imports: the issue came up, and I opened my mouth <-0.9 wink>. 
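To make the intended effect concrete, a small sketch in Python 2.1 syntax, assuming the from-__future__ spelling is the one chosen (make_adder is an invented example, not code from the thread):

    from __future__ import nested_scopes

    def make_adder(n):
        def add(x):
            # with nested scopes, 'n' is found in make_adder's scope;
            # without the magic import, 2.1 raises NameError here
            return x + n
        return add

    print make_adder(3)(4)       # prints 7
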
> As Tim knows, I think the PEP needs to say something about whether > these magic imports create name bindings and what objects are > bound to the names. > > Will we need an __nested_scopes__.py in the Lib directory? Offhand, I suggest to create a real Lib/__future__.py, and let import code get generated as always. The purpose of __future__.py is to record release info in an *obvious* place to look for it (BTW, best I can tell, sys.version isn't documented anywhere, so this serves that purpose too ): ------------------------------------------------------------------ """__future__: Record of phased-in incompatible language changes. Each line is of the form: FeatureName = ReleaseInfo ReleaseInfo is a pair of the form: (OptionalRelease, MandatoryRelease) where, normally, OptionalRelease <= MandatoryRelease, and both are 5-tuples of the same form as sys.version_info: (PY_MAJOR_VERSION, # the 2 in 2.1.0a3; an int PY_MINOR_VERSION, # the 1; an int PY_MICRO_VERSION, # the 0; an int PY_RELEASE_LEVEL, # "alpha", "beta", "candidate" or "final"; string PY_RELEASE_SERIAL # the 3; an int ) In the case of MandatoryReleases that have not yet occurred, MandatoryRelease predicts the release in which the feature will become a permanent part of the language. Else MandatoryRelease records when the feature became a permanent part of the language; in releases at or after that, modules no longer need from __future__ import FeatureName to use the feature in question, but may continue to use such imports. In releases before OptionalRelease, an import from __future__ of FeatureName will raise an exception. MandatoryRelease may also be None, meaning that a planned feature got dropped. No line is ever to be deleted from this file. """ nested_scopes = (2, 1, 0, "beta", 1), (2, 2, 0, "final", 0) ----------------------------------------------------------------- While this is 100% intended to serve a documentation purpose, I also intend to use it in my own code, like so (none of which is special to the compiler except for the first line): from __future__ import nested_scopes import sys assert sys.version_info < nested_scopes[1], "delete this section!" # Note that the assert above also triggers if MandatoryRelease is None, # i.e. if the feature got dropped (under 2.1 rules, None is smaller than # anything else ). del sys, nested_scopes Other rules: # Legal only at module scope, before all non-comment occurrences of # name, and only when name is known to the compiler. from __future__ import name # Ditto. name2 has no special meaning. from __future__ import name as name2 The purpose of the next two is to allow programmatic manipulation of the info in __future__.py (generate help msgs, build a histogram of adoption dates for incompatible changes by decade over the previous two centuries, whatever). # Legal anywhere, but no special meaning. import __future__ import __future__ as name From tim.one at home.com Fri Feb 23 06:34:19 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 00:34:19 -0500 Subject: [Python-Dev] Nested scopes resolution -- you can breathe again! In-Reply-To: <14997.54747.56767.641188@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: [Jeremy] > ... > I think few of us would be interested in making frequent use of it > to add all sorts of variant language features. Rather, I imagine > it would be used only -- or primarily -- to introduce new features > that will become standard at some point. In my view, __future__ is *only* for the latter. 
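On Jeremy's name-binding question, a hypothetical interactive session under the tuple layout sketched above -- the bound objects shown are an assumption of this sketch, not a settled decision:

    >>> from __future__ import nested_scopes
    >>> nested_scopes               # bound in the module like any other import
    ((2, 1, 0, 'beta', 1), (2, 2, 0, 'final', 0))
    >>> import __future__           # "legal anywhere, but no special meaning"
    >>> __future__.nested_scopes == nested_scopes
    1
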
Somebody who wants to write a PEP for an analogous scheme keying off, say, __jerking_off__, is welcome to do so, but anything else would be a 2.2 PEP at best. from-__jerking_off__-import-curly_braces-ly y'rs - tim From tim.one at home.com Fri Feb 23 06:37:32 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 00:37:32 -0500 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: Message-ID: [TIm] >(BTW, best I can tell, sys.version isn't documented anywhere, so > this serves that purpose too ). Wow. Averaging two errors per line! I meant sys.version_info, and it's documented in the obvious place. error-free-at-laat!-ly y'rs - itm From pf at artcom-gmbh.de Fri Feb 23 08:27:28 2001 From: pf at artcom-gmbh.de (Peter Funk) Date: Fri, 23 Feb 2001 08:27:28 +0100 (MET) Subject: [Python-Dev] Please use __progress__ instead of __future__ (was Other situations like this) In-Reply-To: from Tim Peters at "Feb 23, 2001 0:30:27 am" Message-ID: Hi, Tim Peters: [...] > Offhand, I suggest to create a real Lib/__future__.py, and let import code > get generated as always. The purpose of __future__.py is to record release > info in an *obvious* place to look for it [...] I believe __future__ is a bad name. What appears today as the bright shining future will be the distant dusty past of tomorrow. But the name of the module is not going to change anytime soon. right? Please call it __progress__ or __history__ or even __python_history__ or come up with some other name. What about __python_bloat__ ? . In my experience of computing it is a really bad idea to call anything 'new', 'old', 'future', '2000' or some such because those names last much longer than you would have believed at the time the name was choosen. Regards, Peter -- Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260 office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen) From tim.one at home.com Fri Feb 23 09:24:48 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 03:24:48 -0500 Subject: [Python-Dev] RE: Please use __progress__ instead of __future__ (was Other situations like this) In-Reply-To: Message-ID: [Peter Funk] > I believe __future__ is a bad name. What appears today as the bright > shining future will be the distant dusty past of tomorrow. But the > name of the module is not going to change anytime soon. right? The name of what module? Any statement of the form from __future__ import shiny becomes unnecessary as soon as shiny's future arrives, at which point the statement can be removed. The statement is necessary only so long as shiny *is* in the future. So the name is thoroughly appropriate. > Please call it __progress__ or __history__ or even __python_history__ > or come up with some other name. Sorry, but none of those make any sense given the intended use. It's not a part of Python 2.1 "history" that nested scopes won't be the default before 2.2! > What about __python_bloat__ ? > . *That* one makes some sense. > In my experience of computing it is a really bad idea to call anything > 'new', 'old', 'future', '2000' or some such because those names last much > longer than you would have believed at the time the name was choosen. The purpose of __future__ is to supply a means to try out future incompatible extensions before they become the default. 
The set of future extensions will change from release to release, but that they *are* in the future remains invariant even if Python goes on until universal heat death. Given the rules I already posted, it will be very easy to write a Python tool to identify obsolete __future__ imports and remove them (if you want). From mikael at isy.liu.se Fri Feb 23 10:41:12 2001 From: mikael at isy.liu.se (Mikael Olofsson) Date: Fri, 23 Feb 2001 10:41:12 +0100 (MET) Subject: [Python-Dev] RE: Nested scopes resolution -- you can breathe again! In-Reply-To: <200102230259.VAA19238@cj20424-a.reston1.va.home.com> Message-ID: On 23-Feb-01 Guido van Rossum wrote: > from __future__ import nested_scopes There really is a time machine. So I guess I can get the full Python 3k functionality by doing from __future__ import * /Mikael ----------------------------------------------------------------------- E-Mail: Mikael Olofsson WWW: http://www.dtr.isy.liu.se/dtr/staff/mikael Phone: +46 - (0)13 - 28 1343 Telefax: +46 - (0)13 - 28 1339 Date: 23-Feb-01 Time: 10:39:52 /"\ \ / ASCII Ribbon Campaign X Against HTML Mail / \ This message was sent by XF-Mail. ----------------------------------------------------------------------- From moshez at zadka.site.co.il Fri Feb 23 10:52:45 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Fri, 23 Feb 2001 11:52:45 +0200 (IST) Subject: [Python-Dev] RE: Nested scopes resolution -- you can breathe again! In-Reply-To: References: Message-ID: <20010223095245.A69E2A840@darjeeling.zadka.site.co.il> On Fri, 23 Feb 2001, Mikael Olofsson wrote: > There really is a time machine. So I guess I can get the full Python 3k > functionality by doing > > from __future__ import * In Py3K from import * will be illegal, so this will unfortunately blow up the minute the "import_star_bad" is imported. You'll just have to try them one by one... -- "I'll be ex-DPL soon anyway so I'm |LUKE: Is Perl better than Python? looking for someplace else to grab power."|YODA: No...no... no. Quicker, -- Wichert Akkerman (on debian-private)| easier, more seductive. For public key, finger moshez at debian.org |http://www.{python,debian,gnu}.org From mikael at isy.liu.se Fri Feb 23 11:21:06 2001 From: mikael at isy.liu.se (Mikael Olofsson) Date: Fri, 23 Feb 2001 11:21:06 +0100 (MET) Subject: [Python-Dev] RE: Nested scopes resolution -- you can breathe In-Reply-To: <01c301c5198d$c6bcc3f0$0900a8c0@SPIFF> Message-ID: On 23-Feb-05 Fredrik Lundh wrote: > Mikael Olofsson wrote: > > from __future__ import * > > I wouldn't do that: it imports both "warnings_are_errors" and > "from_import_star_is_evil", and we've found that it's impossible > to catch ParadoxErrors in a platform independent way. Naturally. More seriously though, I like from __future__ import something as an idiom. It gives us a clear reusable syntax to incorporate new features before they are included in the standard distribution. It is not obvious to me that the proposed alternative import __something__ is a way to incorporate something new. Perhaps Py3k should allow from __past__ import something to give us a way to keep some functionality from 2.* that has been (will be) changed in Py3k. explicit-is-better-than-implicit-ly y'rs /Mikael ----------------------------------------------------------------------- E-Mail: Mikael Olofsson WWW: http://www.dtr.isy.liu.se/dtr/staff/mikael Phone: +46 - (0)13 - 28 1343 Telefax: +46 - (0)13 - 28 1339 Date: 23-Feb-01 Time: 11:07:11 /"\ \ / ASCII Ribbon Campaign X Against HTML Mail / \ This message was sent by XF-Mail. 
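A rough sketch of the kind of obsolete-import finder mentioned above; the helper find_obsolete and the file name 'example.py' are hypothetical, and the code assumes the (OptionalRelease, MandatoryRelease) tuple layout from Tim's draft:

    import re, sys, __future__

    # Flag 'from __future__ import X' lines that are already standard
    # in the running interpreter and can therefore be removed.
    pattern = re.compile(r'^\s*from\s+__future__\s+import\s+(\w+)')

    def find_obsolete(filename):
        for line in open(filename).readlines():
            m = pattern.match(line)
            if m is None:
                continue
            name = m.group(1)
            optional, mandatory = getattr(__future__, name)
            if mandatory is not None and sys.version_info >= mandatory:
                print "%s: '%s' is standard now; the import can go" \
                      % (filename, name)

    find_obsolete('example.py')     # 'example.py' is a made-up file name
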
----------------------------------------------------------------------- From guido at digicool.com Fri Feb 23 13:28:17 2001 From: guido at digicool.com (Guido van Rossum) Date: Fri, 23 Feb 2001 07:28:17 -0500 Subject: [Python-Dev] Re: Other situations like this In-Reply-To: Your message of "Fri, 23 Feb 2001 00:30:27 EST." References: Message-ID: <200102231228.HAA23466@cj20424-a.reston1.va.home.com> > [Guido] > > We also believe that the magical import mechanism is useful > > enough to be reused for other situations like this; Tim will > > draft a PEP to describe in excruciating detail. > > [Jeremy Hylton] > > ... > > I'm happy to hear that Tim will draft this PEP. He didn't mention it > > at lunch today or I would have given him a big hug (or bought him a > > Coke). > > Guido's msg was the first I heard of it too. I think this is the same > process by which I got assigned to change Windows imports: the issue came > up, and I opened my mouth <-0.9 wink>. Oops. I swear I heard you offer to write it. I guess all you said was that it should be written. Oh well. Somebody will write it. :-) Looks like Tim's proposed __future__.py is in good shape already. --Guido van Rossum (home page: http://www.python.org/~guido/) From pedroni at inf.ethz.ch Fri Feb 23 13:42:11 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Fri, 23 Feb 2001 13:42:11 +0100 (MET) Subject: [Python-Dev] nested scopes: I'm glad (+excuses) Message-ID: <200102231242.NAA27564@core.inf.ethz.ch> Hi. I'm really glad that the holy war has come to an end, and that a technical solution has been found. This was my first debate here and I have said few wise things, more stupid ones and some violent or unfair: my excuses go to Jeremy, Guido and the biz mind (in some of us) that make money out of software (nobody can predict how he will make his living ;)) I'm glad that we have nested scopes, a transition syntax and path and no new keyword (no irony in the latter). Cheers, Samuele. From ping at lfw.org Fri Feb 23 14:23:42 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Fri, 23 Feb 2001 05:23:42 -0800 (PST) Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: Message-ID: On Thu, 22 Feb 2001, Tim Peters wrote: > That determination is done at compile-time, not runtime. In the presence of > "exec" and "import *" in some contexts, compile-time determination is > stymied and there is no runtime support for a "slow" lookup. Would the existence of said runtime support hurt anybody? Don't we already do slow lookup in some situations anyway? > Note that the restrictions are *not* against lexical nesting, they're > against particular uses of "exec" and "import *" (the latter of which is so > muddy the Ref Man said it was undefined a long, long time ago). (To want to *take away* the ability to do import-* at all, in order to protect programmers from their own bad habits, is a different argument. I think we all already agree that it's bad form. But the recent clamour has shown that we can't take it away just yet.) > There are tradeoffs here among: > > conceptual clarity > runtime efficiency > implementation complexity > rate of cyclic garbage creation > > Your message favors "conceptual clarity" over all else; the implementation > doesn't. Python also limits strings to the size of a platform int <0.9 > wink>. Yes, i do think conceptual clarity is important. The way Python leans towards conceptual simplicity is a big part of its success, i believe. 
The less there is for someone to fit into their brain, the less time they can spend worrying about how the language will behave and the more they can focus on getting the job done. And i don't think we have to sacrifice much of the others to do it. In fact, often conceptual clarity leads to a simpler implementation, and sometimes even a faster implementation. Now i haven't actually done the implementation so i can't tell you whether it will be faster, but it seems to me that it's likely to be simpler and could stand a chance of being faster. -- ?!ng "The only `intuitive' interface is the nipple. After that, it's all learned." -- Bruce Ediger, on user interfaces From ping at lfw.org Fri Feb 23 14:15:07 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Fri, 23 Feb 2001 05:15:07 -0800 (PST) Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: <14997.53815.769191.239591@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: On Thu, 22 Feb 2001, Jeremy Hylton wrote: > I can't think of another lexically scoped language that > allows an exec or eval to create a new variable binding that can later > be used via a plain-old reference. I tried STk Scheme, guile, and elisp, and they all do this. > One of the reasons I am strongly in favor of making import * and exec > errors is that it stymies the efforts of a reader to understand the > code. Yes, i look forward to the day when no one will ever use import-* any more. I can see good reasons to discourage the use of import-* and bare-exec in general anywhere. But as long as they *do* have a meaning, they had better mean the same thing at the top level as internally. > If we look at your examples: > >>> def f(): # x = 3 inside, no g() [...] > >>> def f(): # x = 3 inside, nested g() [...] > >>> def f(): # x = 3 outside, nested g() > > In these examples, it isn't at all obvious to the reader of the code > whether the module foo contains a binding for x or whether the > programmer intended to import that name and stomp on the exist > definition. It's perfectly clear -- since we expect the reader to understand what happens when we do exactly the same thing at the top level. > Another key difference between Scheme and Python is that in Scheme, > each binding operation creates a new scope. Scheme separates 'define' and 'set!', while Python only has '='. In Scheme, multiple defines rebind variables: (define a 1) (define a 2) (define a 3) just as in Python, multiple assignments rebind variables: a = 1 a = 2 a = 3 The lack of 'set!' prevents Python from rebinding variables outside of the local scope, but it doesn't prevent Python from being otherwise consistent and having "a = 2" do the same thing inside or outside of a function: it binds a name in the current scope. -- ?!ng "The only `intuitive' interface is the nipple. After that, it's all learned." -- Bruce Ediger, on user interfaces From ping at lfw.org Fri Feb 23 12:51:19 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Fri, 23 Feb 2001 03:51:19 -0800 (PST) Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: <200102230336.WAA21493@cj20424-a.reston1.va.home.com> Message-ID: On Thu, 22 Feb 2001, Guido van Rossum wrote: > > 1. "exec('x = 1')" should behave exactly the same as "x = 1" > > Sorry, no go. This just isn't a useful feature. It's not a "feature" as in "something to be added to the language". It's a consistent definition of "exec" that simplifies understanding. Without it, how do you explain what "exec" does? > > 2. 
"from foo import *" should do the same as "x = 1" > > But it is limiting because it hides information from the compiler, and > hence it is outlawed when it gets in the way of the compiler. Again, consistency simplifies understanding. What it "gets in the way of" is a particular optimization; it doesn't make compilation impossible. The language reference says that import binds a name in the local namespace. That means "import x" has to do the same thing as "x = 1" for some value of 1. "from foo import *" binds several names in the local scope, and so if x is bound in module foo, it should do the same thing as "x = 1" for some value of 1. When "from foo import *" makes it impossible to know at compile-time what bindings will be added to the current scope, we just do normal name lookup for that scope. No big deal. It already works that way at module scope; why should this be any different? With this simplification, there can be a single scope chain: builtins <- module <- function <- nested-function <- ... and all scopes can be treated the same. The implementation could probably be both simpler and faster! Simpler, because we don't have to have separate cases for builtins, local, and global; and faster, because some of the optimizations we currently do for locals could be made to apply at all levels. Imagine "fast globals"! And imagine getting them essentially for free. > > 3. "def g(): print x" should behave the same as "print x" > > Huh? again. Defining a function does't call it. Duh, obviously i meant 3. "def g(): print x" immediately followed by "g()" should behave the same as "print x" Do you agree with this principle, at least? > Python has always > adhered to the principle that the context where a function is defined > determines its context, not where it is called. Absolutely agreed. I've never intended to contradict this. This is the foundation of lexical scoping. -- ?!ng "Don't worry about people stealing an idea. If it's original, you'll have to jam it down their throats." -- Howard Aiken From ping at lfw.org Fri Feb 23 13:32:59 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Fri, 23 Feb 2001 04:32:59 -0800 (PST) Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: <200102230331.WAA21467@cj20424-a.reston1.va.home.com> Message-ID: On Thu, 22 Feb 2001, Guido van Rossum wrote: > Note that this is moot now -- see my previous post about how we've > decided to resolve this using a magical import to enable nested scopes > (in 2.1). Yes, yes. It seems like a good answer for now -- indeed, some sort of mechanism for selecting compilation options has been requested before. But we still need to eventually have a coherent answer. The chart in my other message doesn't look coherent to me -- it would take too long to explain all of the cases to someone. I deserve a smack on the head for my confusion at seeing 'x' printed out -- that happens to be the value of the NameError in 1.5.2. 
Here is an updated chart (updated test script is attached):

                                  1.5.2        2.1a2         suggested

    toplevel
      with print x
        from foo import *         3 1          3 1           3 1
        exec('x = 1')             3 1          3 1           3 1
        x = 1                     3 1          3 1           3 1
      with g()
        from foo import *         3 1          3 1           3 1
        exec('x = 1')             3 1          3 1           3 1
        x = 1                     3 1          3 1           3 1

    x = 3 outside f()
      with print x
        from foo import *         3 1          3 1           3 1
        exec('x = 1')             3 1          3 1           3 1
        x = 1                     NameError    UnboundLocal  3 1
      with g()
        from foo import *         3 3          SyntaxError   3 1
        exec('x = 1')             3 3          SyntaxError   3 1
        x = 1                     NameError    UnboundLocal  3 1

    x = 3 inside f()
      with print x
        from foo import *         3 1          3 1           3 1
        exec('x = 1')             3 1          3 1           3 1
        x = 1                     3 1          3 1           3 1
      with g()
        from foo import *         NameError    SyntaxError   3 1
        exec('x = 1')             NameError    SyntaxError   3 1
        x = 1                     NameError    3 1           3 1

You can see that the situation in 1.5.2 is pretty messy -- and it's precisely the inconsistent cases that have historically caused confusion.  2.1a2 is better but it still has exceptional cases -- just the cases people seem to be complaining about now.

> > There is something missing from my understanding here:
> > - The model is, each environment has a pointer to the
> enclosing environment, right?
> Actually, no.

I'm talking about the model, not the implementation.  I'm advocating that we think *first* about what the programmer (the Python user) has to worry about.  I think that's a Pythonic perspective, isn't it?  Or are you really saying that this isn't even the model that the user should be thinking about?

> > - Whenever you can't find what you're looking for, you
> go up to the next level and keep looking, right?
> That depends.  Our model is inspired by the semantics of locals in
> Python 2.0 and before, and this all happens at compile time.

Well, can we nail down what you mean by "depends"?  What reasoning process should the Python programmer go through to predict the behaviour of a given program?

> In particular:
>
>     x = 1
>     def f():
>         print x
>         x = 2
>
> raises an UnboundLocalError error at the point of the print

I've been getting the impression that people consider this a language wart (or at least a little unfortunate, as it tends to confuse people).  It's a frequently asked question, and when i've had to explain it to people they usually grumble.  As others have pointed out, it can be pretty surprising when the assignment happens much later in the body.

I think if you asked most people what this would do, they would expect 1.  Why?  Because they think about programming in terms of some simple invariants, e.g.:

    - Editing part of a block doesn't affect the behaviour of the
      block up to the point where you made the change.

    - When you move some code into a function and then call the
      function, that code still works the same.

This kind of backwards-action-at-a-distance breaks the first invariant.  Lexical scoping is good largely because it helps preserve the second invariant (the function carries the context of where it was defined).  And so on.

> No need to go to the source -- this is all clearly explained in the
> PEP (http://python.sourceforge.net/peps/pep-0227.html).

It seems not to be that simple, because i was unable to predict what situations would be problematic without understanding how the optimizations are implemented.

    *    *    *

> > 5. The current scope is determined by the nearest enclosing 'def'.
> For most purposes, 'class' also creates a scope.

Sorry, i should have written:

    5. The parent scope is determined by the nearest enclosing 'def'.
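A short sketch of the second invariant -- the function carries the context of where it was defined -- in Python 2.1 syntax with nested scopes enabled; outer and caller are invented names:

    from __future__ import nested_scopes

    def outer():
        x = 'defined in outer'
        def show():
            print x              # resolved where show() was *defined* ...
        return show

    def caller():
        x = 'defined in caller'
        outer()()                # ... so this prints 'defined in outer'

    caller()
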
* * * > > Given this model, all of the scoping questions that have been > > raised have completely clear answers: > > > > Example I [...] > > Example II > You didn't try this, did you? [...] > > Example III > Wrong again. [...] > > Example IV > I didn't try this one, but I'm sure that it prints 3 in 2.1a1 and > raises the same SyntaxError as above with the current CVS version. I know that. I introduced these examples with "given this model..." to indicate that i'm describing what the "completely clear answers" are. The chart above tries to summarize all of the current behaviour. > > But the ability of the compiler to make this optimization should only > > affect performance, not affect the Python language model. > > Too late. The semantics have been bent since 1.0 or before. I think it's better to try to bend them as little as possible -- and if it's possible to unbend them to make the language easier to understand, all the better. Since we're changing the behaviour now, this is a good opportunity to make sure the model is simple. > > The model described above [...] > > exactly reflects the environment-diagram model of scoping > > as taught to most Scheme students and i would argue that it is the > > easiest to explain. > > I don't know Scheme, but isn't it supposed to be a compiled language? That's not the point. There is a scoping model that is straightforward and easy to understand, and regardless of whether the implementation is interpreted or compiled, you can easily predict what a given piece of code is going to do. > I'm not sure how you can say that Scheme sidesteps the issue when you > just quote an example where Scheme implementations differ? That's what i'm saying. The standard sidesteps (i.e. doesn't specify how to handle) the issue, so the implementations differ. I don't think we have the option of avoiding the issue; we should have a clear position on it. (And that position should be as simple to explain as we can make it.) > I see that Tim posted another rebuttal, explaining better than I do > here *why* Ping's "simple" model is not good for Python, so I'll stop > now. Let's get a complete specification of the model then. And can i ask you to clarify your position: did you put quotation marks around "simpler" because you disagree that the model i suggest is simpler and easier to understand; or did you agree that it was simpler but felt it was worth compromising that simplicity for other benefits? And if the latter, are the other benefits purely about enabling optimizations in the implementation, or other things as well? Thanks, -- ?!ng -------------- next part -------------- import sys file = open('foo.py', 'w') file.write('x = 1') file.close() toplevel = """ x = 3 print x, %s %s %s """ outside = """ x = 3 def f(): print x, %s %s %s f() """ inside = """ def f(): x = 3 print x, %s %s %s f() """ for template in [toplevel, outside, inside]: for print1, print2 in [('print x', ''), ('def g(): print x', 'g()')]: for statement in ['from foo import *', 'exec("x = 1")', 'x = 1']: code = template % (statement, print1, print2) # print code try: exec code in {} except: print sys.exc_type, sys.exc_value print From guido at digicool.com Fri Feb 23 14:58:59 2001 From: guido at digicool.com (Guido van Rossum) Date: Fri, 23 Feb 2001 08:58:59 -0500 Subject: [Python-Dev] nested scopes: I'm glad (+excuses) In-Reply-To: Your message of "Fri, 23 Feb 2001 13:42:11 +0100." 
<200102231242.NAA27564@core.inf.ethz.ch> References: <200102231242.NAA27564@core.inf.ethz.ch> Message-ID: <200102231358.IAA23816@cj20424-a.reston1.va.home.com> > Hi. > > I'm really glad that the holy war has come to an end, and that a technical > solution has been found. Not as glad as I am, Samuele! > This was my first debate here and I have said few wise things, more stupid > ones and some violent or unfair: my excuses go to Jeremy, Guido > and the biz mind (in some of us) that make money out of software > (nobody can predict how he will make his living ;)) It wasn't my first debate (:-), but I feel the same way! > I'm glad that we have nested scopes, a transition syntax and path > and no new keyword (no irony in the latter). Me too. > Cheers, Samuele. Hope to hear from you more, Samuele! How's the Jython port of nested scopes coming? --Guido van Rossum (home page: http://www.python.org/~guido/) From nas at arctrix.com Fri Feb 23 15:36:51 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Fri, 23 Feb 2001 06:36:51 -0800 Subject: [Python-Dev] Nested scopes resolution -- you can breathe again! In-Reply-To: <200102230259.VAA19238@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Thu, Feb 22, 2001 at 09:59:26PM -0500 References: <200102230259.VAA19238@cj20424-a.reston1.va.home.com> Message-ID: <20010223063651.B23270@glacier.fnational.com> On Thu, Feb 22, 2001 at 09:59:26PM -0500, Guido van Rossum wrote: > from __future__ import nested_scopes I this this alternative better since there is only one "reserved" module name. I still think releasing 2.0.1 with warnings is a good idea. OTOH, maybe its hard for that compiler to detect questionable code. Neil From guido at digicool.com Fri Feb 23 15:42:12 2001 From: guido at digicool.com (Guido van Rossum) Date: Fri, 23 Feb 2001 09:42:12 -0500 Subject: [Python-Dev] Nested scopes resolution -- you can breathe again! In-Reply-To: Your message of "Fri, 23 Feb 2001 06:36:51 PST." <20010223063651.B23270@glacier.fnational.com> References: <200102230259.VAA19238@cj20424-a.reston1.va.home.com> <20010223063651.B23270@glacier.fnational.com> Message-ID: <200102231442.JAA24227@cj20424-a.reston1.va.home.com> > > from __future__ import nested_scopes > > I this this alternative better since there is only one "reserved" > module name. Noted. > I still think releasing 2.0.1 with warnings is a > good idea. OTOH, maybe its hard for that compiler to detect > questionable code. The problem is that in order to do a decent job of compile-time warnings, not only the warnings module and API would have to be retrofitted in 2.0.1, but also the entire new compiler, which has the symbol table needed to be able to detect the situations we want to warn about. --Guido van Rossum (home page: http://www.python.org/~guido/) From barry at digicool.com Fri Feb 23 16:01:43 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 23 Feb 2001 10:01:43 -0500 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) References: <14997.54747.56767.641188@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <14998.31575.97664.422182@anthem.wooz.org> Excellent, Tim! Let's PEP this sucker. The only suggestion I was going to make was to use sys.hexversion instead of sys.version_info. Something about tuples-of-tuples kind of bugged me. But after composing the response to suggest this, I looked at it closely, and decided that sys.version_info is right after all. Both are equally comparable and sys.version_info is more "human friendly". 
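For illustration, a sketch of the two forms being compared; the values shown are what a hypothetical 2.1b1 build would report:

    import sys

    print sys.version_info         # e.g. (2, 1, 0, 'beta', 1)
    print hex(sys.hexversion)      # e.g. 0x20100b1 -- same info, one integer

    # both compare sensibly against a required version:
    if sys.version_info >= (2, 1):
        print "new enough"
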
-Barry From thomas at xs4all.net Fri Feb 23 16:04:47 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Fri, 23 Feb 2001 16:04:47 +0100 Subject: [Python-Dev] nested scopes: I'm glad (+excuses) In-Reply-To: <200102231242.NAA27564@core.inf.ethz.ch>; from pedroni@inf.ethz.ch on Fri, Feb 23, 2001 at 01:42:11PM +0100 References: <200102231242.NAA27564@core.inf.ethz.ch> Message-ID: <20010223160447.A16781@xs4all.nl> On Fri, Feb 23, 2001 at 01:42:11PM +0100, Samuele Pedroni wrote: > I'm really glad that the holy war has come to an end, and that a technical > solution has been found. Same here. I really like the suggested solution, just to show that I'm not adverse to progress per se ;) I also apologize for not thinking up something similar, despite thinking long and hard (not to mention posting long and especially hard ;) on the issue. I'll have to buy you all beer (or cola, or hard liquor, whatever's your poison) next week ;-) -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From jeremy at alum.mit.edu Fri Feb 23 16:41:47 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 23 Feb 2001 10:41:47 -0500 (EST) Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: References: <14997.53815.769191.239591@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <14998.33979.566557.956297@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "KPY" == Ka-Ping Yee writes: KPY> On Thu, 22 Feb 2001, Jeremy Hylton wrote: >> I can't think of another lexically scoped language that allows an >> exec or eval to create a new variable binding that can later be >> used via a plain-old reference. KPY> I tried STk Scheme, guile, and elisp, and they all do this. I guess I'm just dense then. Can you show me an example? The only way to introduce a new name in Scheme is to use lambda or define which can always be translated into an equivalent letrec. The name binding is then visible only inside the body of the lambda. As a result, I don't see how eval can introduce a new name into a scope. The Python example I was thinking of is: def f(): exec "y=2" return y >>> f() 2 What would the Scheme equivalent be? The closest analog I can think of is (define (f) (eval "(define y 2)") y) The result here is undefined because y is not bound in the body of f, regardless of the eval. Jeremy From jeremy at alum.mit.edu Fri Feb 23 16:59:24 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 23 Feb 2001 10:59:24 -0500 (EST) Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: References: <14997.53815.769191.239591@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <14998.35036.311805.899392@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "KPY" == Ka-Ping Yee writes: >> Another key difference between Scheme and Python is that in >> Scheme, each binding operation creates a new scope. KPY> Scheme separates 'define' and 'set!', while Python only has KPY> '='. In Scheme, multiple defines rebind variables: Really, scheme provides lambda, the let family, define, and set!, where "define" is defined in terms of letrec except at the top level. KPY> (define a 1) KPY> (define a 2) KPY> (define a 3) Scheme distinguishes between top-level definitions and internal defintions. They have different semantics. Since we're talking about what happens inside Python functions, we should only look at what define does for internal definitions. 
An internal definition is only allowed at the beginning of a body, so your example above is equivalent to:

    (letrec ((a 1) (a 2) (a 3)) ...)

But it is an error to have duplicate name bindings in a letrec.  At least it is in MzScheme.  Not sure what R5RS says about this.

  KPY> just as in Python, multiple assignments rebind variables:
  KPY>     a = 1
  KPY>     a = 2
  KPY>     a = 3

Python's assignment is closer to set!, since it can occur anywhere in a body, not just at the beginning.  But if we say that = is equivalent to set! we've got a problem, because you can't use set! on an unbound variable.

I think that leaves us with two alternatives.  As I mentioned in my previous message, one is to think about each assignment in Python introducing a new scope:

    a = 1            (let ((a 1))
    a = 2              (let ((a 2))
    a = 3                (let ((a 3))
                           ....)))
or

    def f():         (define (f)
        print a        (print a)
        a = 2          (let ((a 2))
                         ...))

But I don't think it's clear to read a group of equally indented statements as a series of successively nested scopes.

The other alternative is to say that = is closer to set! and that the original name binding is implicit.  That is: "If a name binding operation occurs anywhere within a code block, all uses of the name within the block are treated as references to the local namespace."  (ref manual, sec. 4)

  KPY> The lack of 'set!' prevents Python from rebinding variables
  KPY> outside of the local scope, but it doesn't prevent Python from
  KPY> being otherwise consistent and having "a = 2" do the same thing
  KPY> inside or outside of a function: it binds a name in the current
  KPY> scope.

Again, if we look at Scheme as an example and compare = and define, define behaves differently at the top-level than it does inside a lambda.

Jeremy

From akuchlin at mems-exchange.org  Fri Feb 23 17:01:41 2001
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 23 Feb 2001 11:01:41 -0500
Subject: [Python-Dev] Backwards Incompatibility
In-Reply-To: <20010222191450.B15506@thyrsus.com>; from esr@thyrsus.com on Thu, Feb 22, 2001 at 07:14:50PM -0500
References: <200102222321.MAA01483@s454.cosc.canterbury.ac.nz> <200102222326.SAA18443@cj20424-a.reston1.va.home.com> <20010222191450.B15506@thyrsus.com>
Message-ID: <20010223110141.D2879@ute.cnri.reston.va.us>

On Thu, Feb 22, 2001 at 07:14:50PM -0500, Eric S. Raymond wrote:
>practice than it is in theory.  In fact, Python has rather forced me
>to question whether "No separation between code and data" was as
>important a component of LISP's supernal wonderfulness as I believed
>when I was a fully fervent member of the cult.

I think it is.  Currently I'm reading Steven Tanimoto's introductory AI book in a doomed-from-the-start attempt to learn about rule-based systems, and along the way am thinking about how I'd do similar tasks in Python.  The problem is that, for applying pattern matching to data structures, Python has no good equivalent of Lisp's (pattern-match data '((? X) 1 2)). [1]  Perhaps this is more a benefit of Lisp's simple syntax than the "no separation between code and data" principle.  In Python you could write some sort of specialized parser, of course, but that's really a distraction from the primary AI task of writing a really bitchin' Eliza program (or whatever).

--amk

[1] Which would match any list whose 2nd and 3rd elements are (1 2), and bind the first element to X somehow.
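A hedged sketch of the sort of thing Andrew is asking for: a small matcher where variables are spelled ('?', 'name').  Purely illustrative -- match is not an existing library function:

    import types

    def match(pattern, data, bindings=None):
        if bindings is None:
            bindings = {}
        # variable pattern: ('?', name) matches anything and records it
        if type(pattern) is types.TupleType and len(pattern) == 2 \
           and pattern[0] == '?':
            bindings[pattern[1]] = data
            return bindings
        # list pattern: match element by element
        if type(pattern) is types.ListType:
            if type(data) is not types.ListType or len(pattern) != len(data):
                return None
            for i in range(len(pattern)):
                if match(pattern[i], data[i], bindings) is None:
                    return None
            return bindings
        # anything else is a literal and must compare equal
        if pattern == data:
            return bindings
        return None

    print match([('?', 'X'), 1, 2], ['hello', 1, 2])    # {'X': 'hello'}
    print match([('?', 'X'), 1, 2], ['hello', 1, 3])    # None
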
From jeremy at alum.mit.edu Fri Feb 23 17:09:23 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 23 Feb 2001 11:09:23 -0500 (EST) Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: References: <200102230331.WAA21467@cj20424-a.reston1.va.home.com> Message-ID: <14998.35635.32450.338318@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "KPY" == Ka-Ping Yee writes: >> No need to go to the source -- this is all clearly explained in >> the PEP (http://python.sourceforge.net/peps/pep-0227.html). KPY> It seems not to be that simple, because i was unable to predict KPY> what situations would be problematic without understanding how KPY> the optimizations are implemented. The problematic cases are exactly those where name bindings are introduced implicitly, i.e. cases where an operation binds a name without the name appearing the program text for that operation. That doesn't sound like an implementation-dependent defintion. [...] KPY> That's not the point. There is a scoping model that is KPY> straightforward and easy to understand, and regardless of KPY> whether the implementation is interpreted or compiled, you can KPY> easily predict what a given piece of code is going to do. [Taking you a little out of context:] This is just what I'm advocating for import * and exec in the presence of nested fucntions. There is no easy way to predict what a piece of code is going to do without (a) knowing what names a module defines or (b) figuring out what values the argument to exec will have. On the subject of easy prediction, what should the following code do according to your model: x = 2 def f(y): ... if y > 3: x = x - 1 ... print x ... x = 3 ... I think the meaning of print x should be statically determined. That is, the programmer should be able to determine the binding environment in which x will be resolved (for print x) by inspection of the code. Jeremy From tim.one at home.com Fri Feb 23 17:34:58 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 11:34:58 -0500 Subject: [Python-Dev] RE: Nested scopes resolution -- you can breathe In-Reply-To: Message-ID: [Mikael Olofsson] > Naturally. More seriously though, I like > > from __future__ import something > > as an idiom. It gives us a clear reusable syntax to incorporate new > features before they are included in the standard distribution. It is > not obvious to me that the proposed alternative > > import __something__ > > is a way to incorporate something new. Bingo. That's why I'm pushing for it. Also means we only have to create one artificial module (__future__.py) for this; and besides the doc value, it occurs to me we *do* have to create a real module anyway so that masses of tools don't get confused searching for things that look like modules but don't actually exist. > Perhaps Py3k should allow > > from __past__ import something > > to give us a way to keep some functionality from 2.* that has been > (will be) changed in Py3k. Actually, I thought that's something PythonWare could implement as an extension, to seize the market opportunity created by mean old Guido breaking all the code he can on a whim . Except they'll probably have to extend the syntax a bit, to make that from __past__ import not something Maybe we should add from __future__ import __past__with_not now to make that easier for them. 
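For the record, what the static rule actually yields for the ambiguous example above (Python 2.x behaviour since 2.0):

    x = 2
    def f(y):
        if y > 3:
            x = x - 1      # 'x' is local throughout f, so this read fails
        print x
        x = 3

    f(5)    # UnboundLocalError: local variable 'x' referenced before
            # assignment; f(1) fails the same way at 'print x', and the
            # global x = 2 is never consulted either way
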
> explicit-is-better-than-implicit-ly y'rs otoh-implicit-manages-to-hide-explicit-suckiness-a-bit-longer-ly y'rs - tim From thomas.heller at ion-tof.com Fri Feb 23 17:36:44 2001 From: thomas.heller at ion-tof.com (Thomas Heller) Date: Fri, 23 Feb 2001 17:36:44 +0100 Subject: [Python-Dev] distutils, uninstaller Message-ID: <03f201c09db6$cf201990$e000a8c0@thomasnotebook> I've uploaded the bdist_wininst uninstaller patch to sourceforge: http://sourceforge.net/patch/?func=detailpatch&patch_id=103948&group_id=5470 Just in case someone cares. Another thing: Shouldn't the distutils version number change before the beta? I suggest going from 1.0.1 to 1.0.2. Thomas Heller From tim.one at home.com Fri Feb 23 17:44:36 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 11:44:36 -0500 Subject: [Python-Dev] RE: Other situations like this In-Reply-To: <200102231228.HAA23466@cj20424-a.reston1.va.home.com> Message-ID: [Guido] > Oops. I swear I heard you offer to write it. I guess all you said > was that it should be written. Oh well. Somebody will write it. :-) Na, I'll write it! I didn't volunteer, but since I've already thought about it more than anyone on Earth, I'm the natural vic^H^H^Hauthor. cementing-my-monopoly-on-retroactive-peps-ly y'rs - tim From tim.one at home.com Fri Feb 23 20:36:04 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 14:36:04 -0500 Subject: [Python-Dev] test_builtin failing on Windows Message-ID: But only if run under a debug build *and* passing -O to Python: > python_d -O ../lib/test/test_builtin.py Adding parser accelerators ... Done. 4. Built-in functions test_b1 __import__ abs apply callable chr cmp coerce compile complex delattr dir divmod eval execfile filter float getattr hasattr hash hex id int isinstance issubclass len long map max min test_b2 and here it blows up with some kind of memory error. Other systems? From barry at digicool.com Fri Feb 23 20:45:43 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Fri, 23 Feb 2001 14:45:43 -0500 Subject: [Python-Dev] test_builtin failing on Windows References: Message-ID: <14998.48615.952027.397301@anthem.wooz.org> >>>>> "TP" == Tim Peters writes: TP> But only if run under a debug build *and* passing -O to TP> Python: I'm currently running the regrtest under insure but only on Linux and w/o -O. -Barry From tim.one at home.com Fri Feb 23 20:58:16 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 14:58:16 -0500 Subject: [Python-Dev] test_builtin failing on Windows In-Reply-To: Message-ID: > But only if run under a debug build *and* passing -O to Python: *And* only if the .pyc/.pyo files reachable from Lib/ are deleted before running it. Starting to smell like another of those wild memory overwrite problems for efence/Insure or whatever. From tim.one at home.com Fri Feb 23 21:25:25 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 15:25:25 -0500 Subject: [Python-Dev] test_builtin failing on Windows In-Reply-To: Message-ID: > But only if run under a debug build *and* passing -O to Python: > > *And* only if the .pyc/.pyo files reachable from Lib/ are deleted > before running it. The explosion is here: static int com_make_closure(struct compiling *c, PyCodeObject *co) { int i, free = PyTuple_GET_SIZE(co->co_freevars); co-> is almost entirely filled with 0xdddddddd at this point (and in particular, that's the value of co->co_freevars, which is why it blows up). 
That bit pattern is the MS "dead landfill" value: when the MS debug libraries free() an object, they overwrite the space with 0xdd bytes. Here's the call stack: com_make_closure(compiling * 0x0063f5c4, PyCodeObject * 0x00a1b5b0) line 2108 + 6 bytes com_test(compiling * 0x0063f5c4, _node * 0x008470d0) line 2164 + 13 bytes com_node(compiling * 0x0063f5c4, _node * 0x008470d0 line 3452 + 13 bytes com_argument(compiling * 0x0063f5c4, _node * 0x0084a900, _object * * 0x0063f3b8) line 1516 + 16 bytes com_call_function(compiling * 0x0063f5c4, _node * 0x00847124) line 1581 + 17 bytes com_apply_trailer(compiling * 0x0063f5c4, _node * 0x008471d4) line 1764 + 19 bytes com_power(compiling * 0x0063f5c4, _node * 0x008472b0) line 1792 + 24 bytes com_factor(compiling * 0x0063f5c4, _node * 0x008472f0) line 1813 + 16 bytes com_term(compiling * 0x0063f5c4, _node * 0x00847330) line 1823 + 16 bytes com_arith_expr(compiling * 0x0063f5c4, _node * 0x00847370) line 1852 + 16 bytes com_shift_expr(compiling * 0x0063f5c4, _node * 0x008473b0) line 1878 + 16 bytes com_and_expr(compiling * 0x0063f5c4, _node * 0x008473f0) line 1904 + 16 bytes com_xor_expr(compiling * 0x0063f5c4, _node * 0x00847430) line 1926 + 16 bytes com_expr(compiling * 0x0063f5c4, _node * 0x0084a480) line 1948 + 16 bytes com_comparison(compiling * 0x0063f5c4, _node * 0x008474b0) line 2002 + 16 bytes com_not_test(compiling * 0x0063f5c4, _node * 0x008474f0) line 2077 + 16 bytes com_and_test(compiling * 0x0063f5c4, _node * 0x008475e0) line 2094 + 24 bytes com_test(compiling * 0x0063f5c4, _node * 0x0084b124) line 2178 + 24 bytes com_node(compiling * 0x0063f5c4, _node * 0x0084b124) line 3452 + 13 bytes com_if_stmt(compiling * 0x0063f5c4, _node * 0x00847620) line 2817 + 13 bytes com_node(compiling * 0x0063f5c4, _node * 0x00847620) line 3431 + 13 bytes com_file_input(compiling * 0x0063f5c4, _node * 0x007d4cc0) line 3660 + 13 bytes compile_node(compiling * 0x0063f5c4, _node * 0x007d4cc0) line 3762 + 13 bytes jcompile(_node * 0x007d4cc0, char * 0x0063f84c, compiling * 0x00000000) line 3870 + 16 bytes PyNode_Compile(_node * 0x007d4cc0, char * 0x0063f84c) line 3813 + 15 bytes parse_source_module(char * 0x0063f84c, _iobuf * 0x10261888) line 611 + 13 bytes load_source_module(char * 0x0063f9a8, char * 0x0063f84c, _iobuf * 0x10261888) line 731 + 13 bytes load_module(char * 0x0063f9a8, _iobuf * 0x10261888, char * 0x0063f84c, int 0x00000001) line 1259 + 17 bytes import_submodule(_object * 0x1e1f6ca0 __Py_NoneStruct, char * 0x0063f9a8, char * 0x0063f9a8) line 1787 + 33 bytes load_next(_object * 0x1e1f6ca0 __Py_NoneStruct, _object * 0x1e1f6ca0 __Py_NoneStruct, char * * 0x0063fabc, char * 0x0063f9a8, int * 0x0063f9a4) line 1643 + 17 bytes import_module_ex(char * 0x00000000, _object * 0x00770d6c, _object * 0x00770d6c, _object * 0x1e1f6ca0 __Py_NoneStruct) line 1494 + 35 bytes PyImport_ImportModuleEx(char * 0x007ae58c, _object * 0x00770d6c, _object * 0x00770d6c, _object * 0x1e1f6ca0 __Py_NoneStruct) line 1535 + 21 bytes builtin___import__(_object * 0x00000000, _object * 0x007716ac) line 31 + 21 bytes call_cfunction(_object * 0x00760080, _object * 0x007716ac, _object * 0x00000000) line 2740 + 11 bytes call_object(_object * 0x00760080, _object * 0x007716ac, _object * 0x00000000) line 2703 + 17 bytes PyEval_CallObjectWithKeywords(_object * 0x00760080, _object * 0x007716ac, _object * 0x00000000) line 2673 + 17 bytes eval_code2(PyCodeObject * 0x007afe10, _object * 0x00770d6c, _object * 0x00770d6c, _object * * 0x00000000, int 0x00000000, _object * * 0x00000000, int 0x00000000, 
_object * * 0x00000000, int 0x00000000, _object * 0x00000000) line 1767 + 15 bytes PyEval_EvalCode(PyCodeObject * 0x007afe10, _object * 0x00770d6c, _object * 0x00770d6c) line 341 + 31 bytes run_node(_node * 0x007a8760, char * 0x00760dd0, _object * 0x00770d6c, _object * 0x00770d6c) line 935 + 17 bytes run_err_node(_node * 0x007a8760, char * 0x00760dd0, _object * 0x00770d6c, _object * 0x00770d6c) line 923 + 21 bytes PyRun_FileEx(_iobuf * 0x10261888, char * 0x00760dd0, int 0x00000101, _object * 0x00770d6c, _object * 0x00770d6c, int 0x00000001) line 915 + 21 bytes PyRun_SimpleFileEx(_iobuf * 0x10261888, char * 0x00760dd0, int 0x00000001) line 628 + 30 bytes PyRun_AnyFileEx(_iobuf * 0x10261888, char * 0x00760dd0, int 0x00000001) line 467 + 17 bytes Py_Main(int 0x00000003, char * * 0x00760d90) line 296 + 44 bytes main(int 0x00000003, char * * 0x00760d90) line 10 + 13 bytes mainCRTStartup() line 338 + 17 bytes Unsurprisingly, it's importing test_b2.py at this point. So this is enough to reproduce the problem: First, make sure test_b2.pyo doesn't exist. Then > python_d -O Adding parser accelerators ... Done. Python 2.1a2 (#10, Feb 23 2001, 14:19:33) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> import sys >>> sys.path.insert(0, "../lib/test") [5223 refs] >>> import test_b2 Boom. Best guess is that I need a debug build to fail, because in the normal build it's still referencing free()d memory anyway, but the normal MS malloc/free don't overwrite free()d memory with trash (so the problem isn't noticed). No guess as to why -O is needed. From fdrake at acm.org Fri Feb 23 21:49:08 2001 From: fdrake at acm.org (Fred L. Drake) Date: Fri, 23 Feb 2001 15:49:08 -0500 Subject: [Python-Dev] test_builtin failing on Windows In-Reply-To: Message-ID: "Tim Peters" wrote: > Unsurprisingly, it's importing test_b2.py at this point. > So this is enough to reproduce the problem: ... > Best guess is that I need a debug build to fail, because > in the normal build > it's still referencing free()d memory anyway, but the > normal MS malloc/free > don't overwrite free()d memory with trash (so the > problem isn't noticed). > No guess as to why -O is needed. This sounds like there's a difference in when someting gets DECREFed differently when the optimizations are performed; perhaps that code hasn't kept up with the pace of change? I'm not familiar enough with that code to be able to check it quickly with any level of confidence, however. -Fred -- Fred L. Drake, Jr. PythonLabs at Digital Creations From tim.one at home.com Fri Feb 23 21:49:17 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 15:49:17 -0500 Subject: [Python-Dev] test_builtin failing on Windows In-Reply-To: Message-ID: The second time we get to here (in com_test, compile.c, and when running python_d -O blah/blah/test_builtin.py, and test_b2.pyo doesn't exist): co = (PyObject *) icompile(CHILD(n, 0), c); if (co == NULL) { c->c_errors++; return; } symtable_exit_scope(c->c_symtable); if (co == NULL) { c->c_errors++; i = 255; closure = 0; } else { i = com_addconst(c, co); Py_DECREF(co); ************** HERE ********* closure = com_make_closure(c, (PyCodeObject *)co); } the refcount of co is 1 before we do the Py_DECREF. Everything else follows from that. In the failing 2nd time thru this code, com_addconst finds the thing already, so com_addconst doesn't boost the refcount above 1. 
The code appears a bit confused regardless (e.g., it checks for co==NULL twice, but it looks impossible for the second test to succeed). From jeremy at alum.mit.edu Fri Feb 23 21:47:57 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 23 Feb 2001 15:47:57 -0500 (EST) Subject: [Python-Dev] test_builtin failing on Windows In-Reply-To: References: Message-ID: <14998.52349.936778.169519@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "TP" == Tim Peters writes: >> But only if run under a debug build *and* passing -O to Python: >> >> *And* only if the .pyc/.pyo files reachable from Lib/ are deleted >> before running it. I do not see a problem running a debug build with -O on Linux. Is it possible that this build does not contain the updates to compile.c *and* symtable.c that were checked in this morning? The problem you are describing sounds a little like the error I had before the symtable.c patch (which added in an INCREF) -- except that I was seeing the error with all the time. Jeremy From jeremy at alum.mit.edu Fri Feb 23 21:52:49 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 23 Feb 2001 15:52:49 -0500 (EST) Subject: [Python-Dev] test_builtin failing on Windows In-Reply-To: References: Message-ID: <14998.52641.104080.334453@w221.z064000254.bwi-md.dsl.cnc.net> Yeah. The code is obviously broken. The second co==NULL test should go and the DECREF should be below the com_make_closure() call. Do you want to fix it or should I? Jeremy From tim.one at home.com Fri Feb 23 22:44:13 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 16:44:13 -0500 Subject: [Python-Dev] test_builtin failing on Windows In-Reply-To: <14998.52641.104080.334453@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: [Jeremy] > Yeah. The code is obviously broken. The second co==NULL test should > go and the DECREF should be below the com_make_closure() call. Do you > want to fix it or should I? I'll do it: a crash isn't easy to provoke without the MS debug landfill behavior, so it's easiest for me to test it. all's-well-that-ends-ly y'rs - tim From thomas at xs4all.net Fri Feb 23 22:46:26 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Fri, 23 Feb 2001 22:46:26 +0100 Subject: [Python-Dev] OS2 support ? Message-ID: <20010223224626.C16781@xs4all.nl> Is OS2 still supported at all ? I noticed this, in PC/os2vacpp/config.h: /* Provide a default library so writers of extension modules * won't have to explicitly specify it anymore */ #pragma library("Python15.lib") -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From tim.one at home.com Fri Feb 23 22:56:05 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 16:56:05 -0500 Subject: [Python-Dev] Is outlawing-nested-import-* only an implementation issue? In-Reply-To: <14998.35635.32450.338318@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: I hate to be repetitive , but forget Scheme! Scheme has nothing like "import *" or Python's flavor of eval/exec. The only guidance we'll get there is that the Scheme designers were so put off by mixing lexical scoping with eval that even *referencing* non-toplevel vars inside eval's argument isn't supported. hmm-on-second-thought-let's-pay-a-lot-of-attention-to-scheme<0.6-wink>-ly y'rs - tim From guido at digicool.com Fri Feb 23 23:08:22 2001 From: guido at digicool.com (Guido van Rossum) Date: Fri, 23 Feb 2001 17:08:22 -0500 Subject: [Python-Dev] OS2 support ? In-Reply-To: Your message of "Fri, 23 Feb 2001 22:46:26 +0100." 
<20010223224626.C16781@xs4all.nl> References: <20010223224626.C16781@xs4all.nl> Message-ID: <200102232208.RAA32475@cj20424-a.reston1.va.home.com> > Is OS2 still supported at all ? Good question. Does anybody still care about OS/2? There's a Python for OS/2 homepage here: http://warped.cswnet.com/~jrush/python_os2/index.html but it is still at 1.5.2. I don't know of that was built with the sources in PC/os2vacpp/... Maybe you can ask Jeff Rush? --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Fri Feb 23 23:18:26 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 17:18:26 -0500 Subject: [Python-Dev] OS2 support ? In-Reply-To: <20010223224626.C16781@xs4all.nl> Message-ID: [Thomas Wouters] > Is OS2 still supported at all ? Not by me, and, AFAIK, not by anyone else either. Looks like nobody touched it in 2 1/2 years, and a "Jeff Rush" is the only one who ever did. From jeremy at alum.mit.edu Fri Feb 23 23:30:11 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 23 Feb 2001 17:30:11 -0500 (EST) Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: References: <14997.54747.56767.641188@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <14998.58483.298372.165614@w221.z064000254.bwi-md.dsl.cnc.net> Couple of issues the come to mind about __future__: 1 Should this work? if x: from __future__ import nested_scopes I presume not, but the sketch of the rules you posted earlier presumably allow it. 2. How should the interactive interpreter be handled? I presume if you type >>> from __future__ import nested_scopes That everything thereafter will be compiled with nested scopes. This ends up being a little tricky, because the interpreter has to hang onto this information and tell the compiler about it. Jeremy From tim.one at home.com Fri Feb 23 23:56:39 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 17:56:39 -0500 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: <14998.58483.298372.165614@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: [Jeremy] > 1 Should this work? > > if x: > from __future__ import nested_scopes > > I presume not, but the sketch of the rules you posted earlier > presumably allow it. You have to learn to think more like tabnanny: "module scope" obviously means "indent level 0" if you're obsessed with whitespace . > 2. How should the interactive interpreter be handled? You're kidding. I thought we agreed to drop the interactive interpreter for 2.1? (Let's *really* give 'em something to carp about ...) > I presume if you type > >>> from __future__ import nested_scopes > > That everything thereafter will be compiled with nested scopes. That's my guess too, of course. > This ends up being a little tricky, because the interpreter has to > hang onto this information and tell the compiler about it. Ditto for python -i some_script.py where some_script.py contains a magical import. OTOH, does exec-compiled (or execfile-ed) code start with a clean slate, or inherent the setting of the module from which it's exec[file]'ed? I think the latter has to be true. Could get messy, so it's a good thing we've got several whole days to work out the kinks ... 
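Here's a small sketch of the exec question, using nested_scopes as the feature (nothing settled -- it's only meant to pin down what "inherit the setting" would mean):

    from __future__ import nested_scopes

    code = ("def adder(n):\n"
            "    def add(x): return x + n\n"
            "    return add\n"
            "print adder(3)(4)\n")
    exec code   # prints 7 only if the exec'ed code inherits the module's request;
                # without nested scopes the inner add() can't see n and dies with NameError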
business-as-usual-ly y'rs - tim From jeremy at alum.mit.edu Sat Feb 24 00:00:59 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 23 Feb 2001 18:00:59 -0500 (EST) Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: References: <14998.58483.298372.165614@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <14998.60331.444051.552208@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "TP" == Tim Peters writes: TP> [Jeremy] >> 1 Should this work? >> >> if x: from __future__ import nested_scopes >> >> I presume not, but the sketch of the rules you posted earlier >> presumably allow it. TP> You have to learn to think more like tabnanny: "module scope" TP> obviously means "indent level 0" if you're obsessed with TP> whitespace . Hmmmm... I'm not yet sure how to deduce indent level 0 inside the parser. Were we going to allow? try: from __future__ import curly_braces except ImportError: ... Jeremy From pf at artcom-gmbh.de Sat Feb 24 00:01:09 2001 From: pf at artcom-gmbh.de (Peter Funk) Date: Sat, 24 Feb 2001 00:01:09 +0100 (MET) Subject: [Python-Dev] RE: Please use __progress__ instead of __future__ (was Other situations like this) In-Reply-To: from Tim Peters at "Feb 23, 2001 3:24:48 am" Message-ID: Hi, Tim Peters: [...] > Any statement of the form > > from __future__ import shiny > > becomes unnecessary as soon as shiny's future arrives, at which point the > statement can be removed. The statement is necessary only so long as shiny > *is* in the future. So the name is thoroughly appropriate. [...] Obviously you assume, that software written in Python will be bundled only with one certain version of the Python interpreter. This might be true for Windows, where Python is no integral part of base operating system. Not so for Linux: There application developers have to support a range of versions covering at least 3 years, if they don't want to start fighting against the preinstalled Python. A while ago I decided to drop the support for Python 1.5.1 and earlier in our software. This has bitten me bad: Upgrading the Python 1.5.1 installation to 1.5.2 on SuSE Linux 6.0 machine at a customer site resulted in a nightmare. Obviously I would have saved half of the night, if I had decided to install a development system (GCC, libs ...) there and would have Python recompiled from source instead of trying to incrementally upgrade parts of the system using the precompiled binary RPMs delivered by SuSE). Now I have learned my lessons and I will not drop support for 1.5.2 until 2003. BTW: SuSE will start to ship SuSE Linux 7.1 just now in the US (it is available here since Feb 10th). AFAIK this is the first Linux distribution coming with Python 2.0 as the default Python. Every other commercially used Linux system out there probably has Python 1.5.2 or older. > Given the rules I already posted, it will be very easy to write a Python > tool to identify obsolete __future__ imports and remove them (if you want). [...] Hmmm... If my Python apps have to support for example Python from version 2.1 up to 2.5 or 2.6 in 2003, I certainly have to leave the 'from __future__ import ...'-statements alone and can't remove them without sacrifying backward compatibility to the Python interpreter which made this feature available for the first time. At this time __future__ will contain features, that are 2.5 years old. BTW: We will abstain from using string methods, augmented assignments and list compr. for at least the next two years out of similar reasons. 
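The kind of guard this forces on us looks roughly like this (only a sketch; sys.hexversion goes back to 1.5.2, string methods don't):

    import sys, string

    if sys.hexversion >= 0x02000000:
        def join_words(parts):
            return " ".join(parts)          # string methods: 2.0 and later only
    else:
        def join_words(parts):
            return string.join(parts, " ")  # works on 1.5.2 as well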
On the other hand I would never bother with IO-Port hacking to get a 200Hz and 1.5 second long "beep" out of the PC builtin speaker... ;-) Have a nice weekend and good night, Peter From akuchlin at mems-exchange.org Sat Feb 24 00:09:37 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Fri, 23 Feb 2001 18:09:37 -0500 Subject: [Python-Dev] Re: [Distutils] distutils, uninstaller In-Reply-To: <03f201c09db6$cf201990$e000a8c0@thomasnotebook>; from thomas.heller@ion-tof.com on Fri, Feb 23, 2001 at 05:36:44PM +0100 References: <03f201c09db6$cf201990$e000a8c0@thomasnotebook> Message-ID: <20010223180937.A5178@ute.cnri.reston.va.us> On Fri, Feb 23, 2001 at 05:36:44PM +0100, Thomas Heller wrote: >I've uploaded the bdist_wininst uninstaller >patch to sourceforge: >http://sourceforge.net/patch/?func=detailpatch&patch_id=103948&group_id=5470 Can anyone take a look at the patch just as a sanity check? I can't really comment on it, but if someone else gives it a look, Thomas can go ahead and check it in. >Another thing: Shouldn't the distutils version number change >before the beta? I suggest going from 1.0.1 to 1.0.2. Good point. It doesn't look like beta1 will be happening until late next week due to the nested scoping changes, but I'll do that before the release. --amk From pedroni at inf.ethz.ch Sat Feb 24 00:16:55 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Sat, 24 Feb 2001 00:16:55 +0100 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) References: Message-ID: <005801c09dee$b7fc0ca0$f979fea9@newmexico> Hi. [Tim Peters] > > 2. How should the interactive interpreter be handled? > > You're kidding. I thought we agreed to drop the interactive interpreter for > 2.1? (Let's *really* give 'em something to carp about ...) > > > I presume if you type > > >>> from __future__ import nested_scopes > > > > That everything thereafter will be compiled with nested scopes. > > That's my guess too, of course. > > > This ends up being a little tricky, because the interpreter has to > > hang onto this information and tell the compiler about it. > > Ditto for > > python -i some_script.py This make sense but I guess people will ask for a way to disable the feature after a while in the session, even trickier. > where some_script.py contains a magical import. OTOH, does exec-compiled > (or execfile-ed) code start with a clean slate, or inherent the setting of > the module from which it's exec[file]'ed? I think the latter has to be > true. I disagree, although this reduces the number of places where one has to delete from __future__ import when _future_ is here, for some uses of execfile the original program has just little control over what is in the executed file I guess, better having people being explicit there about what they want. And this way we don't have to invent a way for forcing disabling the feature (at least not because of the inherited default problems). exec should not be that different. Or we need an even more complicated mechanismus? like your proposed import not. regards, Samuele Pedroni. From thomas at xs4all.net Sat Feb 24 00:26:51 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Sat, 24 Feb 2001 00:26:51 +0100 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) 
In-Reply-To: <14998.60331.444051.552208@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Fri, Feb 23, 2001 at 06:00:59PM -0500 References: <14998.58483.298372.165614@w221.z064000254.bwi-md.dsl.cnc.net> <14998.60331.444051.552208@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010224002651.D16781@xs4all.nl> On Fri, Feb 23, 2001 at 06:00:59PM -0500, Jeremy Hylton wrote: > Hmmmm... I'm not yet sure how to deduce indent level 0 inside the > parser. Uhm, why are we adding that restriction anyway, if it's hard for the parser/compiler to detect it ? I think I'd like to put them in try/except or if/else clauses, for fully portable code. While on the subject, a way to distinguish between '__future__ not found' and '__future__.feature not found', other than hardcoding the minimal version might be nice. -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From mwh21 at cam.ac.uk Sat Feb 24 01:10:00 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 24 Feb 2001 00:10:00 +0000 Subject: [Python-Dev] RE: Please use __progress__ instead of __future__ (was Other situations like this) In-Reply-To: "Tim Peters"'s message of "Fri, 23 Feb 2001 03:24:48 -0500" References: Message-ID: "Tim Peters" writes: > [Peter Funk] > > I believe __future__ is a bad name. What appears today as the bright > > shining future will be the distant dusty past of tomorrow. But the > > name of the module is not going to change anytime soon. right? > > The name of what module? > > Any statement of the form > > from __future__ import shiny > > becomes unnecessary as soon as shiny's future arrives, at which point the > statement can be removed. The statement is necessary only so long as shiny > *is* in the future. So the name is thoroughly appropriate. Ever been to Warsaw? There's the Old Town, which was built around 1650. Then there's the New Town, which was built around 1700. (The dates may be wrong). I think this is what Peter was talking about. also-see-New-College-Oxford-ly y'rs M. -- MAN: How can I tell that the past isn't a fiction designed to account for the discrepancy between my immediate physical sensations and my state of mind? -- The Hitch-Hikers Guide to the Galaxy, Episode 12 From mwh21 at cam.ac.uk Sat Feb 24 01:14:52 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 24 Feb 2001 00:14:52 +0000 Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: "Eric S. Raymond"'s message of "Thu, 22 Feb 2001 19:14:50 -0500" References: <200102222321.MAA01483@s454.cosc.canterbury.ac.nz> <200102222326.SAA18443@cj20424-a.reston1.va.home.com> <20010222191450.B15506@thyrsus.com> Message-ID: "Eric S. Raymond" writes: > Guido van Rossum : > > > > Language theorists love [exec]. > > > > > > Really? I'd have thought language theorists would be the ones > > > who hate it, given all the problems it causes... > > > > Depends on where they're coming from. Or maybe I should have said > > Lisp folks... > > You are *so* right, Guido! :-) I almost commented about this in reply > to Greg's post earlier. > > Crusty old LISP hackers like me tend to be really attached to being > able to (a) lash up S-expressions that happen to be LISP function calls on > the fly, and then (b) hand them to eval. "No separation between code > and data" is one of the central dogmas of our old-time religion. Really? I thought the "no separation between code and data" thing more referred to macros than anything else. 
Having the full language around at compile time is one of the things that really separates Common Lisp from anything else. I don't think I've ever used #'eval in CL code - it tends to bugger up efficiency even more than the Python version does, for one thing! (eval-when (:compile-toplevel))-ly y'rs M. -- In many ways, it's a dull language, borrowing solid old concepts from many other languages & styles: boring syntax, unsurprising semantics, few automatic coercions, etc etc. But that's one of the things I like about it. -- Tim Peters, 16 Sep 93 From esr at thyrsus.com Sat Feb 24 01:21:39 2001 From: esr at thyrsus.com (Eric S. Raymond) Date: Fri, 23 Feb 2001 19:21:39 -0500 Subject: [Python-Dev] Backwards Incompatibility In-Reply-To: ; from mwh21@cam.ac.uk on Sat, Feb 24, 2001 at 12:14:52AM +0000 References: <200102222321.MAA01483@s454.cosc.canterbury.ac.nz> <200102222326.SAA18443@cj20424-a.reston1.va.home.com> <20010222191450.B15506@thyrsus.com> Message-ID: <20010223192139.A10945@thyrsus.com> Michael Hudson : > > Crusty old LISP hackers like me tend to be really attached to being > > able to (a) lash up S-expressions that happen to be LISP function calls on > > the fly, and then (b) hand them to eval. "No separation between code > > and data" is one of the central dogmas of our old-time religion. > > Really? I thought the "no separation between code and data" thing > more referred to macros than anything else. Another implication; and, as you say, more often actually useful. -- Eric S. Raymond Gun Control: The theory that a woman found dead in an alley, raped and strangled with her panty hose, is somehow morally superior to a woman explaining to police how her attacker got that fatal bullet wound. -- L. Neil Smith From tim.one at home.com Sat Feb 24 01:48:50 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 19:48:50 -0500 Subject: [Python-Dev] RE: Please use __progress__ instead of __future__ (was Other situations like this) In-Reply-To: Message-ID: [Tim] > Any statement of the form > > from __future__ import shiny > > becomes unnecessary as soon as shiny's future arrives, at which > point the statement can be removed. The statement is necessary > only so long as shiny *is* in the future. So the name is > thoroughly appropriate. [Peter Funk] > Obviously you assume, that software written in Python will be bundled > only with one certain version of the Python interpreter. Not really. I think it's more the case that you're viewing this gimmick through the eyes of your particular problems, and criticizing because it don't solve them. However, it wasn't intended to solve them. > This might be rue for Windows, where Python is no integral part of > base operating system. Not so for Linux: There application > developers have to support a range of versions covering at least > 3 years, if they don't want to start fighting against the preinstalled > Python. It's not true that Windows is devoid of compatibility problems. But Windows Python takes a different approach: we even rename the Windows Python DLLs with each release. That way any number of incompatible Pythons can coexist peacefully (this isn't trivial under Windows, because we have to install the core DLL in a specific magic directory). A serious Python app developed for Windows generally ships with the specific Python it wants, too (not unique to Python, of course, serious apps of all kinds ship with the support softare they need on Windows, up to and sometimes even including the basic MS C runtime libs). 
How people on other OSes choose to deal with this is up to them. If you find the Linux approach lacking, I believe you, but the "magical import" mechanism is too feeble a base on which to pin your hopes. Get serious about this! Write a PEP that will truly address your problems. This one does not; I don't even see that it's *related* to your problems. > ... > BTW: SuSE will start to ship SuSE Linux 7.1 just now in the US (it > is available here since Feb 10th). AFAIK this is the first Linux > distribution coming with Python 2.0 as the default Python. Every other > commercially used Linux system out there probably has Python 1.5.2 > or older. Yet another reason to prefer Windows . > ... > Hmmm... If my Python apps have to support for example Python from > version 2.1 up to 2.5 or 2.6 in 2003, I certainly have to leave the > 'from __future__ import ...'-statements alone and can't remove them > without sacrifying backward compatibility to the Python interpreter > which made this feature available for the first time. The only way to write a piece of code that runs under all of 2.1 thru 2.6 is to avoid any behavior whatsoever that's specific to some proper subset of those versions. That's hard, and I don't think "from __future__" even *helps* with that. But it wasn't meant to. It was meant to make life easier for people who *do* upgrade in a timely fashion, in accord with at least the spirit of the existing PEPs on the topic. > At this time __future__ will contain features, that are 2.5 years > old. And ...? That is, what of it? In 1000 years, it will contain features that are 1000 years old. So? Else code written now and never purged of obsolete __future__s would break 1000 years from now. You can fault the scheme on many bases, but not on the basis that it creates new incompatibility problems. Leaving the old __future__s in will help a little in the other direction: code that announces it relies on a __future__ F will reliably fail at compile-time if run under a release less than F's OptionalRelease value. > BTW: We will abstain from using string methods, augmented assignments > and list compr. for at least the next two years out of similar reasons. If that's the best you think can you do, so it goes. It would be nice to think of a better way. But this isn't the right gimmick, and that it doesn't solve your problems doesn't mean it fails to solve anyone's problems. > On the other hand I would never bother with IO-Port hacking to get a > 200Hz and 1.5 second long "beep" out of the PC builtin speaker... ;-) That's compatibility: it worked before under NT and 2000, but not under Win9X, and it has high newbie appeal (I dove it into after making excuses about Win9X Beep() for the umpteenth time on the Tutor list). If you want to make Linux attractive to newbies, implementing Beep() for it too would be an excellent step. If you like, I'll reserve from __future__ import MakeLinuxBearableForNewbies right now . From pedroni at inf.ethz.ch Sat Feb 24 02:02:53 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Sat, 24 Feb 2001 02:02:53 +0100 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) References: Message-ID: <004501c09dfd$9c926360$f979fea9@newmexico> After maybe too short thinking here's an idea along the line keep it simple: 1 ) from __future__ import foofeature * I imagine is more for semantic and syntax changes so it is better if near too the code assuming or needing them. 
So there should be no defaults, each compilation unit (module, exec string, ...) that need the feature should explicitly contain the from import ... (at least for hard-coded execs I see few need to require nested scopes in them so that's not a big problem, for other future features I don't know). * It should be allowed only at module scope indent 0, all post 2.1 compiler will be able to deal with __future__, so putting a try around the import make few sense, a compile-time error will be issued if the feature is not supported. For pre 2.1 compiler I see few possibilities of writing backward compatible code using the from __future__ import , unless one want following to work: try: from __future__ import foofeature # code needing new syntax or assuming new semantic except ImportError: # old style code if the change does not involve syntax this code will work with a pre 2.1 compiler, but >2.1 compilers should be able to recognize the idiom or use some kind of compile-time evalutation, both IMO will require a bunch of special rules and are not that easy to implement. Backward and more compiler friendly code can be written using package or module wrappers: try: import __future__ # check if feature is there from module_using_fetature import * # this will contain from __future__ import feature execpt ImportError: from module_not_using_feature import * 2) interactive mode: * respecting the above rules >>>from __future__ import featujre will activate the feature only in the one-line compilation unit => it has no effect, this can be confusing but it's a coherent behaviour, the other way people will be tempted to ask why importing a feature in a file does not influence the others... At the moment I see two solutions: - supporting the following idiom (I mean everywhere): at top-level indent 0 if 1: from __future__ import foofeature .... - having a cmd-line switch that says what futures are on for the compilation units entered at top-level in an interactive session. This is just a sketch and a material for further reflection. OTOH the implicit other proposal is that if code X will endup being executed having its global namespaces containing a feature cookie coming from __future__ because of an explicit "from import" or because so is the global namespace passed to exec,etc . ; then X should be compiled with the feature on. regards, Samuele Pedroni From jeremy at alum.mit.edu Sat Feb 24 00:30:32 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Fri, 23 Feb 2001 18:30:32 -0500 (EST) Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: <20010224002651.D16781@xs4all.nl> References: <14998.58483.298372.165614@w221.z064000254.bwi-md.dsl.cnc.net> <14998.60331.444051.552208@w221.z064000254.bwi-md.dsl.cnc.net> <20010224002651.D16781@xs4all.nl> Message-ID: <14998.62104.55786.683789@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "TW" == Thomas Wouters writes: TW> On Fri, Feb 23, 2001 at 06:00:59PM -0500, Jeremy Hylton wrote: >> Hmmmm... I'm not yet sure how to deduce indent level 0 inside >> the parser. TW> Uhm, why are we adding that restriction anyway, if it's hard for TW> the parser/compiler to detect it ? I think I'd like to put them TW> in try/except or if/else clauses, for fully portable code. We want this to be a simple compiler directive, rather than something that can be turned on or off at runtime. If it were allowed inside an if/else statement, the compiler, it would become something more like a runtime flag. 
It sounds like you want the feature to be enabled only if the import is actually executed. But that can't work for compile-time directives, because the code has got to be compiled before we find out if the statement is executed. The restriction eliminates weird cases where it makes no sense to use this feature. Why try to invent a meaning for the nonsense code: if 0: from __future__ import nested_scopes TW> While TW> on the subject, a way to distinguish between '__future__ not TW> found' and '__future__.feature not found', other than hardcoding TW> the minimal version might be nice. There will definitely be a difference! Presumably all versions of Python after and including 2.1 will know about __future__. In those cases, the compiler will complain if feature is no defined. The complaint can be fairly specific: "__future__ feature curly_braces is not defined." In Python 2.0 and earlier, you'll just get an ImportError: No module named __future__. I'm assuming the compiler won't need to know anything about the values that are bound in __future__. It will just check to see whether the name is defined. Jeremy From tim.one at home.com Sat Feb 24 02:18:09 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 20:18:09 -0500 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: <005801c09dee$b7fc0ca0$f979fea9@newmexico> Message-ID: >> Ditto for >> >> python -i some_script.py [Samuele Pedroni] > This make sense but I guess people will ask for a way to disable > the feature after a while in the session, even trickier. The purpose is to let interested people use new features "early", not to let people jerk off. That is, they can ask all they want . >> [Tim sez exec and execfile should inherit the module's setting] > I disagree, although this reduces the number of places where one > has to delete from __future__ import when _future_ is here, That isn't the intent. The intent is that a module containing from __future__ import f is announcing it *wants* future semantics for f. Therefore the module should act, in *all* respects (incl. exec and execfile), as if the release were already the future one in which f is no longer optional. If exec, eval or execfile continue to act like the older release, the module isn't getting the semantics it specifically asked for, and the user isn't getting a correct test of future functionality. > for some uses of execfile the original program has just little > control over what is in the executed file I guess, Then they may have deeper problems than this gimmick can address, but they're not going to find out whether the files they're execfile'ing *will* have a problem in the future unless the module asking for future semantics gets future semantics. > better having people being explicit there about what they want. They already are being explicit: they get future semantics when and only when they include a from__future__ thingie. > And this way we don't have to invent a way for forcing disabling > the feature (at least not because of the inherited > default problems). There is *no* intent here that a single module be able to pick and choose different behaviors in different contexts. The purpose is to allow early testing and development of code to ensure it will work correctly in a future release. That's all. > ... > Or we need an even more complicated mechanismus? like your > proposed import not. 
I doubt core Python will ever support "moving back in time" (a heavily conditionalized code base is much harder to maintain -- ask Jeremy how much fun he's having trying to make this optional *now*). May (or may not) be an interesting idea for repackagers to consider, though. From tim.one at home.com Sat Feb 24 02:23:19 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 20:23:19 -0500 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: <14998.60331.444051.552208@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: Jeremy] > Hmmmm... I'm not yet sure how to deduce indent level 0 inside the > parser. > > Were we going to allow? > > try: > from __future__ import curly_braces > except ImportError: > ... Sounds like that's easier to implement <0.5 wink>. Sure. So let's take the human view of "module-level" instead of the tabnanny view after all. That way I don't have to change the words in the proto-PEP either . That means: if x: from __future__ import nested_scopes should work too. Does it also mean exec "from __future__ import nested_scopes\n" should work? No. From tim.one at home.com Sat Feb 24 03:07:32 2001 From: tim.one at home.com (Tim Peters) Date: Fri, 23 Feb 2001 21:07:32 -0500 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: <20010224002651.D16781@xs4all.nl> Message-ID: [Jeremy Hylton] > Hmmmm... I'm not yet sure how to deduce indent level 0 inside the > parser. [Thomas Wouters] > Uhm, why are we adding that restriction anyway, if it's hard for the > parser/compiler to detect it ? I talked with Jeremy, and turns out it's not. > I think I'd like to put them in try/except or if/else clauses, for > fully portable code. And, sorry, but I take back saying that we should allow that. We shouldn't. Despite that it looks like an import statement (and actually *is* one, for that matter), the key info is extracted at compile time. So in stuff like if x: from __future__ import alabaster_weenoblobs whether or not alabaster_weenoblobs is in effect has nothing to do with the runtime value of x. So it's plain Bad to allow it to look as if it did. The only stuff that can textually precede: from __future__ import f is: + The module docstring (if any). + Comments. + Blank lines. + Other instances of from __future__. This also makes clear that one of these things applies to the entire module. Again, the thrust of this is *not* to help in writing portable code. It's to help users upgrade to the next release, in two ways: (1) by not breaking their code before the next release; and, (2) to let them migrate their code to next-release semantics incrementally. Note: "next release" means whatever MandatoryRelease is associated with the feature of interest. "Cross version portable code" is a more pressing problem for some, but is outside the scope of this gimmick. *This* gimmick is something we can actually do <0.5 wink>. From thomas at xs4all.net Sat Feb 24 04:34:23 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Sat, 24 Feb 2001 04:34:23 +0100 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) 
In-Reply-To: <14998.62104.55786.683789@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Fri, Feb 23, 2001 at 06:30:32PM -0500 References: <14998.58483.298372.165614@w221.z064000254.bwi-md.dsl.cnc.net> <14998.60331.444051.552208@w221.z064000254.bwi-md.dsl.cnc.net> <20010224002651.D16781@xs4all.nl> <14998.62104.55786.683789@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <20010224043423.F16781@xs4all.nl> On Fri, Feb 23, 2001 at 06:30:32PM -0500, Jeremy Hylton wrote: > >>>>> "TW" == Thomas Wouters writes: > TW> On Fri, Feb 23, 2001 at 06:00:59PM -0500, Jeremy Hylton wrote: > >> Hmmmm... I'm not yet sure how to deduce indent level 0 inside > >> the parser. > TW> Uhm, why are we adding that restriction anyway, if it's hard for > TW> the parser/compiler to detect it ? I think I'd like to put them > TW> in try/except or if/else clauses, for fully portable code. > If it were allowed inside an if/else statement, the compiler, it would > become something more like a runtime flag. It sounds like you want the > feature to be enabled only if the import is actually executed. But that > can't work for compile-time directives, because the code has got to be > compiled before we find out if the statement is executed. Right, I don't really want them in if/else blocks, you're right. Try/except would be nice, though. > TW> While > TW> on the subject, a way to distinguish between '__future__ not > TW> found' and '__future__.feature not found', other than hardcoding > TW> the minimal version might be nice. > There will definitely be a difference! > Presumably all versions of Python after and including 2.1 will know > about __future__. In those cases, the compiler will complain if > feature is no defined. The complaint can be fairly specific: > "__future__ feature curly_braces is not defined." Will this be a warning, or an error/exception ? Must-stop-working-sleep-is-calling-ly y'rs, ;) -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From tim.one at home.com Sat Feb 24 06:51:57 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 24 Feb 2001 00:51:57 -0500 Subject: Draft PEP (RE: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!)) In-Reply-To: <14998.31575.97664.422182@anthem.wooz.org> Message-ID: Gimme a PEP number, and I'll post this to the real users too . PEP: ? Title: Back to the __future__ Version: $Revision: 1.0 $ Author: Tim Peters Python-Version: 2.1 Status: ? Type: Standards Track Post-History: Motivation From time to time, Python makes an incompatible change to the advertised semantics of core language constructs, or changes their accidental (implementation-dependent) behavior in some way. While this is never done capriciously, and is always done with the aim of improving the language over the long term, over the short term it's contentious and disrupting. The "Guidelines for Language Evolution" PEP [1] suggests ways to ease the pain, and this PEP introduces some machinery in support of that. The "Statically Nested Scopes" PEP [2] is the first application, and will be used as an example here. Intent When an incompatible change to core language syntax or semantics is being made: 1. The release C that introduces the change does not change the syntax or semantics by default. 2. A future release R is identified in which the new syntax or semantics will be enforced. 3. 
The mechanisms described in the "Warning Framework" PEP [3] are used to generate warnings, whenever possible, about constructs or operations whose meaning may[4] change in release R. 4. The new future_statement (see below) can be explicitly included in a module M to request that the code in module M use the new syntax or semantics in the current release C. So old code continues to work by default, for at least one release, although it may start to generate new warning messages. Migration to the new syntax or semantics can proceed during that time, using the future_statement to make modules containing it act as if the new syntax or semantics were already being enforced. Syntax A future_statement is simply a from/import statement using the reserved module name __future__: future_statement: "from" "__future__" "import" feature ["as" name] ("," feature ["as" name])* feature: identifier In addition, all future_statments must appear near the top of the module. The only lines that can appear before a future_statement are: + The module docstring (if any). + Comments. + Blank lines. + Other future_statements. Example: """This is a module docstring.""" # This is a comment, preceded by a blank line and followed by # a future_statement. from __future__ import nested_scopes from math import sin from __future__ import alabaster_weenoblobs # compile-time error! # That was an error because preceded by a non-future_statement. Semantics A future_statement is recognized and treated specially at compile time: changes to the semantics of core constructs are often implemented by generating different code. It may even be the case that a new feature introduces new incompatible syntax (such as a new reserved word), in which case the compiler may need to parse the module differently. Such decisions cannot be pushed off until runtime. For any given release, the compiler knows which feature names have been defined, and raises a compile-time error if a future_statement contains a feature not known to it[5]. The direct runtime semantics are the same as for any import statement: there is a standard module __future__.py, described later, and it will be imported in the usual way at the time the future_statement is executed. The *interesting* runtime semantics depend on the feature(s) "imported" by the future_statement(s) appearing in the module. Since a module M containing a future_statement naming feature F explicitly requests that the current release act like a future release with respect to F, any code interpreted dynamically from an eval, exec or execfile executed by M will also use the new syntax or semantics associated with F. A future_statement appearing "near the top" (see Syntax above) of code interpreted dynamically by an exec or execfile applies to the code block executed by the exec or execfile, but has no further effect on the module that executed the exec or execfile. Note that there is nothing special about the statement: import __future__ [as name] That is not a future_statement; it's an ordinary import statement, with no special syntax restrictions or special semantics. Interactive shells may pose special problems. The intent is that a future_statement typed at an interactive shell prompt affect all code typed to that shell for the remaining life of the shell session. It's not clear how to achieve that. Example Consider this code, in file scope.py: x = 42 def f(): x = 666 def g(): print "x is", x g() f() Under 2.0, it prints: x is 42 Nested scopes[2] are being introduced in 2.1. 
But under 2.1, it still prints x is 42 and also generates a warning. In 2.2, and also in 2.1 *if* "from __future__ import nested_scopes" is included at the top of scope.py, it prints x is 666 Standard Module __future__.py Lib/__future__.py is a real module, and serves three purposes: 1. To avoid confusing existing tools that analyze import statements and expect to find the modules they're importing. 2. To ensure that future_statements run under releases prior to 2.1 at least yield runtime exceptions (the import of __future__ will fail, because there was no module of that name prior to 2.1). 3. To document when incompatible changes were introduced, and when they will be-- or were --made mandatory. This is a form of executable documentation, and can be inspected programatically via importing __future__ and examining its contents. Each statment in __future__.py is of the form: FeatureName = ReleaseInfo ReleaseInfo is a pair of the form: (OptionalRelease, MandatoryRelease) where, normally, OptionalRelease < MandatoryRelease, and both are 5-tuples of the same form as sys.version_info: (PY_MAJOR_VERSION, # the 2 in 2.1.0a3; an int PY_MINOR_VERSION, # the 1; an int PY_MICRO_VERSION, # the 0; an int PY_RELEASE_LEVEL, # "alpha", "beta", "candidate" or "final"; string PY_RELEASE_SERIAL # the 3; an int ) OptionalRelease records the first release in which from __future__ import FeatureName was accepted. In the case of MandatoryReleases that have not yet occurred, MandatoryRelease predicts the release in which the feature will become part of the language. Else MandatoryRelease records when the feature became part of the language; in releases at or after that, modules no longer need from __future__ import FeatureName to use the feature in question, but may continue to use such imports. MandatoryRelease may also be None, meaning that a planned feature got dropped. No line will ever be deleted from __future__.py. Example line: nested_scopes = (2, 1, 0, "beta", 1), (2, 2, 0, "final", 0) This means that from __future__ import nested_scopes will work in all releases at or after 2.1b1, and that nested_scopes are intended to be enforced starting in release 2.2. Questions and Answers Q: What about a "from __past__" version, to get back *old* behavior? A: Outside the scope of this PEP. Seems unlikely to the author, though. Write a PEP if you want to pursue it. Q: What about incompatibilites due to changes in the Python virtual machine? A: Outside the scope of this PEP, although PEP 5[1] suggests a grace period there too, and the future_statement may also have a role to play there. Q: What about incompatibilites due to changes in Python's C API? A: Outside the scope of this PEP. Q: I want to wrap future_statements in try/except blocks, so I can use different code depending on which version of Python I'm running. Why can't I? A: Sorry! try/except is a runtime feature; future_statements are primarily compile-time gimmicks, and your try/except happens long after the compiler is done. That is, by the time you do try/except, the semantics in effect for the module are already a done deal. Since the try/except wouldn't accomplish what it *looks* like it should accomplish, it's simply not allowed. We also want to keep these special statements very easy to find and to recognize. Note that you *can* import __future__ directly, and use the information in it, along with sys.version_info, to figure out where the release you're running under stands in relation to a given feature's status. 
Q: Going back to the nested_scopes example, what if release 2.2 comes along and I still haven't changed my code? How can I keep the 2.1 behavior then? A: By continuing to use 2.1, and not moving to 2.2 until you do change your code. The purpose of future_statement is to make life easier for people who keep keep current with the latest release in a timely fashion. We don't hate you if you don't, but your problems are much harder to solve, and somebody with those problems will need to write a PEP addressing them. future_statement is aimed at a different audience. Copyright This document has been placed in the public domain. References and Footnotes [1] http://python.sourceforge.net/peps/pep-0005.html [2] http://python.sourceforge.net/peps/pep-0227.html [3] http://python.sourceforge.net/peps/pep-0230.html [4] Note that this is "may" and not "will": better safe than sorry. Of course spurious warnings won't be generated when avoidable with reasonable cost. [5] This ensures that a future_statement run under a release prior to the first one in which a given feature is known (but >= 2.1) will raise a compile-time error rather than silently do a wrong thing. If transported to a release prior to 2.1, a runtime error will be raised because of the failure to import __future__ (no such module existed in the standard distribution before the 2.1 release, and the double underscores make it a reserved name). Local Variables: mode: indented-text indent-tabs-mode: nil End: From tim.one at home.com Sat Feb 24 07:06:30 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 24 Feb 2001 01:06:30 -0500 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: <20010224043423.F16781@xs4all.nl> Message-ID: [Thomas Wouters] > ... > Right, I don't really want them in if/else blocks, you're right. > Try/except would be nice, though. Can you give a specific example of why it would be nice? Since this is a compile-time gimmick, I can't imagine that it would do anything but confuse the essential nature of this gimmick. Note that you *can* do excuciating stuff like: try: import __future__ except: import real_old_fangled_code as guacamole else: if hasattr(__future__, "nested_scopes"): import new_fangled_code as guacamole else: import old_fangled_code as guacamole but in such a case I expect I'd be much happier just keying off sys.hexversion, or, even better, running a tiny inline test case to *see* what the semantics are. [Jeremy] >> Presumably all versions of Python after and including 2.1 will know >> about __future__. In those cases, the compiler will complain if >> feature is no defined. The complaint can be fairly specific: >> "__future__ feature curly_braces is not defined." [back to Thomas] > Will this be a warning, or an error/exception ? A compile-time exception: when you're asking for semantics the compiler can't give you, the presumption has to favor that you're in big trouble. You can't catch such an exception directly in the same module (because it occurs at compile time), but can catch it if you import the module from elsewhere. But I *suspect* you're trying to solve a problem this stuff isn't intended to address, which is why a specific example would really help. From tim.one at home.com Sat Feb 24 08:54:40 2001 From: tim.one at home.com (Tim Peters) Date: Sat, 24 Feb 2001 02:54:40 -0500 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: Message-ID: [Tim] > ... 
> A compile-time exception: when you're asking for semantics the compiler > can't give you, the presumption has to favor that you're in big trouble. > You can't catch such an exception directly in the same module (because it > occurs at compile time), but can catch it if you import the module from > elsewhere. Relatedly, you could do: try: compile("from __future__ import whatever", "", "exec") except whatever2: whatever3 else: whatever4 Then the future_stmt's compile-time is your module's runtime. still-looks-pretty-useless-to-me-though-ly y'rs - tim From guido at digicool.com Sat Feb 24 17:44:54 2001 From: guido at digicool.com (Guido van Rossum) Date: Sat, 24 Feb 2001 11:44:54 -0500 Subject: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!) In-Reply-To: Your message of "Sat, 24 Feb 2001 04:34:23 +0100." <20010224043423.F16781@xs4all.nl> References: <14998.58483.298372.165614@w221.z064000254.bwi-md.dsl.cnc.net> <14998.60331.444051.552208@w221.z064000254.bwi-md.dsl.cnc.net> <20010224002651.D16781@xs4all.nl> <14998.62104.55786.683789@w221.z064000254.bwi-md.dsl.cnc.net> <20010224043423.F16781@xs4all.nl> Message-ID: <200102241644.LAA03659@cj20424-a.reston1.va.home.com> > Right, I don't really want them in if/else blocks, you're right. Try/except > would be nice, though. Can't allow that. See Tim's draft PEP; allowing tis makes the meaning too muddy. I suppose you want this because you think you may have code that wants to use a new feature when it exists, but which should still work when it doesn't. The solution, given the constraints on the placement of the __future__ import, is to place the code that uses the new feature in a separate module and have another separate module that does not use the new feature; then a parent module can try to import the first one and if that fails, import the second one. But I bet that in most cases you'll be better off coding without dependence on the new feature if your code needs to be backwards compatible! --Guido van Rossum (home page: http://www.python.org/~guido/) > > Presumably all versions of Python after and including 2.1 will know > > about __future__. In those cases, the compiler will complain if > > feature is no defined. The complaint can be fairly specific: > > "__future__ feature curly_braces is not defined." > > Will this be a warning, or an error/exception ? Error of course. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Sat Feb 24 17:54:27 2001 From: guido at digicool.com (Guido van Rossum) Date: Sat, 24 Feb 2001 11:54:27 -0500 Subject: Draft PEP (RE: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!)) In-Reply-To: Your message of "Sat, 24 Feb 2001 00:51:57 EST." References: Message-ID: <200102241654.LAA03687@cj20424-a.reston1.va.home.com> > Since a module M containing a future_statement naming feature F > explicitly requests that the current release act like a future release > with respect to F, any code interpreted dynamically from an eval, exec > or execfile executed by M will also use the new syntax or semantics > associated with F. This means that a run-time flag must be available for inspection by eval() and execfile(), at least. I'm not sure that I agree with this for execfile() though -- that's often used by mechanisms that emulate module imports, and there it would be better if it started off with all features reset to their default. 
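The kind of mechanism I mean looks roughly like this (a sketch, not any particular tool's code):

    import sys, new

    def load_as_module(name, path):
        mod = new.module(name)          # fresh, empty module object
        mod.__file__ = path
        execfile(path, mod.__dict__)    # arguably shouldn't pick up the caller's __future__ choices
        sys.modules[name] = mod
        return mod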
I'm also not sure about exec and eval() -- it all depends on the reason why exec is being invoked. Plus, exec and eval() also take a compiled code object, and there it's too late to change the future. Which leads to the question: should compile() also inherit the future settings? It's certainly a lot easier to implement if exec c.s. are *not* affected by the future selection of the invoking environment. And if you *want* it, at least for exec, you can insert the future_statement in front of the executed code string. > Interactive shells may pose special problems. The intent is that a > future_statement typed at an interactive shell prompt affect all code > typed to that shell for the remaining life of the shell session. It's > not clear how to achieve that. The same flag that I mentioned above can be used here -- basically, we can treat each interactive command as an "exec". Except that this time, the command that is the future_statement *does* export its flag to the invoking environment. Plus, I've made a good case against the flag. :-( --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Sun Feb 25 23:44:09 2001 From: tim.one at home.com (Tim Peters) Date: Sun, 25 Feb 2001 17:44:09 -0500 Subject: Draft PEP (RE: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!)) In-Reply-To: <200102241654.LAA03687@cj20424-a.reston1.va.home.com> Message-ID: [Tim] > Since a module M containing a future_statement naming feature F > explicitly requests that the current release act like a > future release with respect to F, any code interpreted dynamically > from an eval, exec or execfile executed by M will also use the > new syntax or semantics associated with F. [Guido] > This means that a run-time flag must be available for inspection by > eval() and execfile(), at least. eval(), compile() and input() too. Others? > I'm not sure that I agree with this for execfile() though -- that's > often used by mechanisms that emulate module imports, and there it > would be better if it started off with all features reset to their > default. Code emulating module imports is rare. People writing such mechanisms had better be experts! I don't want to warp the normal case to cater to a handful of deep-magic propeller-heads (they can take care of themselves). > I'm also not sure about exec and eval() -- it all depends on the > reason why exec is being invoked. We're not mind-readers, though. Best to give a simple (to understand) rule that caters to normal cases and let the experts worm around the cases where they didn't mean what they said; e.g., if for some reason they want their entire module to use nested scopes *except* for execfile, they can move the execfile into another module and not ask for nested scopes at the top of the latter, then call the latter from the original module. But then they're no longer getting a test of what's going to happen when nested scopes become The Rule, either. Note too that this mechanism is intended to be used for more than just the particular case of nested scopes. For example, consider changing the meaning of integer division. If someone asks for that, then of course they want exec "i = 1/2\n" or eval("1/2") within the module not to compute 0. There is no mechanism in the PEP now to make life easier for people who don't really want what they asked for. Perhaps there should be. 
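For concreteness, here is what that case looks like as code (the feature
name "division" is hypothetical -- no such __future__ feature exists in
2.1):

    from __future__ import division   # hypothetical feature name

    i = 1/2                # the module's author expects 0.5 here ...
    exec "j = 1/2\n"       # ... and presumably expects 0.5 here too
    k = eval("1/2")        # ... and here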
But if you believe (as I intended) that the PEP is aimed at making it easier to prepare code for a future release, all-or-nothing for a module is really the right behavior. > Plus, exec and eval() also take a compiled code object, and there it's > too late to change the future. That's OK; the PEP *intended* to restrict this to cases where the gimmicks in question also compile the code from strings. I'll change that. > Which leads to the question: should compile() also inherit the future > settings? If it doesn't, the module containing it is not going to act like it will in the MandatoryRelease associated with the __future__ requested. And in that case, I don't believe __future__ would be doing its primary job: it's not helping me find out how the module *will* act. > It's certainly a lot easier to implement if exec c.s. are *not* > affected by the future selection of the invoking environment. And if > you *want* it, at least for exec, you can insert the future_statement > in front of the executed code string. But not for eval() (see above), or input(). >> Interactive shells may pose special problems. The intent is that a >> future_statement typed at an interactive shell prompt affect all code >> typed to that shell for the remaining life of the shell session. It's >> not clear how to achieve that. > The same flag that I mentioned above can be used here -- basically, we > can treat each interactive command as an "exec". Except that this > time, the command that is the future_statement *does* export its flag > to the invoking environment. Plus, I've made a good case against the > flag. :-( I think you've pointed out that *sometimes* people may not want what it does, and that implementing it is harder than not implementing it. I favor making the rules as easy as possible for people who want to know how their module will behave after the feature is mandatory, and believe that all-or-nothing is clearly a better default. In either case, changing the default on a pick-or-choose basis within a single module would require additional gimmicks not in the current PEP (e.g., maybe more optional flags to eval() etc; or maybe some new builtin function to toggle it; or maybe more pseudo-imports; or ...). I'm not convinced more gimmicks are *needed*, though, and don't want to see this PEP bloat beyond my original intent for it. it's-a-feeble-mechanism-aimed-at-a-specific-goal-ly y'rs - tim From guido at digicool.com Mon Feb 26 04:14:13 2001 From: guido at digicool.com (Guido van Rossum) Date: Sun, 25 Feb 2001 22:14:13 -0500 Subject: Draft PEP (RE: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!)) In-Reply-To: Your message of "Sun, 25 Feb 2001 17:44:09 EST." References: Message-ID: <200102260314.WAA16873@cj20424-a.reston1.va.home.com> > Code emulating module imports is rare. People writing such mechanisms had > better be experts! I don't want to warp the normal case to cater to a > handful of deep-magic propeller-heads (they can take care of themselves). OK. I'm not completely convinced, but at least 60%, and that's enough. 
--Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Mon Feb 26 08:01:26 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 26 Feb 2001 02:01:26 -0500 Subject: Draft PEP (RE: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!)) In-Reply-To: <200102260314.WAA16873@cj20424-a.reston1.va.home.com> Message-ID: [Tim] >> Code emulating module imports is rare. People writing such >> mechanisms had better be experts! I don't want to warp the >> normal case to cater to a handful of deep-magic propeller-heads >> (they can take care of themselves). [Guido] > OK. I'm not completely convinced, but at least 60%, and that's > enough. Oh, I'm not convinced either. But eval/exec/compile/input/execfile are rare operations (in frequency of occurrence per Kline of code), and I don't want that very tangled tail wagging this dog. I don't think either of us will be wholly convinced in either direction without feedback from the beta. I *have* convinced myself tabnanny will work . But not doctest. doctest basically simulates an interactive shell session one statement at a time, and a new shell session for each docstring (not stmt). My mind simply boggles at imagining all the extra machinery that would need to be in place to make that "work" in all conceivable cases. The __future__ choices doctest itself makes should have no effects on the code it's simulating, but the code it's simulating *should* be affected by the __future__ choices of the module passed to doctest.testmod(); so, at a minimum, it would appear to require a standard way to query a module object for its set of __future__ choices, and an additional argument to compile() allowing to force that set of choices, *and* a way for doctest to tell compile() "oh, ya, if you happen to compile a __future__ statement, and I later execute the code you compiled, make that persist until I tell you to stop" (else simulated __future__ statements won't work as expected). Perhaps those are widespread needs too, but, I doubt it, and I don't think we need to solve the entire problem today regardless. From nas at arctrix.com Mon Feb 26 16:42:34 2001 From: nas at arctrix.com (nas at arctrix.com) Date: Mon, 26 Feb 2001 07:42:34 -0800 Subject: [Python-Dev] GC and Vladimir's obmalloc Message-ID: <20010226074234.A31518@glacier.fnational.com> Executive Summary: obmalloc will allow more efficient GC and we should try hard to get it into 2.1. I've finally spent some time looking at obmalloc and thinking about how to iterate the GC. The advantage would be that objects managed by obmalloc would no longer have to kept in a linked list. That saves both time and memory. I think the right way to do this is to have obmalloc kept track of two separate heaps. One would be for "atomic" objects, the other for "container" objects. I haven't yet figured out how to implement this yet. A lower level malloc interface that takes a heap structure as an argument is an obvious solution. When the GC runs it needs to find container objects. Since obmalloc only deals with blocks of 256 bytes or less, large containers would still have to be stored in a linked list. The rest can be found by searching the obmalloc container heap. Searching the heap is fairly easy. The heap is an array of pointers to pool lists. The only trick is figuring out which parts of the pools are allocated. I think adding the invariant ob_type = NULL means object not allocated is a good solution. 
That pointer could be set to NULL when the object is deallocated which would also be good for catching bugs. If we pay attention to pool->ref.count we don't even have to set those pointers for a newly allocated pool. Some type of GC locking will probably have to be added (we don't want the collector running when objects are in inconsistent states). I think the GC state (an int for each object) for obmalloc objects should be stored separately. Each pool header could have a pointer to an array of ints. This array could be allocated lazily when the GC runs. The advantages would be better cache behavior and less memory use if GC is disabled. Crude generational collection could be done by doing something like treating the first partially used pool in each size class as generation 0, other partially used pools and the first used pool as generation 1, and all other non-free pools as generation 2. Is the only issue with obmalloc treading? If so, what do we do to resolve this? Neil From guido at digicool.com Mon Feb 26 16:46:59 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 26 Feb 2001 10:46:59 -0500 Subject: [Python-Dev] GC and Vladimir's obmalloc In-Reply-To: Your message of "Mon, 26 Feb 2001 07:42:34 PST." <20010226074234.A31518@glacier.fnational.com> References: <20010226074234.A31518@glacier.fnational.com> Message-ID: <200102261546.KAA19326@cj20424-a.reston1.va.home.com> > Executive Summary: obmalloc will allow more efficient GC and we > should try hard to get it into 2.1. Can you do it before the 2.1b1 release? We're planning that for this Thursday, May 1st. Three days! > Is the only issue with obmalloc treading? If so, what do we do to > resolve this? 1. Yes, I think so. 2. It currently relies on the global interpreter lock. That's why we want to make it an opt-in configuration option (for now). Does that work with your proposed GC integration? --Guido van Rossum (home page: http://www.python.org/~guido/) From nas at arctrix.com Mon Feb 26 17:32:17 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Mon, 26 Feb 2001 08:32:17 -0800 Subject: [Python-Dev] GC and Vladimir's obmalloc In-Reply-To: <200102261546.KAA19326@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Feb 26, 2001 at 10:46:59AM -0500 References: <20010226074234.A31518@glacier.fnational.com> <200102261546.KAA19326@cj20424-a.reston1.va.home.com> Message-ID: <20010226083217.A31643@glacier.fnational.com> On Mon, Feb 26, 2001 at 10:46:59AM -0500, Guido van Rossum wrote: > > Executive Summary: obmalloc will allow more efficient GC and we > > should try hard to get it into 2.1. > > Can you do it before the 2.1b1 release? We're planning that for this > Thursday, May 1st. Three days! What has to be done besides applying the patch and adding a configure option? I can do that tonight if you give the green light. > > Is the only issue with obmalloc treading? If so, what do we do to > > resolve this? > > 1. Yes, I think so. 2. It currently relies on the global interpreter > lock. That's why we want to make it an opt-in configuration option > (for now). Does that work with your proposed GC integration? Opt-in is fine for now. Neil From guido at digicool.com Mon Feb 26 17:45:48 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 26 Feb 2001 11:45:48 -0500 Subject: [Python-Dev] GC and Vladimir's obmalloc In-Reply-To: Your message of "Mon, 26 Feb 2001 08:32:17 PST." 
<20010226083217.A31643@glacier.fnational.com> References: <20010226074234.A31518@glacier.fnational.com> <200102261546.KAA19326@cj20424-a.reston1.va.home.com> <20010226083217.A31643@glacier.fnational.com> Message-ID: <200102261645.LAA19732@cj20424-a.reston1.va.home.com> > On Mon, Feb 26, 2001 at 10:46:59AM -0500, Guido van Rossum wrote: > > > Executive Summary: obmalloc will allow more efficient GC and we > > > should try hard to get it into 2.1. > > > > Can you do it before the 2.1b1 release? We're planning that for this > > Thursday, May 1st. Three days! > > What has to be done besides applying the patch and adding a > configure option? I can do that tonight if you give the green > light. Sure. Green light is on, modulo objections from Barry (who technically has this assigned -- but I believe he'd be happy to let you do the honors). I thought that I read in your mail that you were proposing changes first for better GC integration -- but I must've misread that. > > > Is the only issue with obmalloc treading? If so, what do we do to > > > resolve this? > > > > 1. Yes, I think so. 2. It currently relies on the global interpreter > > lock. That's why we want to make it an opt-in configuration option > > (for now). Does that work with your proposed GC integration? > > Opt-in is fine for now. OK. So what about the optional memory profiler, on Jeremy's plate? http://sourceforge.net/tracker/index.php?func=detail&aid=401229&group_id=5470&atid=305470 I'm sure Jeremy would also love it if someone else took care of this -- he's busy with the future_statement implementation. --Guido van Rossum (home page: http://www.python.org/~guido/) From thomas at xs4all.net Mon Feb 26 17:54:53 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Mon, 26 Feb 2001 17:54:53 +0100 Subject: [Python-Dev] GC and Vladimir's obmalloc In-Reply-To: <200102261546.KAA19326@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Feb 26, 2001 at 10:46:59AM -0500 References: <20010226074234.A31518@glacier.fnational.com> <200102261546.KAA19326@cj20424-a.reston1.va.home.com> Message-ID: <20010226175453.A9678@xs4all.nl> On Mon, Feb 26, 2001 at 10:46:59AM -0500, Guido van Rossum wrote: > > Executive Summary: obmalloc will allow more efficient GC and we > > should try hard to get it into 2.1. > Can you do it before the 2.1b1 release? We're planning that for this > Thursday, May 1st. Three days! The first May 1st that falls on a Thursday is in 2003 :) I believe Moshe and I both volunteer to do the checkin should Neil not get to it for some reason. -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From barry at digicool.com Mon Feb 26 17:58:49 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Mon, 26 Feb 2001 11:58:49 -0500 Subject: [Python-Dev] GC and Vladimir's obmalloc References: <20010226074234.A31518@glacier.fnational.com> <200102261546.KAA19326@cj20424-a.reston1.va.home.com> <20010226083217.A31643@glacier.fnational.com> <200102261645.LAA19732@cj20424-a.reston1.va.home.com> Message-ID: <15002.35657.447162.975798@anthem.wooz.org> >>>>> "GvR" == Guido van Rossum writes: GvR> Sure. Green light is on, modulo objections from Barry (who GvR> technically has this assigned -- but I believe he'd be happy GvR> to let you do the honors). No objections, and I've re-assigned the patch to Neil. At least I /think/ I have (modulo initial confusion caused by SF's new issue tracker UI :). 
green-means-go-ly y'rs, -Barry From mwh21 at cam.ac.uk Mon Feb 26 18:19:28 2001 From: mwh21 at cam.ac.uk (Michael Hudson) Date: 26 Feb 2001 17:19:28 +0000 Subject: [Python-Dev] GC and Vladimir's obmalloc In-Reply-To: Guido van Rossum's message of "Mon, 26 Feb 2001 11:45:48 -0500" References: <20010226074234.A31518@glacier.fnational.com> <200102261546.KAA19326@cj20424-a.reston1.va.home.com> <20010226083217.A31643@glacier.fnational.com> <200102261645.LAA19732@cj20424-a.reston1.va.home.com> Message-ID: Guido van Rossum writes: > So what about the optional memory profiler, on Jeremy's plate? > > http://sourceforge.net/tracker/index.php?func=detail&aid=401229&group_id=5470&atid=305470 > > I'm sure Jeremy would also love it if someone else took care of this > -- he's busy with the future_statement implementation. In a way, I think this is less important. IMO, only people with a fair amount of wizadry are going to want to use this, and telling them to go and get the patch and apply it isn't too much of a stretch (though it would help if it applied cleanly...). OTOH, obmalloc can improve performance (esp. if Neil can do his cool GC optimizations with it), and so it becomes more important to get it into 2.1 (as a prelude to turning it on by default in 2.2, right?). Just my opinion, M. -- This is the fixed point problem again; since all some implementors do is implement the compiler and libraries for compiler writing, the language becomes good at writing compilers and not much else! -- Brian Rogoff, comp.lang.functional From nas at arctrix.com Mon Feb 26 18:37:31 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Mon, 26 Feb 2001 09:37:31 -0800 Subject: [Python-Dev] GC and Vladimir's obmalloc In-Reply-To: <200102261645.LAA19732@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Feb 26, 2001 at 11:45:48AM -0500 References: <20010226074234.A31518@glacier.fnational.com> <200102261546.KAA19326@cj20424-a.reston1.va.home.com> <20010226083217.A31643@glacier.fnational.com> <200102261645.LAA19732@cj20424-a.reston1.va.home.com> Message-ID: <20010226093731.A31918@glacier.fnational.com> On Mon, Feb 26, 2001 at 11:45:48AM -0500, Guido van Rossum wrote: > So what about the optional memory profiler, on Jeremy's plate? That's quite a bit lower priority in my opinion. People who need it could just apply it themselves. Also, I don't remember Vladimir saying he thought it was ready. Neil From nas at arctrix.com Mon Feb 26 18:43:26 2001 From: nas at arctrix.com (Neil Schemenauer) Date: Mon, 26 Feb 2001 09:43:26 -0800 Subject: [Python-Dev] GC and Vladimir's obmalloc In-Reply-To: <15002.35657.447162.975798@anthem.wooz.org>; from barry@digicool.com on Mon, Feb 26, 2001 at 11:58:49AM -0500 References: <20010226074234.A31518@glacier.fnational.com> <200102261546.KAA19326@cj20424-a.reston1.va.home.com> <20010226083217.A31643@glacier.fnational.com> <200102261645.LAA19732@cj20424-a.reston1.va.home.com> <15002.35657.447162.975798@anthem.wooz.org> Message-ID: <20010226094326.B31918@glacier.fnational.com> On Mon, Feb 26, 2001 at 11:58:49AM -0500, Barry A. Warsaw wrote: > No objections, and I've re-assigned the patch to Neil. At least I > /think/ I have (modulo initial confusion caused by SF's new issue > tracker UI :). It worked. The new tracker looks pretty cool. I like that fact that patches show up on the personalized page as well as bugs. Neil From barry at digicool.com Mon Feb 26 18:46:31 2001 From: barry at digicool.com (Barry A. 
Warsaw) Date: Mon, 26 Feb 2001 12:46:31 -0500 Subject: [Python-Dev] GC and Vladimir's obmalloc References: <20010226074234.A31518@glacier.fnational.com> <200102261546.KAA19326@cj20424-a.reston1.va.home.com> <20010226083217.A31643@glacier.fnational.com> <200102261645.LAA19732@cj20424-a.reston1.va.home.com> <15002.35657.447162.975798@anthem.wooz.org> <20010226094326.B31918@glacier.fnational.com> Message-ID: <15002.38519.223964.124773@anthem.wooz.org> >>>>> "NS" == Neil Schemenauer writes: NS> It worked. The new tracker looks pretty cool. I like that NS> fact that patches show up on the personalized page as well as NS> bugs. One problem: they need to re-establish the lexical sort of `assignees' by user id. From barry at digicool.com Mon Feb 26 18:57:09 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Mon, 26 Feb 2001 12:57:09 -0500 Subject: [Python-Dev] RE: Update to PEP 232 References: <14994.53768.767065.272158@anthem.wooz.org> <000901c09bed$f861d750$f05aa8c0@lslp7o.int.lsl.co.uk> Message-ID: <15002.39157.936988.699980@anthem.wooz.org> >>>>> "TJI" == Tony J Ibbs writes: TJI> 1. Clarify the final statement - I seem to have the TJI> impression (sorry, can't find a message to back it up) that TJI> either the BDFL or Tim Peters is very against anything other TJI> than the "simple" #f.a = 1# sort of thing - unless I'm TJI> mischannelling (?) again. From pedroni at inf.ethz.ch Mon Feb 26 19:44:23 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Mon, 26 Feb 2001 19:44:23 +0100 (MET) Subject: Draft PEP (RE: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!)) Message-ID: <200102261844.TAA09406@core.inf.ethz.ch> Hi. I have understood the point about making future feature inheritance automatic ;) So I imagine that the future features should at least end up being visible as a (writeable?) code attribute: co_futures or co_future_features being a list of feature name strings. or I'm wrong? regards, Samuele Pedroni From tim.one at home.com Mon Feb 26 20:02:42 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 26 Feb 2001 14:02:42 -0500 Subject: Draft PEP (RE: Other situations like this (was RE: [Python-Dev] Nested scopes resolution -- you can breathe again!)) In-Reply-To: <200102261844.TAA09406@core.inf.ethz.ch> Message-ID: [Samuele Pedroni] > I have understood the point about making future feature inheritance > automatic ;) > > So I imagine that the future features should at least end up being > visible as a (writeable?) code attribute: > > co_futures or co_future_features > > being a list of feature name strings. > > or I'm wrong? I don't know. Toward what end? I expect that for beta1, none of the automagic inheritance stuff will actually get implemented, and we're off to the Python conference next week, so there's time to flesh out what the next step *should* be. From skip at mojam.com Mon Feb 26 21:30:58 2001 From: skip at mojam.com (Skip Montanaro) Date: Mon, 26 Feb 2001 14:30:58 -0600 (CST) Subject: [Python-Dev] editing FAQ? Message-ID: <15002.48386.689975.913306@beluga.mojam.com> Seems like maybe the FAQ needs some touchup. Is it still under the control of the FAQ wizard (what's the password)? If not, is it in CVS somewhere? Skip From tim.one at home.com Mon Feb 26 21:34:27 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 26 Feb 2001 15:34:27 -0500 Subject: [Python-Dev] editing FAQ? In-Reply-To: <15002.48386.689975.913306@beluga.mojam.com> Message-ID: [Skip Montanaro] > Seems like maybe the FAQ needs some touchup. 
Is it still under > the control of the FAQ wizard (what's the password)? The password is Spam case-sensitive-ly y'rs - tim From Greg.Wilson at baltimore.com Tue Feb 27 00:23:51 2001 From: Greg.Wilson at baltimore.com (Greg Wilson) Date: Mon, 26 Feb 2001 18:23:51 -0500 Subject: [Python-Dev] first correct explanation wins a beer... Message-ID: <930BBCA4CEBBD411BE6500508BB3328F1ABF07@nsamcanms1.ca.baltimore.com> ...or the caffeinated beverage of your choice, collectable at IPC9. I'm running on a straightforward Linux box: $ uname -a Linux akbar.nevex.com 2.2.16 #3 Mon Aug 14 14:43:46 EDT 2000 i686 unknown with Python 2.1, built fresh from today's repo: $ python Python 2.1a2 (#2, Feb 26 2001, 15:27:11) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 I have one tiny script called "tryout.py": $ cat tryout.py print "We made it!" and a small HTML file called "printing.html": $ cat printing.html
<pre prog="tryout.py">
We made it!
</pre>
The idea is that my little SAX handler will look for "pre" elements with "prog" attributes, re-run the appropriate script, and compare the output with what's in the HTML page (it's an example for the class). The problem is that "popen2" doesn't work as expected when called from within a SAX content handler, even though it works just fine when called from a method of another class, or on its own. The whole script is: $ cat repy #!/usr/bin/env python import sys from os import popen2 from xml.sax import parse, ContentHandler class JustAClass: def method(self, progName): shellCmd = "python " + progName print "using just a class, shell command is '" + shellCmd + "'" inp, outp = popen2(shellCmd) inp.close() print "using just a class, result is", outp.readlines() class UsingSax(ContentHandler): def startElement(self, name, attrs): if name == "pre": shellCmd = "python " + attrs["prog"] print "using SAX, shell command is '" + shellCmd + "'" inp, outp = popen2(shellCmd) inp.close() print "using SAX, result is", outp.readlines() if __name__ == "__main__": # Run it directly inp, outp = popen2("python tryout.py") inp.close() print "Running popen2 directly, result is", outp.readlines() # Use a plain old class JustAClass().method("tryout.py") # Using SAX input = open("printing.html", 'r') parse(input, UsingSax()) input.close() The output is: $ python repy Running popen2 directly, result is ['We made it!\n'] using just a class, shell command is 'python tryout.py' using just a class, result is ['We made it!\n'] using SAX, shell command is 'python tryout.py' using SAX, result is [] My system has a stock 1.5.2 in /usr/bin/python, but my path is: $ echo $PATH /home/gvwilson/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/usr/sbin:/ho me/gnats/bin so that I get the 2.1 version: $ which python /home/gvwilson/bin/python My PYTHONPATH is set up properly as well (I think): $ echo $PYTHONPATH /home/gvwilson/lib/python2.1:/home/gvwilson/lib/python2.1/lib-dynload I'm using PyXML-0.6.4, built fresh from the .tar.gz source today. So, like I said --- a beer or coffee to the first person who can explain what's up. I'm attaching the Python scripts, the HTML file, and a verbose strace output from my machine. Thanks, Greg < > < > < > < > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: repy Type: application/octet-stream Size: 1068 bytes Desc: not available URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: strace.txt URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: tryout.py Type: application/octet-stream Size: 20 bytes Desc: not available URL: From paulp at ActiveState.com Tue Feb 27 00:42:38 2001 From: paulp at ActiveState.com (Paul Prescod) Date: Mon, 26 Feb 2001 15:42:38 -0800 Subject: [Python-Dev] first correct explanation wins a beer... References: <930BBCA4CEBBD411BE6500508BB3328F1ABF07@nsamcanms1.ca.baltimore.com> Message-ID: <3A9AE9EE.EBB27F89@ActiveState.com> My guess: Unicode. Try casting to an 8-bit string and see what happens. -- Vote for Your Favorite Python & Perl Programming Accomplishments in the first Active Awards! 
http://www.ActiveState.com/Awards From tim.one at home.com Tue Feb 27 02:18:37 2001 From: tim.one at home.com (Tim Peters) Date: Mon, 26 Feb 2001 20:18:37 -0500 Subject: [Python-Dev] PEP 236: Back to the __future__ Message-ID: The text of this PEP can also be found online, at: http://python.sourceforge.net/peps/pep-0236.html PEP: 236 Title: Back to the __future__ Version: $Revision: 1.2 $ Author: Tim Peters Python-Version: 2.1 Status: Active Type: Standards Track Created: 26-Feb-2001 Post-History: 26-Feb-2001 Motivation From time to time, Python makes an incompatible change to the advertised semantics of core language constructs, or changes their accidental (implementation-dependent) behavior in some way. While this is never done capriciously, and is always done with the aim of improving the language over the long term, over the short term it's contentious and disrupting. The "Guidelines for Language Evolution" PEP [1] suggests ways to ease the pain, and this PEP introduces some machinery in support of that. The "Statically Nested Scopes" PEP [2] is the first application, and will be used as an example here. Intent [Note: This is policy, and so should eventually move into PEP 5[1]] When an incompatible change to core language syntax or semantics is being made: 1. The release C that introduces the change does not change the syntax or semantics by default. 2. A future release R is identified in which the new syntax or semantics will be enforced. 3. The mechanisms described in the "Warning Framework" PEP [3] are used to generate warnings, whenever possible, about constructs or operations whose meaning may[4] change in release R. 4. The new future_statement (see below) can be explicitly included in a module M to request that the code in module M use the new syntax or semantics in the current release C. So old code continues to work by default, for at least one release, although it may start to generate new warning messages. Migration to the new syntax or semantics can proceed during that time, using the future_statement to make modules containing it act as if the new syntax or semantics were already being enforced. Note that there is no need to involve the future_statement machinery in new features unless they can break existing code; fully backward- compatible additions can-- and should --be introduced without a corresponding future_statement. Syntax A future_statement is simply a from/import statement using the reserved module name __future__: future_statement: "from" "__future__" "import" feature ["as" name] ("," feature ["as" name])* feature: identifier name: identifier In addition, all future_statments must appear near the top of the module. The only lines that can appear before a future_statement are: + The module docstring (if any). + Comments. + Blank lines. + Other future_statements. Example: """This is a module docstring.""" # This is a comment, preceded by a blank line and followed by # a future_statement. from __future__ import nested_scopes from math import sin from __future__ import alabaster_weenoblobs # compile-time error! # That was an error because preceded by a non-future_statement. Semantics A future_statement is recognized and treated specially at compile time: changes to the semantics of core constructs are often implemented by generating different code. It may even be the case that a new feature introduces new incompatible syntax (such as a new reserved word), in which case the compiler may need to parse the module differently. 
Such decisions cannot be pushed off until runtime. For any given release, the compiler knows which feature names have been defined, and raises a compile-time error if a future_statement contains a feature not known to it[5]. The direct runtime semantics are the same as for any import statement: there is a standard module __future__.py, described later, and it will be imported in the usual way at the time the future_statement is executed. The *interesting* runtime semantics depend on the specific feature(s) "imported" by the future_statement(s) appearing in the module. Note that there is nothing special about the statement: import __future__ [as name] That is not a future_statement; it's an ordinary import statement, with no special semantics or syntax restrictions. Example Consider this code, in file scope.py: x = 42 def f(): x = 666 def g(): print "x is", x g() f() Under 2.0, it prints: x is 42 Nested scopes[2] are being introduced in 2.1. But under 2.1, it still prints x is 42 and also generates a warning. In 2.2, and also in 2.1 *if* "from __future__ import nested_scopes" is included at the top of scope.py, it prints x is 666 Standard Module __future__.py Lib/__future__.py is a real module, and serves three purposes: 1. To avoid confusing existing tools that analyze import statements and expect to find the modules they're importing. 2. To ensure that future_statements run under releases prior to 2.1 at least yield runtime exceptions (the import of __future__ will fail, because there was no module of that name prior to 2.1). 3. To document when incompatible changes were introduced, and when they will be-- or were --made mandatory. This is a form of executable documentation, and can be inspected programatically via importing __future__ and examining its contents. Each statment in __future__.py is of the form: FeatureName = ReleaseInfo ReleaseInfo is a pair of the form: (OptionalRelease, MandatoryRelease) where, normally, OptionalRelease < MandatoryRelease, and both are 5-tuples of the same form as sys.version_info: (PY_MAJOR_VERSION, # the 2 in 2.1.0a3; an int PY_MINOR_VERSION, # the 1; an int PY_MICRO_VERSION, # the 0; an int PY_RELEASE_LEVEL, # "alpha", "beta", "candidate" or "final"; string PY_RELEASE_SERIAL # the 3; an int ) OptionalRelease records the first release in which from __future__ import FeatureName was accepted. In the case of MandatoryReleases that have not yet occurred, MandatoryRelease predicts the release in which the feature will become part of the language. Else MandatoryRelease records when the feature became part of the language; in releases at or after that, modules no longer need from __future__ import FeatureName to use the feature in question, but may continue to use such imports. MandatoryRelease may also be None, meaning that a planned feature got dropped. No line will ever be deleted from __future__.py. Example line: nested_scopes = (2, 1, 0, "beta", 1), (2, 2, 0, "final", 0) This means that from __future__ import nested_scopes will work in all releases at or after 2.1b1, and that nested_scopes are intended to be enforced starting in release 2.2. Unresolved Problems: Runtime Compilation Several Python features can compile code during a module's runtime: 1. The exec statement. 2. The execfile() function. 3. The compile() function. 4. The eval() function. 5. The input() function. 
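For illustration, each of those in miniature (the file name is made up): exec "spam = 1" # 1. the exec statement execfile("otherfile.py") # 2. execfile() code = compile("spam = 1", "<string>", "exec") # 3. compile() eggs = eval("spam + 1") # 4. eval() answer = input() # 5. input() evaluates what is typed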
Since a module M containing a future_statement naming feature F explicitly requests that the current release act like a future release with respect to F, any code compiled dynamically from text passed to one of these from within M should probably also use the new syntax or semantics associated with F. This isn't always desired, though. For example, doctest.testmod(M) compiles examples taken from strings in M, and those examples should use M's choices, not necessarily doctest module's choices. It's unclear what to do about this. The initial release (2.1b1) is likely to ignore these issues, saying that each dynamic compilation starts over from scratch (i.e., as if no future_statements had been specified). In any case, a future_statement appearing "near the top" (see Syntax above) of text compiled dynamically by an exec, execfile() or compile() applies to the code block generated, but has no further effect on the module that executes such an exec, execfile() or compile(). This can't be used to affect eval() or input(), however, because they only allow expression input, and a future_statement is not an expression. Unresolved Problems: Interactive Shells An interactive shell can be seen as an extreme case of runtime compilation (see above): in effect, each statement typed at an interactive shell prompt runs a new instance of exec, compile() or execfile(). The initial release (2.1b1) is likely to be such that future_statements typed at an interactive shell have no effect beyond their runtime meaning as ordinary import statements. It would make more sense if a future_statement typed at an interactive shell applied to the rest of the shell session's life, as if the future_statement had appeared at the top of a module. Again, it's unclear what to do about this. Questions and Answers Q: What about a "from __past__" version, to get back *old* behavior? A: Outside the scope of this PEP. Seems unlikely to the author, though. Write a PEP if you want to pursue it. Q: What about incompatibilites due to changes in the Python virtual machine? A: Outside the scope of this PEP, although PEP 5[1] suggests a grace period there too, and the future_statement may also have a role to play there. Q: What about incompatibilites due to changes in Python's C API? A: Outside the scope of this PEP. Q: I want to wrap future_statements in try/except blocks, so I can use different code depending on which version of Python I'm running. Why can't I? A: Sorry! try/except is a runtime feature; future_statements are primarily compile-time gimmicks, and your try/except happens long after the compiler is done. That is, by the time you do try/except, the semantics in effect for the module are already a done deal. Since the try/except wouldn't accomplish what it *looks* like it should accomplish, it's simply not allowed. We also want to keep these special statements very easy to find and to recognize. Note that you *can* import __future__ directly, and use the information in it, along with sys.version_info, to figure out where the release you're running under stands in relation to a given feature's status. Q: Going back to the nested_scopes example, what if release 2.2 comes along and I still haven't changed my code? How can I keep the 2.1 behavior then? A: By continuing to use 2.1, and not moving to 2.2 until you do change your code. The purpose of future_statement is to make life easier for people who keep keep current with the latest release in a timely fashion. 
We don't hate you if you don't, but your problems are much harder to solve, and somebody with those problems will need to write a PEP addressing them. future_statement is aimed at a different audience. Copyright This document has been placed in the public domain. References and Footnotes [1] http://python.sourceforge.net/peps/pep-0005.html [2] http://python.sourceforge.net/peps/pep-0227.html [3] http://python.sourceforge.net/peps/pep-0230.html [4] Note that this is "may" and not "will": better safe than sorry. Of course spurious warnings won't be generated when avoidable with reasonable cost. [5] This ensures that a future_statement run under a release prior to the first one in which a given feature is known (but >= 2.1) will raise a compile-time error rather than silently do a wrong thing. If transported to a release prior to 2.1, a runtime error will be raised because of the failure to import __future__ (no such module existed in the standard distribution before the 2.1 release, and the double underscores make it a reserved name). Local Variables: mode: indented-text indent-tabs-mode: nil End: From martin at loewis.home.cs.tu-berlin.de Tue Feb 27 07:52:27 2001 From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis) Date: Tue, 27 Feb 2001 07:52:27 +0100 Subject: [Python-Dev] first correct explanation wins a beer... Message-ID: <200102270652.f1R6qRA00896@mira.informatik.hu-berlin.de> > My guess: Unicode. Try casting to an 8-bit string and see what happens. Paul is right, so I guess you owe him a beer... To see this in more detail, compare popen2.Popen3("/bin/ls").fromchild.readlines() to popen2.Popen3(u"/bin/ls").fromchild.readlines() Specifically, it seems the problem is def _run_child(self, cmd): if type(cmd) == type(''): cmd = ['/bin/sh', '-c', cmd] in popen2. I still think there should be types.isstring function, and then this fragment should read def _run_child(self, cmd): if types.isstring(cmd): cmd = ['/bin/sh', '-c', cmd] Now, if somebody would put "funny characters" into cmd, it would still give an error, which is then almost silently ignored, due to the try: os.execvp(cmd[0], cmd) finally: os._exit(1) fragment. Perhaps it would be better to put if type(cmd) == types.UnicodeType: cmd = cmd.encode("ascii") into Popen3.__init__, so you'd get an error if you pass those funny characters. Regards, Martin From ping at lfw.org Tue Feb 27 08:52:28 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Mon, 26 Feb 2001 23:52:28 -0800 (PST) Subject: [Python-Dev] pydoc for 2.1b1? Message-ID: Hi! It's my birthday today, and i think it would be a really awesome present if pydoc.py were to be accepted into the distribution. :) (Not just because it's my birthday, though. I would hope it is worth accepting based on its own merits.) The most recent version of pydoc is just a single file, for the easiest possible setup -- zero installation effort. You only need the "inspect" module to run it. You can find it under the CVS tree at nondist/sandbox/help/pydoc.py or download it from http://www.lfw.org/python/pydoc.py http://www.lfw.org/python/inspect.py Among other things, it now handles a few corner cases better, the formatting is a bit improved, and you can now tell it to write out the documentation to files on disk if that's your fancy (it can still display the documentation interactively in your shell or your web browser). 
-- ?!ng From ping at lfw.org Tue Feb 27 12:53:08 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Tue, 27 Feb 2001 03:53:08 -0800 (PST) Subject: [Python-Dev] A few small issues Message-ID: Hi. Here are some things i noticed tonight. 1. The error message for UnboundLocalError isn't really accurate. >>> def f(): ... x = 1 ... del x ... print x ... >>> f() Traceback (most recent call last): File " ", line 1, in ? File " ", line 4, in f UnboundLocalError: local variable 'x' referenced before assignment >>> It's not a question of the variable being referenced "before assignment" -- it's just that the variable is undefined. Better would be a straightforward message such as UnboundLocalError: local name 'x' is not defined This message would be consistent with the others: NameError: name 'x' is not defined NameError: global name 'x' is not defined 2. Why does imp.find_module('') succeed? >>> import imp >>> imp.find_module('') (None, '/home/ping/python/', ('', '', 5)) I think it should fail with "empty module name" or something similar. 3. Normally when a script is run, it looks like '' gets prepended to sys.path so that the current directory will be searched. But if the script being run is a symlink, the symlink is resolved first to an actual file, and the directory containing that file is prepended to sys.path. This leads to strange behaviour: localhost[1004]% cat > spam.py bacon = 5 localhost[1005]% cat > /tmp/eggs.py import spam localhost[1006]% ln -s /tmp/eggs.py . localhost[1007]% python eggs.py Traceback (most recent call last): File "eggs.py", line 1, in ? import spam ImportError: No module named spam localhost[1008]% python Python 2.1a2 (#23, Feb 11 2001, 16:26:17) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "copyright", "credits" or "license" for more information. >>> import spam >>> (whereupon the confused programmer says, "Huh? If *i* could import spam, why couldn't eggs?"). Was this a design decision? Should it be changed to always prepend ''? 4. As far as i can tell, the curses.wrapper package is inaccessible. It's obscured by a curses.wrapper() function in the curses package. >>> import curses.wrapper >>> curses.wrapper >>> import sys >>> sys.modules['curses.wrapper'] I don't see any way around this other than renaming curses.wrapper. -- ?!ng "If I have not seen as far as others, it is because giants were standing on my shoulders." -- Hal Abelson From thomas at xs4all.net Tue Feb 27 14:10:20 2001 From: thomas at xs4all.net (Thomas Wouters) Date: Tue, 27 Feb 2001 14:10:20 +0100 Subject: [Python-Dev] pydoc for 2.1b1? In-Reply-To: ; from ping@lfw.org on Mon, Feb 26, 2001 at 11:52:28PM -0800 References: Message-ID: <20010227141020.B9678@xs4all.nl> On Mon, Feb 26, 2001 at 11:52:28PM -0800, Ka-Ping Yee wrote: > It's my birthday today, and i think it would be a really awesome > present if pydoc.py were to be accepted into the distribution. :) It has my vote ;) I think pydoc serves two purposes: it's a useful tool, especially if we can get it accepted by the larger community (get it mentioned on python-list by non-dev'ers, get it mentioned in books, etc.) And it serves as a great example on how to do things like introspection. -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From guido at digicool.com Tue Feb 27 03:08:36 2001 From: guido at digicool.com (Guido van Rossum) Date: Mon, 26 Feb 2001 21:08:36 -0500 Subject: [Python-Dev] pydoc for 2.1b1? In-Reply-To: Your message of "Mon, 26 Feb 2001 23:52:28 PST." 
References: Message-ID: <200102270208.VAA01410@cj20424-a.reston1.va.home.com> > It's my birthday today, and i think it would be a really awesome > present if pydoc.py were to be accepted into the distribution. :) Congratulations, Ping. > (Not just because it's my birthday, though. I would hope it is > worth accepting based on its own merits.) No, it's being accepted because your name is Ping. I just read the first few pages of the script for Monty Python's Meaning of Life, which figures a "machine that goes 'Ping'". That makes your name sufficiently Pythonic. > The most recent version of pydoc is just a single file, for the > easiest possible setup -- zero installation effort. You only need > the "inspect" module to run it. You can find it under the CVS tree > at nondist/sandbox/help/pydoc.py or download it from > > http://www.lfw.org/python/pydoc.py > http://www.lfw.org/python/inspect.py > > Among other things, it now handles a few corner cases better, the > formatting is a bit improved, and you can now tell it to write out > the documentation to files on disk if that's your fancy (it can > still display the documentation interactively in your shell or your > web browser). You can check these into the regular tree. I guess they both go into the Lib directory, right? Make sure pydoc.py is checked in with +x permissions. I'll see if we can import pydoc.help into __builtin__ in interactive mode. Now let's paaaartaaaay! --Guido van Rossum (home page: http://www.python.org/~guido/) From akuchlin at mems-exchange.org Tue Feb 27 16:02:28 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Tue, 27 Feb 2001 10:02:28 -0500 Subject: [Python-Dev] A few small issues In-Reply-To: ; from ping@lfw.org on Tue, Feb 27, 2001 at 03:53:08AM -0800 References: Message-ID: <20010227100228.A17362@ute.cnri.reston.va.us> On Tue, Feb 27, 2001 at 03:53:08AM -0800, Ka-Ping Yee wrote: >4. As far as i can tell, the curses.wrapper package is inaccessible. > It's obscured by a curses.wrapper() function in the curses package. The function in the packages results from 'from curses.wrapper import wrapper', so there's really no need to import curses.wrapper directly. Hmmm... but the module is documented in the library reference. I could move the definition of wrapper() into the __init__.py and change the docs, if that's desired. --amk From skip at mojam.com Tue Feb 27 16:48:14 2001 From: skip at mojam.com (Skip Montanaro) Date: Tue, 27 Feb 2001 09:48:14 -0600 (CST) Subject: [Python-Dev] pydoc for 2.1b1? In-Reply-To: <20010227141020.B9678@xs4all.nl> References: <20010227141020.B9678@xs4all.nl> Message-ID: <15003.52286.800752.317549@beluga.mojam.com> Thomas> [pydoc] has my vote ;) Mine too. S From akuchlin at mems-exchange.org Tue Feb 27 16:59:32 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Tue, 27 Feb 2001 10:59:32 -0500 Subject: [Python-Dev] pydoc for 2.1b1? In-Reply-To: <200102270208.VAA01410@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Feb 26, 2001 at 09:08:36PM -0500 References: <200102270208.VAA01410@cj20424-a.reston1.va.home.com> Message-ID: <20010227105932.C17362@ute.cnri.reston.va.us> On Mon, Feb 26, 2001 at 09:08:36PM -0500, Guido van Rossum wrote: >You can check these into the regular tree. I guess they both go into >the Lib directory, right? Make sure pydoc.py is checked in with +x >permissions. I'll see if we can import pydoc.help into __builtin__ in >interactive mode. What about installing it as a script, into /bin, so it's also available at the command line? 
The setup.py script could do it, or the Makefile could handle it. --amk From skip at mojam.com Tue Feb 27 17:00:12 2001 From: skip at mojam.com (Skip Montanaro) Date: Tue, 27 Feb 2001 10:00:12 -0600 (CST) Subject: [Python-Dev] editing FAQ? In-Reply-To: References: <15002.48386.689975.913306@beluga.mojam.com> Message-ID: <15003.53004.840361.997254@beluga.mojam.com> Tim> [Skip Montanaro] >> Seems like maybe the FAQ needs some touchup. Is it still under the >> control of the FAQ wizard (what's the password)? Tim> The password is Tim> Spam Alas, http://www.python.org/cgi-bin/faqwiz just times out. Am I barking up the wrong virtual tree? Skip From tim.one at home.com Tue Feb 27 17:23:23 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 11:23:23 -0500 Subject: [Python-Dev] editing FAQ? In-Reply-To: <15003.53004.840361.997254@beluga.mojam.com> Message-ID: [Skip Montanaro] > Alas, http://www.python.org/cgi-bin/faqwiz just times out. Am I barking up > the wrong virtual tree? Should work; agree it doesn't; have reported it to webmaster. From tim.one at home.com Tue Feb 27 17:46:21 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 11:46:21 -0500 Subject: [Python-Dev] A few small issues In-Reply-To: Message-ID: [Ka-Ping Yee] > Hi. Here are some things i noticed tonight. Ping (& everyone else), please submit bugs on SourceForge. Python-Dev is a black hole for this kind of thing: if nobody addresses your reports RIGHT NOW (unlikely in a release week), they'll be lost forever. From guido at digicool.com Tue Feb 27 06:04:28 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 00:04:28 -0500 Subject: [Python-Dev] pydoc for 2.1b1? In-Reply-To: Your message of "Tue, 27 Feb 2001 10:59:32 EST." <20010227105932.C17362@ute.cnri.reston.va.us> References: <200102270208.VAA01410@cj20424-a.reston1.va.home.com> <20010227105932.C17362@ute.cnri.reston.va.us> Message-ID: <200102270504.AAA02105@cj20424-a.reston1.va.home.com> > On Mon, Feb 26, 2001 at 09:08:36PM -0500, Guido van Rossum wrote: > >You can check these into the regular tree. I guess they both go into > >the Lib directory, right? Make sure pydoc.py is checked in with +x > >permissions. I'll see if we can import pydoc.help into __builtin__ in > >interactive mode. > > What about installing it as a script, into /bin, so it's also > available at the command line? The setup.py script could do it, or > the Makefile could handle it. Sounds like a good idea. (Maybe idle can also be installed if Tk is found.) Go for it. --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Tue Feb 27 06:05:03 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 00:05:03 -0500 Subject: [Python-Dev] editing FAQ? In-Reply-To: Your message of "Tue, 27 Feb 2001 10:00:12 CST." <15003.53004.840361.997254@beluga.mojam.com> References: <15002.48386.689975.913306@beluga.mojam.com> <15003.53004.840361.997254@beluga.mojam.com> Message-ID: <200102270505.AAA02119@cj20424-a.reston1.va.home.com> > Tim> The password is > > Tim> Spam > > Alas, http://www.python.org/cgi-bin/faqwiz just times out. Am I barking up > the wrong virtual tree? Try again. I've rebooted the server. --Guido van Rossum (home page: http://www.python.org/~guido/) From skip at mojam.com Tue Feb 27 18:10:43 2001 From: skip at mojam.com (Skip Montanaro) Date: Tue, 27 Feb 2001 11:10:43 -0600 (CST) Subject: [Python-Dev] The more I think about __all__ ... Message-ID: <15003.57235.144454.826610@beluga.mojam.com> ... 
the more I think I should just yank out all those definitions. I've already been bitten by an incomplete __all__ list. I think the only people who can realistically create them are the authors of the modules. In addition, maintaining them is going to be as difficult as keeping any other piece of documentation up-to-date. Any other thoughts? BDFL - would you care to pronounce? Skip From skip at mojam.com Tue Feb 27 18:19:23 2001 From: skip at mojam.com (Skip Montanaro) Date: Tue, 27 Feb 2001 11:19:23 -0600 (CST) Subject: [Python-Dev] editing FAQ? In-Reply-To: <200102270505.AAA02119@cj20424-a.reston1.va.home.com> References: <15002.48386.689975.913306@beluga.mojam.com> <15003.53004.840361.997254@beluga.mojam.com> <200102270505.AAA02119@cj20424-a.reston1.va.home.com> Message-ID: <15003.57755.361084.441490@beluga.mojam.com> >> Alas, http://www.python.org/cgi-bin/faqwiz just times out. Am I >> barking up the wrong virtual tree? Guido> Try again. I've rebooted the server. Okay, progress has been made. The above URL yielded a 404 error. Obviously I guessed the wrong URL for the faqwiz interface. I did eventually find it at http://www.python.org/cgi-bin/faqw.py Thanks, Skip From guido at digicool.com Tue Feb 27 06:31:02 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 00:31:02 -0500 Subject: [Python-Dev] The more I think about __all__ ... In-Reply-To: Your message of "Tue, 27 Feb 2001 11:10:43 CST." <15003.57235.144454.826610@beluga.mojam.com> References: <15003.57235.144454.826610@beluga.mojam.com> Message-ID: <200102270531.AAA02301@cj20424-a.reston1.va.home.com> > ... the more I think I should just yank out all those definitions. I've > already been bitten by an incomplete __all__ list. I think the only people > who can realistically create them are the authors of the modules. In > addition, maintaining them is going to be as difficult as keeping any other > piece of documentation up-to-date. > > Any other thoughts? BDFL - would you care to pronounce? I've always been lukewarm about the desire to add __all__ to every module under the sun. But i'm also lukewarm about ripping it all out now that it's done. So, no pronouncement from me unless I get more feedback on how harmful it's been so far. Sorry... --Guido van Rossum (home page: http://www.python.org/~guido/) From jeremy at alum.mit.edu Tue Feb 27 18:26:34 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 27 Feb 2001 12:26:34 -0500 (EST) Subject: [Python-Dev] The more I think about __all__ ... In-Reply-To: <200102270531.AAA02301@cj20424-a.reston1.va.home.com> References: <15003.57235.144454.826610@beluga.mojam.com> <200102270531.AAA02301@cj20424-a.reston1.va.home.com> Message-ID: <15003.58186.586724.972984@w221.z064000254.bwi-md.dsl.cnc.net> It seems to be to be a compatibility issue. If a module has an __all__, then from module import * may behave differently in Python 2.1 than it did in Python 2.0. The only problem of this sort I have encountered is with pickle, but I seldom use import *. The problem ends up being obscure to debug because you get a NameError. Then you hunt around in the middle and see that the name is never bound. Then you see that there is an import * -- and hopefully only one! Then you think, "Didn't Python grow __all__ enforcement in 2.1?" And you start hunting around for that name in the import module and check to see if it's in __all__. 
Jeremy From guido at digicool.com Tue Feb 27 06:48:05 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 00:48:05 -0500 Subject: [Python-Dev] The more I think about __all__ ... In-Reply-To: Your message of "Tue, 27 Feb 2001 12:26:34 EST." <15003.58186.586724.972984@w221.z064000254.bwi-md.dsl.cnc.net> References: <15003.57235.144454.826610@beluga.mojam.com> <200102270531.AAA02301@cj20424-a.reston1.va.home.com> <15003.58186.586724.972984@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <200102270548.AAA02442@cj20424-a.reston1.va.home.com> > It seems to be to be a compatibility issue. If a module has an > __all__, then from module import * may behave differently in Python > 2.1 than it did in Python 2.0. The only problem of this sort I have > encountered is with pickle, but I seldom use import *. This suggests a compatibility test that Skip can easily write. For each module that has an __all__ in 2.1, invoke python 2.0 to see what names are imported by import * for that module in 2.0, and see if there are differences. Then look carefully at the differences and see if they are acceptable. --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Tue Feb 27 19:56:24 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 13:56:24 -0500 Subject: [Python-Dev] The more I think about __all__ ... In-Reply-To: <200102270531.AAA02301@cj20424-a.reston1.va.home.com> Message-ID: [Guido van Rossum] > ... > So, no pronouncement from me unless I get more feedback on how harmful > it's been so far. Sorry... Doesn't matter much to me. There are still spurious regrtest.py failures due to it under Windows when using -r; this has to do with that importing modules that don't exist on Windows leave behind incomplete module objects that fool test___all__.py. E.g., "regrtest test_pty test___all__" on Windows yields a bizarre failure. Tried fixing that last night, but it somehow caused test_sre(!) to fail instead, and I gave up for the night. From tim.one at home.com Tue Feb 27 20:27:12 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 14:27:12 -0500 Subject: [Python-Dev] Case-sensitive import Message-ID: I'm still trying to sort this out. Some concerns and questions: I don't like the new MatchFilename, because it triggers on *all* platforms that #define HAVE_DIRENT_H. Anyone, doesn't that trigger on straight Linux systems too (all I know is that it's part of the Single UNIX Specification)? I don't like it because it implements a woefully inefficient algorithm: it cycles through the entire directory looking for a case-sensitive match. But there can be hundreds of .py files in a directory, and on average it will need to look at half of them, while if this triggers on straight Linux there's no need to look at *any* of them there. I also don't like it because it apparently triggers on Cygwin too but the code that calls it doesn't cater to that Cygwin possibly *should* be defining ALTSEP as well as SEP. Would rather dump MatchFilename and rewrite in terms of the old check_case (which should run much quicker, and already comes in several appropriate platform-aware versions -- and I clearly minimize the chance of breakage if I stick to that time-tested code). Steven, there is a "#ifdef macintosh" version of check_case already. Will that or won't that work correctly on your variant of Mac? If not, would you please supply a version that does (along with the #ifdef'ery needed to recognize your Mac variant)? 
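Roughly, what MatchFilename does amounts to this (the real code is C in the
import machinery; this Python rendering is only a sketch):

    import os

    def match_filename(dirname, name):
        # Scan the whole directory for an exact, case-sensitive match:
        # O(number of entries) per directory per import attempt.
        for entry in os.listdir(dirname):
            if entry == name:
                return 1
        return 0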
Jason, I *assume* that the existing "#if defined(MS_WIN32) || defined(__CYGWIN__)" version of check_case works already for you. Scream if that's wrong. Steven and Jack, does getenv() work on both your flavors of Mac? I want to make PYTHONCASEOK work for you too. From tim.one at home.com Tue Feb 27 20:34:28 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 14:34:28 -0500 Subject: [Python-Dev] editing FAQ? In-Reply-To: Message-ID: http://www.python.org/cgi-bin/faqw.py is working again. Password is Spam. The http://www.python.org/cgi-bin/faqwiz you mentioned now yields a 404 (File Not Found). > [Skip Montanaro] >> Alas, http://www.python.org/cgi-bin/faqwiz just times out. Am I >> barking up the wrong virtual tree? > > Should work; agree it doesn't; have reported it to webmaster. > From akuchlin at mems-exchange.org Tue Feb 27 20:50:44 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Tue, 27 Feb 2001 14:50:44 -0500 Subject: [Python-Dev] pydoc for 2.1b1? In-Reply-To: <200102270504.AAA02105@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Tue, Feb 27, 2001 at 12:04:28AM -0500 References: <200102270208.VAA01410@cj20424-a.reston1.va.home.com> <20010227105932.C17362@ute.cnri.reston.va.us> <200102270504.AAA02105@cj20424-a.reston1.va.home.com> Message-ID: <20010227145044.B29979@ute.cnri.reston.va.us> On Tue, Feb 27, 2001 at 12:04:28AM -0500, Guido van Rossum wrote: >Sounds like a good idea. (Maybe idle can also be installed if Tk is >found.) Go for it. Will do. Is there anything else in Tools/ or Lib/ that could be usefully installed, such as tabnanny or whatever? I can't think of anything that would be really burningly important, so I'll just take care of pydoc. Re: IDLE: Martin already contributed a Tools/idle/setup.py, but I'm not sure how to trigger it recursively. Perhaps a configure option --install-idle, which controls an idleinstall target in the Makefile. Making it only install if Tkinter is compiled seems icky; I don't see how to do that cleanly. Martin, any suggestions? --amk From guido at digicool.com Tue Feb 27 09:08:13 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 03:08:13 -0500 Subject: [Python-Dev] pydoc for 2.1b1? In-Reply-To: Your message of "Tue, 27 Feb 2001 14:50:44 EST." <20010227145044.B29979@ute.cnri.reston.va.us> References: <200102270208.VAA01410@cj20424-a.reston1.va.home.com> <20010227105932.C17362@ute.cnri.reston.va.us> <200102270504.AAA02105@cj20424-a.reston1.va.home.com> <20010227145044.B29979@ute.cnri.reston.va.us> Message-ID: <200102270808.DAA16485@cj20424-a.reston1.va.home.com> > On Tue, Feb 27, 2001 at 12:04:28AM -0500, Guido van Rossum wrote: > >Sounds like a good idea. (Maybe idle can also be installed if Tk is > >found.) Go for it. > > Will do. Is there anything else in Tools/ or Lib/ that could be > usefully installed, such as tabnanny or whatever? I can't think of > anything that would be really burningly important, so I'll just take > care of pydoc. Offhand, not -- idle and pydoc seem to be overwhelmingly more important to me than anything else... > Re: IDLE: Martin already contributed a Tools/idle/setup.py, but I'm > not sure how to trigger it recursively. Perhaps a configure option > --install-idle, which controls an idleinstall target in the Makefile. > Making it only install if Tkinter is compiled seems icky; I don't see > how to do that cleanly. Martin, any suggestions? I have to admit that I don't know what IDLE's setup.py does... 
:-( --Guido van Rossum (home page: http://www.python.org/~guido/) From akuchlin at mems-exchange.org Tue Feb 27 21:55:45 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Tue, 27 Feb 2001 15:55:45 -0500 Subject: [Python-Dev] Patch uploads broken Message-ID: Uploading of patches seems to be broken on SourceForge at the moment; even if you fill in the file upload form, its contents seem to just be ignored. Reported to SF as support req #404688: http://sourceforge.net/tracker/?func=detail&aid=404688&group_id=1&atid=200001 --amk From tim.one at home.com Tue Feb 27 22:15:53 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 16:15:53 -0500 Subject: [Python-Dev] New test_inspect fails under -O Message-ID: I assume this is a x-platform failure. Don't have time to look into it myself right now. C:\Code\python\dist\src\PCbuild>python -O ../lib/test/test_inspect.py Traceback (most recent call last): File "../lib/test/test_inspect.py", line 172, in ? 'trace() row 1') File "../lib/test/test_inspect.py", line 70, in test raise TestFailed, message % args test_support.TestFailed: trace() row 1 C:\Code\python\dist\src\PCbuild> From jeremy at alum.mit.edu Tue Feb 27 22:38:27 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 27 Feb 2001 16:38:27 -0500 (EST) Subject: [Python-Dev] one more restriction for from __future__ import ... Message-ID: <15004.7763.994510.90567@w221.z064000254.bwi-md.dsl.cnc.net> > In addition, all future_statments must appear near the top of the > module. The only lines that can appear before a future_statement are: > > + The module docstring (if any). > + Comments. > + Blank lines. > + Other future_statements. I would like to add another restriction: A future_statement must appear on a line by itself. It is not legal to combine a future_statement without any other statement using a semicolon. It would be a bear to implement error handling for cases like this: from __future__ import a; import b; from __future__ import c Jeremy From pedroni at inf.ethz.ch Tue Feb 27 22:54:43 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Tue, 27 Feb 2001 22:54:43 +0100 (MET) Subject: [Python-Dev] one more restriction for from __future__ import ... Message-ID: <200102272154.WAA25543@core.inf.ethz.ch> Hi. > > In addition, all future_statments must appear near the top of the > > module. The only lines that can appear before a future_statement are: > > > > + The module docstring (if any). > > + Comments. > > + Blank lines. > > + Other future_statements. > > I would like to add another restriction: > > A future_statement must appear on a line by itself. It is not > legal to combine a future_statement without any other statement > using a semicolon. > > It would be a bear to implement error handling for cases like this: > > from __future__ import a; import b; from __future__ import c Will the error be unclear for the user or there's another problem? In jython I get from parser an abstract syntax tree, so it is difficult to distringuish the ; from true newlines ;) regards, Samuele From guido at digicool.com Tue Feb 27 11:06:18 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 05:06:18 -0500 Subject: [Python-Dev] one more restriction for from __future__ import ... In-Reply-To: Your message of "Tue, 27 Feb 2001 16:38:27 EST." 
<15004.7763.994510.90567@w221.z064000254.bwi-md.dsl.cnc.net> References: <15004.7763.994510.90567@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <200102271006.FAA18760@cj20424-a.reston1.va.home.com> > I would like to add another restriction: > > A future_statement must appear on a line by itself. It is not > legal to combine a future_statement without any other statement > using a semicolon. > > It would be a bear to implement error handling for cases like this: > > from __future__ import a; import b; from __future__ import c Really?!? Why? Isn't it straightforward to check that everything you encounter in a left-to-right leaf scan of the parse tree is either a future statement or a docstring until you encounter a non-future? --Guido van Rossum (home page: http://www.python.org/~guido/) From akuchlin at mems-exchange.org Tue Feb 27 23:34:06 2001 From: akuchlin at mems-exchange.org (Andrew Kuchling) Date: Tue, 27 Feb 2001 17:34:06 -0500 Subject: [Python-Dev] Re: Patch uploads broken Message-ID: The SourceForge admins couldn't replicate the patch upload problem, and managed to attach a file to the Python bug report in question, yet when I try it, it still fails for me. So, a question for this list: has uploading patches or other files been working for you recently, particularly today? Maybe with more data, we can see a pattern (browser version, SSL/non-SSL, cluefulness of user, ...). If you want to try it, feel free to try attaching a file to bug #404680: https://sourceforge.net/tracker/index.php?func=detail&aid=404680&group_id=5470&atid=305470 ) The SF admin request for this problem is at http://sourceforge.net/tracker/?func=detail&atid=100001&aid=404688&group_id=1, but it's better if I collect the results and summarize them in a single comment. --amk From michel at digicool.com Tue Feb 27 23:58:56 2001 From: michel at digicool.com (Michel Pelletier) Date: Tue, 27 Feb 2001 14:58:56 -0800 (PST) Subject: [Python-Dev] Re: Patch uploads broken In-Reply-To: Message-ID: Andrew, FYI, we have seen the same problem on the SF zope-book patch tracker. I have a user who can reproduce it, like you. Would you like me to get you more info? -Michel On Tue, 27 Feb 2001, Andrew Kuchling wrote: > The SourceForge admins couldn't replicate the patch upload problem, > and managed to attach a file to the Python bug report in question, yet > when I try it, it still fails for me. So, a question for this list: > has uploading patches or other files been working for you recently, > particularly today? Maybe with more data, we can see a pattern > (browser version, SSL/non-SSL, cluefulness of user, ...). > > If you want to try it, feel free to try attaching a file to bug #404680: > https://sourceforge.net/tracker/index.php?func=detail&aid=404680&group_id=5470&atid=305470 > ) > > The SF admin request for this problem is at > http://sourceforge.net/tracker/?func=detail&atid=100001&aid=404688&group_id=1, > but it's better if I collect the results and summarize them in a > single comment. > > --amk > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > From tim.one at home.com Wed Feb 28 00:06:59 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 18:06:59 -0500 Subject: [Python-Dev] More std test breakage Message-ID: test_inspect.py still failing under -O; probably all platforms. 
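Stepping back to the __future__ placement question above: the rule being debated can be restated compactly as one ordered walk over the module's statements. The sketch below uses the present-day ast module, so it is only an illustration of the rule, not the 2.1 compiler pass under discussion; 'annotations' and 'division' are just convenient real feature names.

    import ast

    def check_future_placement(source):
        body = ast.parse(source).body
        start = 0
        # A module docstring, if any, may precede the future statements.
        if (body and isinstance(body[0], ast.Expr)
                and isinstance(body[0].value, ast.Constant)
                and isinstance(body[0].value.value, str)):
            start = 1
        seen_other = False
        for node in body[start:]:
            is_future = (isinstance(node, ast.ImportFrom)
                         and node.module == "__future__")
            if is_future and seen_other:
                raise SyntaxError("misplaced future statement on line %d"
                                  % node.lineno)
            if not is_future:
                seen_other = True

    check_future_placement(
        "'''doc'''\nfrom __future__ import annotations\nimport os\n")   # fine
    check_future_placement(
        "from __future__ import annotations; import os; "
        "from __future__ import division\n")                            # raises

Because the parser flattens a semicolon-separated line into separate sibling statements, the troublesome one-liner falls out of the same ordered walk; that is roughly Greg Ewing's suggestion later in the thread of doing the legality check in a single pass.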
New failure in test___all__.py, *possibly* specific to Windows, but I don't see any "termios.py" anywhere so hard to believe it could be working anywhere else either: C:\Code\python\dist\src\PCbuild>python ../lib/test/test___all__.py Traceback (most recent call last): File "../lib/test/test___all__.py", line 78, in ? check_all("getpass") File "../lib/test/test___all__.py", line 10, in check_all exec "import %s" % modname in names File " ", line 1, in ? File "c:\code\python\dist\src\lib\getpass.py", line 106, in ? import termios NameError: Case mismatch for module name termios (filename c:\code\python\dist\src\lib\TERMIOS.py) C:\Code\python\dist\src\PCbuild> From tommy at ilm.com Wed Feb 28 00:22:16 2001 From: tommy at ilm.com (Flying Cougar Burnette) Date: Tue, 27 Feb 2001 15:22:16 -0800 (PST) Subject: [Python-Dev] to whoever made the termios changes... Message-ID: <15004.13862.351574.668648@mace.lucasdigital.com> I've already deleted the check-in mail and forgot who it was! Hopefully you're listening... (Fred, maybe?) I just did a cvs update and am no getting this when compiling on irix65: cc -O -OPT:Olimit=0 -I. -I/usr/u0/tommy/pycvs/python/dist/src/./Include -IInclude/ -I/usr/local/include -c /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c -o build/temp.irix-6.5-2.1/termios.o cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 320 The identifier "B230400" is undefined. {"B230400", B230400}, ^ cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 321 The identifier "CBAUDEX" is undefined. {"CBAUDEX", CBAUDEX}, ^ cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 399 The identifier "CRTSCTS" is undefined. {"CRTSCTS", CRTSCTS}, ^ cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 432 The identifier "VSWTC" is undefined. {"VSWTC", VSWTC}, ^ 4 errors detected in the compilation of "/usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c". time for an #ifdef? From jeremy at alum.mit.edu Wed Feb 28 00:27:30 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 27 Feb 2001 18:27:30 -0500 (EST) Subject: [Python-Dev] one more restriction for from __future__ import ... In-Reply-To: <200102271006.FAA18760@cj20424-a.reston1.va.home.com> References: <15004.7763.994510.90567@w221.z064000254.bwi-md.dsl.cnc.net> <200102271006.FAA18760@cj20424-a.reston1.va.home.com> Message-ID: <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "GvR" == Guido van Rossum writes: >> I would like to add another restriction: >> >> A future_statement must appear on a line by itself. It is not >> legal to combine a future_statement without any other statement >> using a semicolon. >> >> It would be a bear to implement error handling for cases like >> this: >> >> from __future__ import a; import b; from __future__ import c GvR> Really?!? Why? Isn't it straightforward to check that GvR> everything you encounter in a left-to-right leaf scan of the GvR> parse tree is either a future statement or a docstring until GvR> you encounter a non-future? It's not hard to find legal future statements. It's hard to find illegal ones. The pass to find future statements exits as soon as it finds something that isn't a doc string or a future. The symbol table pass detects illegal future statements by comparing the current line number against the line number of the last legal futre statement. 
If a mixture of legal and illegal future statements occurs on the same line, that test fails. If I want to be more precise, I can think of a couple of ways to figure out if a particular future statement occurs after the first non-import statement. Neither is particularly pretty because the parse tree is so deep by the time you get to the import statement. One possibility is to record the index of each small_stmt that occurs as a child of a simple_stmt in the symbol table. The future statement pass can record the offset of the first non-legal small_stmt when it occurs as part of an extend simple_stmt. The symbol table would also need to record the current index of each small_stmt. To implement this, I've got to touch a lot of code. The other possibility is to record the address for the first statement following the last legal future statement. The symbol table pass could test each node it visits and set a flag when this node is visited a second time. Any future statement found when the flag is set is an error. I'm concerned that it will be difficult to guarantee that this node is always checked, because the loop that walks the tree frequently dispatches to helper functions. I think each helper function would need to test. Do you have any other ideas? I haven't though about this for more than 20 minutes and was hoping to avoid more time invested on the matter. If it's a problem for Jython, though, we'll need to figure something out. Perhaps the effect of multiple future statements on a single line could be undefined, which would allow Python to raise an error and Jython to ignore the error. Not ideal, but expedient. Jeremy From ping at lfw.org Wed Feb 28 00:34:17 2001 From: ping at lfw.org (Ka-Ping Yee) Date: Tue, 27 Feb 2001 15:34:17 -0800 (PST) Subject: [Python-Dev] pydoc for 2.1b1? In-Reply-To: <200102270208.VAA01410@cj20424-a.reston1.va.home.com> Message-ID: On Mon, 26 Feb 2001, Guido van Rossum wrote: > > No, it's being accepted because your name is Ping. Hooray! Thank you, Guido. :) > Now let's paaaartaaaay! You said it, brother. Welcome to the Year of the Snake. -- ?!ng From skip at mojam.com Wed Feb 28 00:39:02 2001 From: skip at mojam.com (Skip Montanaro) Date: Tue, 27 Feb 2001 17:39:02 -0600 (CST) Subject: [Python-Dev] More std test breakage In-Reply-To: References: Message-ID: <15004.14998.720791.657513@beluga.mojam.com> Tim> test_inspect.py still failing under -O; probably all platforms. Tim> New failure in test___all__.py, *possibly* specific to Windows, but Tim> I don't see any "termios.py" anywhere so hard to believe it could Tim> be working anywhere else either: ... NameError: Case mismatch for module name termios (filename c:\code\python\dist\src\lib\TERMIOS.py) Try cvs update. Lib/getpass.py shouldn't be trying to import TERMIOS anymore. The case mismatch you're seeing is because it can find the now defunct TERMIOS.py module but you obviously don't have the termios module. Skip From skip at mojam.com Wed Feb 28 00:48:04 2001 From: skip at mojam.com (Skip Montanaro) Date: Tue, 27 Feb 2001 17:48:04 -0600 (CST) Subject: [Python-Dev] one more restriction for from __future__ import ... 
In-Reply-To: <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> References: <15004.7763.994510.90567@w221.z064000254.bwi-md.dsl.cnc.net> <200102271006.FAA18760@cj20424-a.reston1.va.home.com> <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <15004.15540.643665.504819@beluga.mojam.com> Jeremy> The symbol table pass detects illegal future statements by Jeremy> comparing the current line number against the line number of the Jeremy> last legal futre statement. Why not just add a flag (default false at the start of the compilation) to the compiling struct that tells you if you've seen a future-killer statement already? Then if you see a future statement the compiler can whine. Skip From skip at mojam.com Wed Feb 28 00:56:47 2001 From: skip at mojam.com (Skip Montanaro) Date: Tue, 27 Feb 2001 17:56:47 -0600 (CST) Subject: [Python-Dev] test_symtable failing on Linux Message-ID: <15004.16063.325105.836576@beluga.mojam.com> test_symtable is failing for me: % ./python ../Lib/test/test_symtable.py Traceback (most recent call last): File "../Lib/test/test_symtable.py", line 7, in ? verify(symbols[0].name == "global") TypeError: unsubscriptable object Just cvs up'd about ten minutes ago. Skip From jeremy at alum.mit.edu Wed Feb 28 00:50:30 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 27 Feb 2001 18:50:30 -0500 (EST) Subject: [Python-Dev] one more restriction for from __future__ import ... In-Reply-To: <15004.15540.643665.504819@beluga.mojam.com> References: <15004.7763.994510.90567@w221.z064000254.bwi-md.dsl.cnc.net> <200102271006.FAA18760@cj20424-a.reston1.va.home.com> <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> <15004.15540.643665.504819@beluga.mojam.com> Message-ID: <15004.15686.104843.418585@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "SM" == Skip Montanaro writes: Jeremy> The symbol table pass detects illegal future statements by Jeremy> comparing the current line number against the line number of Jeremy> the last legal futre statement. SM> Why not just add a flag (default false at the start of the SM> compilation) to the compiling struct that tells you if you've SM> seen a future-killer statement already? Then if you see a SM> future statement the compiler can whine. Almost everything is a future-killer statement, only doc strings and other future statements are allowed. I would have to add a st->st_future_killed = 1 for almost every node type. There are also a number of nodes (about ten) that can contain future statements or doc strings or future killers. As a result, I'd have to add special cases for them, too. Jeremy From jeremy at alum.mit.edu Wed Feb 28 00:51:37 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 27 Feb 2001 18:51:37 -0500 (EST) Subject: [Python-Dev] test_symtable failing on Linux In-Reply-To: <15004.16063.325105.836576@beluga.mojam.com> References: <15004.16063.325105.836576@beluga.mojam.com> Message-ID: <15004.15753.795849.695997@w221.z064000254.bwi-md.dsl.cnc.net> This is a problem I don't know how to resolve; perhaps Andrew or Neil can. _symtablemodule.so depends on symtable.h, but setup.py doesn't know that. If you rebuild the .so, it should work. third-person-to-hit-this-problem-ly y'rs, Jeremy From greg at cosc.canterbury.ac.nz Wed Feb 28 01:01:53 2001 From: greg at cosc.canterbury.ac.nz (Greg Ewing) Date: Wed, 28 Feb 2001 13:01:53 +1300 (NZDT) Subject: [Python-Dev] one more restriction for from __future__ import ... 
In-Reply-To: <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <200102280001.NAA02075@s454.cosc.canterbury.ac.nz> > The pass to find future statements exits as soon as it > finds something that isn't a doc string or a future. Well, don't do that, then. Have the find_future_statements pass keep going and look for *illegal* future statements as well. Then, subsequent passes can just ignore any import that looks like a future statement, because it will already have been either processed or reported as an error. Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg at cosc.canterbury.ac.nz +--------------------------------------+ From sdm7g at virginia.edu Wed Feb 28 01:03:56 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Tue, 27 Feb 2001 19:03:56 -0500 (EST) Subject: [Python-Dev] Case-sensitive import In-Reply-To: Message-ID: On Tue, 27 Feb 2001, Tim Peters wrote: > I don't like the new MatchFilename, because it triggers on *all* platforms > that #define HAVE_DIRENT_H. I mentioned this when I originally submitted the patch. The intent was that it be *able* to compile on any unix-like platform -- in particular, I was thinking LinuxPPC was the other case I could think of where someone might want to use a HFS+ filesystem - but that Darwin/MacOSX was likely to be the only system in which that was the default. > Anyone, doesn't that trigger on straight Linux systems too (all I know is > that it's part of the Single UNIX Specification)? Yes. Barry added the _DIRENT_HAVE_D_NAMELINE to handle a difference in the linux dirent structs. ( I'm not sure if he caught my initial statement of intent either, but then the discussion veered into whether the patch should have been accepted at all, and then into the discussion of a general solution... ) I'm not happy with the ineffeciency either, but, as I noted, I didn't expect that it would be enabled by default elsewhere when I submitted it. ( And my goal for OSX was just to have a version that builds and doesn't crash much, so searching for a more effecient solution was going to be the next project. ) > Would rather dump MatchFilename and rewrite in terms of the old check_case > (which should run much quicker, and already comes in several appropriate > platform-aware versions -- and I clearly minimize the chance of breakage if I > stick to that time-tested code). The reason I started from scratch, you might recall, was that before I understood that the Windows semantics was intended to be different, I tried adding a Mac version of check_case, and it didn't do what I wanted. But that wasn't a problem with any of the existing check_case functions, but was due to how check_case was used. > Steven, there is a "#ifdef macintosh" version of check_case already. Will > that or won't that work correctly on your variant of Mac? If not, would you > please supply a version that does (along with the #ifdef'ery needed to > recognize your Mac variant)? One problem is that I'm aiming for a version that would work on both the open source Darwin distribution ( which is mach + BSD + some Apple extensions: Objective-C, CoreFoundation, Frameworks, ... but not most of the macosx Carbon and Cocoa libraries. ) and the full MacOSX. Thus the reason for a unix only implementation -- the info may be more easily available via MacOS FSSpec's but that's not available on vanilla Darwin. 
( And I can't, for the life of me, thing of an effecient unix implementation -- UNIX file system API's + HFS+ filesystem semantics may be an unfortunate mixture! ) In other words: I can rename the current version to check_case and fix the args to match. (Although, I recall that the args to check_case were rather more awkward to handle, but I'll have to look again. ) It also probably shouldn't be "#ifdef macintosh" either, but that's a thread in itself! > Steven and Jack, does getenv() work on both your flavors of Mac? I want to > make PYTHONCASEOK work for you too. getenv() works on OSX (it's the BSD unix implementation). ( I *think* that Jack has the MacPython get the variables from Pythoprefs file settings. ) -- Steve Majewski From guido at digicool.com Tue Feb 27 13:12:18 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 07:12:18 -0500 Subject: [Python-Dev] Re: Patch uploads broken In-Reply-To: Your message of "Tue, 27 Feb 2001 17:34:06 EST." References: Message-ID: <200102271212.HAA19298@cj20424-a.reston1.va.home.com> > If you want to try it, feel free to try attaching a file to bug #404680: > https://sourceforge.net/tracker/index.php?func=detail&aid=404680&group_id=5470&atid=305470 > ) > > The SF admin request for this problem is at > http://sourceforge.net/tracker/?func=detail&atid=100001&aid=404688&group_id=1, > but it's better if I collect the results and summarize them in a > single comment. My conclusion: the file upload is refused iff the comment is empty -- in other words the complaint about an empty comment is coded wrongly and should only occur when the comment is empty *and* no file is uploaded. Or maybe they want you to add a comment with your file -- that's fine too, but the error isn't very clear. http or https made no difference. I used NS 4.72 on Linux; Tim used IE and had the same results. --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Wed Feb 28 01:06:55 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 19:06:55 -0500 Subject: [Python-Dev] More std test breakage In-Reply-To: <15004.14998.720791.657513@beluga.mojam.com> Message-ID: > Try cvs update. Already had. > Lib/getpass.py shouldn't be trying to import TERMIOS anymore. It isn't. It's trying to import (lowercase) termios. > The case mismatch you're seeing is because it can find the now defunct > TERMIOS.py module but you obviously don't have the termios module. Indeed I do not. Ah. But it *used* to import (uppercase) TERMIOS. That makes this a Windows thing: when it tries to import termios, it still *finds* TERMIOS.py, and on Windows that raises a NameError (instead of the ImportError you'd hope to get, if you *had* to get any error at all out of mismatching case). So this should go away, and get replaced by an ImportError, when I check in the "case-sensitive import" patch for Windows. Thanks for the nudge! From guido at digicool.com Tue Feb 27 13:21:11 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 07:21:11 -0500 Subject: [Python-Dev] More std test breakage In-Reply-To: Your message of "Tue, 27 Feb 2001 18:06:59 EST." 
References: Message-ID: <200102271221.HAA19394@cj20424-a.reston1.va.home.com> > New failure in test___all__.py, *possibly* specific to Windows, but I don't > see any "termios.py" anywhere so hard to believe it could be working anywhere > else either: > > C:\Code\python\dist\src\PCbuild>python ../lib/test/test___all__.py > Traceback (most recent call last): > File "../lib/test/test___all__.py", line 78, in ? > check_all("getpass") > File "../lib/test/test___all__.py", line 10, in check_all > exec "import %s" % modname in names > File " ", line 1, in ? > File "c:\code\python\dist\src\lib\getpass.py", line 106, in ? > import termios > NameError: Case mismatch for module name termios > (filename c:\code\python\dist\src\lib\TERMIOS.py) > > C:\Code\python\dist\src\PCbuild> Easy. There used to be a built-in termios on Unix only, and 12 different platform-specific copies of TERMIOS.py, on Unix only. We're phasing TERMIOS.py out, mocing all the symbols into termios, and as part of that we chose to remove all the platform-dependent TERMIOS.py files with a single one, in Lib, that imports the symbols from termios, for b/w compatibility. But the code that tries to see if termios exists only catches ImportError, not NameError. You can add NameError to the except clause in getpass.py, or you can proceed with your fix to the case-sensitive imports. :-) --Guido van Rossum (home page: http://www.python.org/~guido/) From jeremy at alum.mit.edu Wed Feb 28 01:13:42 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 27 Feb 2001 19:13:42 -0500 (EST) Subject: [Python-Dev] one more restriction for from __future__ import ... In-Reply-To: <200102280001.NAA02075@s454.cosc.canterbury.ac.nz> References: <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> <200102280001.NAA02075@s454.cosc.canterbury.ac.nz> Message-ID: <15004.17078.793539.226783@w221.z064000254.bwi-md.dsl.cnc.net> >>>>> "GE" == Greg Ewing writes: >> The pass to find future statements exits as soon as it finds >> something that isn't a doc string or a future. GE> Well, don't do that, then. Have the find_future_statements pass GE> keep going and look for *illegal* future statements as well. GE> Then, subsequent passes can just ignore any import that looks GE> like a future statement, because it will already have been GE> either processed or reported as an error. I like this idea best so far. Jeremy From guido at digicool.com Wed Feb 28 01:24:47 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 19:24:47 -0500 Subject: [Python-Dev] to whoever made the termios changes... In-Reply-To: Your message of "Tue, 27 Feb 2001 15:22:16 PST." <15004.13862.351574.668648@mace.lucasdigital.com> References: <15004.13862.351574.668648@mace.lucasdigital.com> Message-ID: <200102280024.TAA19492@cj20424-a.reston1.va.home.com> > I've already deleted the check-in mail and forgot who it was! > Hopefully you're listening... (Fred, maybe?) Yes, Fred. > I just did a cvs update and am no getting this when compiling on > irix65: > > cc -O -OPT:Olimit=0 -I. -I/usr/u0/tommy/pycvs/python/dist/src/./Include -IInclude/ -I/usr/local/include -c /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c -o build/temp.irix-6.5-2.1/termios.o > cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 320 > The identifier "B230400" is undefined. > > {"B230400", B230400}, > ^ > > cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 321 > The identifier "CBAUDEX" is undefined. 
> > {"CBAUDEX", CBAUDEX}, > ^ > > cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 399 > The identifier "CRTSCTS" is undefined. > > {"CRTSCTS", CRTSCTS}, > ^ > > cc-1020 cc: ERROR File = /usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c, Line = 432 > The identifier "VSWTC" is undefined. > > {"VSWTC", VSWTC}, > ^ > > 4 errors detected in the compilation of "/usr/u0/tommy/pycvs/python/dist/src/Modules/termios.c". > > > > time for an #ifdef? Definitely. At least these 4; maybe for every stupid symbol we're adding... --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at digicool.com Wed Feb 28 01:29:44 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 19:29:44 -0500 Subject: [Python-Dev] one more restriction for from __future__ import ... In-Reply-To: Your message of "Tue, 27 Feb 2001 18:27:30 EST." <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> References: <15004.7763.994510.90567@w221.z064000254.bwi-md.dsl.cnc.net> <200102271006.FAA18760@cj20424-a.reston1.va.home.com> <15004.14306.265639.606235@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <200102280029.TAA19538@cj20424-a.reston1.va.home.com> > >> It would be a bear to implement error handling for cases like > >> this: > >> > >> from __future__ import a; import b; from __future__ import c > > GvR> Really?!? Why? Isn't it straightforward to check that > GvR> everything you encounter in a left-to-right leaf scan of the > GvR> parse tree is either a future statement or a docstring until > GvR> you encounter a non-future? > > It's not hard to find legal future statements. It's hard to find > illegal ones. The pass to find future statements exits as soon as it > finds something that isn't a doc string or a future. The symbol table > pass detects illegal future statements by comparing the current line > number against the line number of the last legal futre statement. Aha. That's what I missed -- comparison by line number. One thing you could do would simply be check the entire current simple_statement, which would catch the above example; the possibilities are limited at that level (no blocks can start on the same line after an import). --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Wed Feb 28 01:34:32 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 19:34:32 -0500 Subject: [Python-Dev] Case-sensitive import In-Reply-To: Message-ID: [Steven D. Majewski] > ... > The intent was that it be *able* to compile on any unix-like platform -- > in particular, I was thinking LinuxPPC was the other case I could > think of where someone might want to use a HFS+ filesystem - but > that Darwin/MacOSX was likely to be the only system in which that was > the default. I don't care about LinuxPPC right now. When someone steps up to champion that platform, they can deal with it then. What I am interested in is supporting the platforms we *do* have warm bodies looking at, and not regressing on any of them. I'm surprised nobody on Linux already screamed. >> Anyone, doesn't that trigger on straight Linux systems too (all I know is >> that it's part of the Single UNIX Specification)? > Yes. Barry added the _DIRENT_HAVE_D_NAMELINE to handle a difference in > the linux dirent structs. ( I'm not sure if he caught my initial > statement of intent either, but then the discussion veered into whether > the patch should have been accepted at all, and then into the discussion > of a general solution... 
) > > I'm not happy with the ineffeciency either, but, as I noted, I didn't > expect that it would be enabled by default elsewhere when I submitted > it. I expect it's enabled everywhere the #ifdef's in the patch enabled it . But I don't care about the past either, I want to straighten it out *now*. > ( And my goal for OSX was just to have a version that builds and > doesn't crash much, so searching for a more effecient solution was > going to be the next project. ) Then this is the right time. Play along by pretending that OSX is the special case that it is <0.9 wink>. > ... > The reason I started from scratch, you might recall, was that before I > understood that the Windows semantics was intended to be different, I > tried adding a Mac version of check_case, and it didn't do what I wanted. > But that wasn't a problem with any of the existing check_case functions, > but was due to how check_case was used. check_case will be used differently now. > ... > One problem is that I'm aiming for a version that would work on both > the open source Darwin distribution ( which is mach + BSD + some Apple > extensions: Objective-C, CoreFoundation, Frameworks, ... but not most > of the macosx Carbon and Cocoa libraries. ) and the full MacOSX. > Thus the reason for a unix only implementation -- the info may be > more easily available via MacOS FSSpec's but that's not available > on vanilla Darwin. ( And I can't, for the life of me, thing of an > effecient unix implementation -- UNIX file system API's + HFS+ filesystem > semantics may be an unfortunate mixture! ) Please just solve the problem for the platforms you're actually running on -- case-insensitive filesystems are not "Unix only" in any meaningful sense of that phrase, and each not-really-Unix platform is likely to have its own stupid gimmicks for worming around this problem anyway. For example, Cygwin defers to the Windows API. Great! That solves the problem there. Generalization is premature. > In other words: I can rename the current version to check_case and > fix the args to match. (Although, I recall that the args to check_case > were rather more awkward to handle, but I'll have to look again. ) Good! I'm not going to wait for that, though. I desperately need a nap, but when I get up I'll check in changes that should be sufficient for the Windows and Cygwin parts of this, without regressing on other platforms. We'll then have to figure out whatever #ifdef'ery is needed for your platform(s). > getenv() works on OSX (it's the BSD unix implementation). So it's *kind* of like Unix after all . > ( I *think* that Jack has the MacPython get the variables from Pythoprefs > file settings. ) Haven't heard from him, but getenv() is used freely in the Python codebase elsewhere, so I figure he's got *some* way to fake it. So I'm not worried about that anymore (until Jack screams about it). From guido at digicool.com Wed Feb 28 01:35:07 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 19:35:07 -0500 Subject: [Python-Dev] test_symtable failing on Linux In-Reply-To: Your message of "Tue, 27 Feb 2001 18:51:37 EST." <15004.15753.795849.695997@w221.z064000254.bwi-md.dsl.cnc.net> References: <15004.16063.325105.836576@beluga.mojam.com> <15004.15753.795849.695997@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <200102280035.TAA19590@cj20424-a.reston1.va.home.com> > This is a problem I don't know how to resolve; perhaps Andrew or Neil > can. _symtablemodule.so depends on symtable.h, but setup.py doesn't > know that. 
If you rebuild the .so, it should work. Mayby this module shouldn't be built by setup.py; it could be added to Modules/Setup.dist (all the mechanism there still works, it just isn't used for most modules; but some are still there: posix, _sre). Then you can add a regular dependency for it to the regular Makefile. This is a weakness in general of setup.py, but rarely causes a problem because the standard Python headers are pretty stable. --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Wed Feb 28 01:38:15 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 19:38:15 -0500 Subject: [Python-Dev] Re: Patch uploads broken In-Reply-To: <200102271212.HAA19298@cj20424-a.reston1.va.home.com> Message-ID: [Guido] > My conclusion: the file upload is refused iff the comment is empty -- > in other words the complaint about an empty comment is coded wrongly > and should only occur when the comment is empty *and* no file is > uploaded. Or maybe they want you to add a comment with your file -- > that's fine too, but the error isn't very clear. > > http or https made no difference. I used NS 4.72 on Linux; Tim used > IE and had the same results. BTW, this may be more pervasive: I recall that in the wee hours, I kept getting "ERROR: nothing changed" rejections when I was just trying to clean up old reports via doing nothing but changing the assigned-to (for example) dropdown list value. Perhaps they want a comment with every change of any kind now? From guido at digicool.com Wed Feb 28 01:46:14 2001 From: guido at digicool.com (Guido van Rossum) Date: Tue, 27 Feb 2001 19:46:14 -0500 Subject: [Python-Dev] Re: Patch uploads broken In-Reply-To: Your message of "Tue, 27 Feb 2001 19:38:15 EST." References: Message-ID: <200102280046.TAA19712@cj20424-a.reston1.va.home.com> > BTW, this may be more pervasive: I recall that in the wee hours, I kept > getting "ERROR: nothing changed" rejections when I was just trying to clean > up old reports via doing nothing but changing the assigned-to (for example) > dropdown list value. Perhaps they want a comment with every change of any > kind now? Which in itself is not a bad policy. But the error sucks. --Guido van Rossum (home page: http://www.python.org/~guido/) From sdm7g at virginia.edu Wed Feb 28 02:59:56 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Tue, 27 Feb 2001 20:59:56 -0500 (EST) Subject: [Python-Dev] Case-sensitive import In-Reply-To: Message-ID: On Tue, 27 Feb 2001, Tim Peters wrote: > Please just solve the problem for the platforms you're actually running on -- > case-insensitive filesystems are not "Unix only" in any meaningful sense of > that phrase, and each not-really-Unix platform is likely to have its own > stupid gimmicks for worming around this problem anyway. For example, Cygwin > defers to the Windows API. Great! That solves the problem there. > Generalization is premature. This isn't an attempt at abstract theorizing: I'm running Darwin with and without MacOSX on top, as well as MkLinux, LinuxPPC, and of course, various versions of "Classic" MacOS on various machines. I would gladly drop the others for MacOSX, but OSX won't run on all of the older machines. I'm hoping those machines will get replaced before I actually have to support all of those flavors, so I'm not trying to bend over backwards to be portable, but I'm also trying not to shoot myself in the foot by being overly un-general! 
It's not, for me, being any more premature than you wondering if the VMS users will scream at the changes. ( Although, in both cases, I think it's reasonable to say: "I thought about it -- now here's what we're going to do anyway!" I suspect that folks running Darwin on Intel are using UFS and don't want the overhead either, but I'm not even trying to generalize to them yet! ) > > In other words: I can rename the current version to check_case and > > fix the args to match. (Although, I recall that the args to check_case > > were rather more awkward to handle, but I'll have to look again. ) > > Good! I'm not going to wait for that, though. I desperately need a nap, but > when I get up I'll check in changes that should be sufficient for the Windows > and Cygwin parts of this, without regressing on other platforms. We'll then > have to figure out whatever #ifdef'ery is needed for your platform(s). __MACH__ is predefined, meaning mach system calls are supported, and __APPLE__ is predefined -- I think it means it's Apple's compiler. So: #if defined(__MACH__) && defined(__APPLE__) ought to uniquely identify Darwin, at least until Apple does another OS. ( Maybe it would be cleaner to have config add -DDarwin switches -- or if you want to get general -D$MACHDEP -- except that I don't think all the values of MACHDEP will parse as symbols. ) -- Steve Majewski From sdm7g at virginia.edu Wed Feb 28 03:16:36 2001 From: sdm7g at virginia.edu (Steven D. Majewski) Date: Tue, 27 Feb 2001 21:16:36 -0500 (EST) Subject: [Python-Dev] Case-sensitive import In-Reply-To: Message-ID: On Tue, 27 Feb 2001, Tim Peters wrote: > > check_case will be used differently now. > If check_case will be used differently, then why not just use "#ifdef CHECK_IMPORT_CASE" as the switch? -- Steve Majewski From Jason.Tishler at dothill.com Wed Feb 28 04:32:16 2001 From: Jason.Tishler at dothill.com (Jason Tishler) Date: Tue, 27 Feb 2001 22:32:16 -0500 Subject: [Python-Dev] Re: Case-sensitive import In-Reply-To: ; from tim.one@home.com on Tue, Feb 27, 2001 at 02:27:12PM -0500 References: Message-ID: <20010227223216.C252@dothill.com> Tim, On Tue, Feb 27, 2001 at 02:27:12PM -0500, Tim Peters wrote: > Jason, I *assume* that the existing "#if defined(MS_WIN32) || > defined(__CYGWIN__)" version of check_case works already for you. Scream if > that's wrong. I guess it depends on what you mean by "works." When I submitted my patch to enable case-sensitive imports for Cygwin, I mistakenly thought that I was solving import problems such as "import TERMIOS, termios". Unfortunately, I was only enabling the (old) Win32 "Case mismatch for module name foo" code for Cygwin too. Subsequently, there have been changes to Cygwin gcc that may make it difficult (i.e., require non-standard -I options) to find Win32 header files like "windows.h". So from an ease of building point of view, it would be better to stick with POSIX calls and avoid direct Win32 ones. Unfortunately, from an efficiency point of view, it sounds like this is unavoidable. I would like to test your patch with both Cygwin gcc 2.95.2-6 (i.e., Win32 friendly) and 2.95.2-7 (i.e., Unix bigot). Please let me know when it's ready. Thanks, Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. 
Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler at dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From Jason.Tishler at dothill.com Wed Feb 28 05:01:51 2001 From: Jason.Tishler at dothill.com (Jason Tishler) Date: Tue, 27 Feb 2001 23:01:51 -0500 Subject: [Python-Dev] Re: Patch uploads broken In-Reply-To: ; from akuchlin@mems-exchange.org on Tue, Feb 27, 2001 at 05:34:06PM -0500 References: Message-ID: <20010227230151.D252@dothill.com> On Tue, Feb 27, 2001 at 05:34:06PM -0500, Andrew Kuchling wrote: > The SourceForge admins couldn't replicate the patch upload problem, > and managed to attach a file to the Python bug report in question, yet > when I try it, it still fails for me. So, a question for this list: > has uploading patches or other files been working for you recently, > particularly today? Maybe with more data, we can see a pattern > (browser version, SSL/non-SSL, cluefulness of user, ...). I still can't upload patch files (even though I always supply a comment). Specifically, I getting the following error message in red at the top of the page after pressing the "Submit Changes" button: ArtifactFile: File name, type, size, and data are RequiredSuccessfully Updated FWIW, I'm using Netscape 4.72 on Windows. Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler at dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From tim.one at home.com Wed Feb 28 05:08:05 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 23:08:05 -0500 Subject: [Python-Dev] Case-sensitive import In-Reply-To: Message-ID: >> check_case will be used differently now. [Steven] > If check_case will be used differently, then why not just use > "#ifdef CHECK_IMPORT_CASE" as the switch? Sorry, I don't understand what you have in mind. In my mind, CHECK_IMPORT_CASE goes away, since we're attempting to get the same semantics on all platforms, and a yes/no #define doesn't carry enough info to accomplish that. From tim.one at home.com Wed Feb 28 05:29:33 2001 From: tim.one at home.com (Tim Peters) Date: Tue, 27 Feb 2001 23:29:33 -0500 Subject: [Python-Dev] RE: Case-sensitive import In-Reply-To: <20010227223216.C252@dothill.com> Message-ID: [Tim] >> Jason, I *assume* that the existing "#if defined(MS_WIN32) || >> defined(__CYGWIN__)" version of check_case works already for >> you. Scream if that's wrong. [Jason] > I guess it depends on what you mean by "works." I meant that independent of errors you don't want to see, and independent of the allcaps8x3 silliness, check_case returns 1 if there's a case-sensitive match and 0 if not. > When I submitted my patch to enable case-sensitive imports for Cygwin, > I mistakenly thought that I was solving import problems such as "import > TERMIOS, termios". Unfortunately, I was only enabling the (old) Win32 > "Case mismatch for module name foo" code for Cygwin too. Then if you succeeded in enabling that, "it works" in the sense I meant. My intent is to stop the errors, take away the allcaps8x3 stuff, and change the *calling* code to just keep going when check_case returns 0. > Subsequently, there have been changes to Cygwin gcc that may make it > difficult (i.e., require non-standard -I options) to find Win32 header > files like "windows.h". So from an ease of building point of view, it > would be better to stick with POSIX calls and avoid direct Win32 ones. 
> Unfortunately, from an efficiency point of view, it sounds like this is > unavoidable. > > I would like to test your patch with both Cygwin gcc 2.95.2-6 (i.e., > Win32 friendly) and 2.95.2-7 (i.e., Unix bigot). Please let me know > when it's ready. Not terribly long after I get to stop writing email <0.9 wink>. But since the only platform I can test here is plain Windows, and Cygwin and sundry Mac variations appear to be moving targets, once it works on Windows I'm just going to check it in. You and Steven will then have to figure out what you need to do on your platforms. OK by me if you two recreate the HAVE_DIRENT_H stuff, but (a) not if Linux takes that path too; and, (b) if Cygwin ends up using that, please get rid of the Cygwin-specific tricks in the plain Windows case (this module is already one of the hardest to maintain, and having random pieces of #ifdef'ed code in it that will never be used hurts). From barry at digicool.com Wed Feb 28 06:05:30 2001 From: barry at digicool.com (Barry A. Warsaw) Date: Wed, 28 Feb 2001 00:05:30 -0500 Subject: [Python-Dev] Case-sensitive import References: Message-ID: <15004.34586.744058.938851@anthem.wooz.org> >>>>> "SDM" == Steven D Majewski writes: SDM> Yes. Barry added the _DIRENT_HAVE_D_NAMELINE to handle a SDM> difference in the linux dirent structs. Actually, my Linux distro's dirent.h has almost the same test on _DIRENT_HAVE_D_NAMLEN (sic) -- which looking again now at import.c it's obvious I misspelled it! Tim, if you clean this code up and decide to continue to use the d_namlen slot, please fix the macro test. -Barry From akuchlin at cnri.reston.va.us Wed Feb 28 06:21:54 2001 From: akuchlin at cnri.reston.va.us (Andrew Kuchling) Date: Wed, 28 Feb 2001 00:21:54 -0500 Subject: [Python-Dev] Re: Patch uploads broken In-Reply-To: <20010227230151.D252@dothill.com>; from Jason.Tishler@dothill.com on Tue, Feb 27, 2001 at 11:01:51PM -0500 References: <20010227230151.D252@dothill.com> Message-ID: <20010228002154.A16737@newcnri.cnri.reston.va.us> On Tue, Feb 27, 2001 at 11:01:51PM -0500, Jason Tishler wrote: >I still can't upload patch files (even though I always supply a comment). >Specifically, I getting the following error message in red at the top >of the page after pressing the "Submit Changes" button: Same here. It's not from leaving the comment field empty (I got the error message too and figured out what it meant); instead I can fill in a comment, select a file, and upload it. The comment shows up; the attachment doesn't (using NS4.75 on Linux). Anyone got other failures to report? --amk From jeremy at alum.mit.edu Wed Feb 28 06:28:08 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 28 Feb 2001 00:28:08 -0500 (EST) Subject: [Python-Dev] puzzled about old checkin to pythonrun.c Message-ID: <15004.35944.494314.814348@w221.z064000254.bwi-md.dsl.cnc.net> Fred, You made a change to the syntax error generation code last August. I don't understand what the code is doing. It appears that the code you added is redundant, but it's hard to tell for sure because responsbility for generating well-formed SyntaxErrors is spread across several files. 
The code you added in pythonrun.c, line 1084, in err_input(), starts with the test (v != NULL): w = Py_BuildValue("(sO)", msg, v); PyErr_SetObject(errtype, w); Py_XDECREF(w); if (v != NULL) { PyObject *exc, *tb; PyErr_Fetch(&errtype, &exc, &tb); PyErr_NormalizeException(&errtype, &exc, &tb); if (PyObject_SetAttrString(exc, "filename", PyTuple_GET_ITEM(v, 0))) PyErr_Clear(); if (PyObject_SetAttrString(exc, "lineno", PyTuple_GET_ITEM(v, 1))) PyErr_Clear(); if (PyObject_SetAttrString(exc, "offset", PyTuple_GET_ITEM(v, 2))) PyErr_Clear(); Py_DECREF(v); PyErr_Restore(errtype, exc, tb); } What's weird about this code is that the __init__ code for a SyntaxError (all errors will be SyntaxErrors at this point) sets filename, lineno, and offset. Each of the values is passed to the constructor as the tuple v; then the new code gets the items out of the tuple and sets the explicitly. You also made a bunch of changes to SyntaxError__str__ at the same time. I wonder if they were sufficient to fix the bug (which has tracker aid 210628 incidentally). Can you shed any light? Jeremy From tim.one at home.com Wed Feb 28 06:48:57 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 28 Feb 2001 00:48:57 -0500 Subject: [Python-Dev] Case-sensitive import In-Reply-To: Message-ID: Here's the checkin comment for rev 2.163 of import.c: """ Implement PEP 235: Import on Case-Insensitive Platforms. http://python.sourceforge.net/peps/pep-0235.html Renamed check_case to case_ok. Substantial code rearrangement to get this stuff in one place in the file. Innermost loop of find_module() now much simpler and #ifdef-free, and I want to keep it that way (it's bad enough that the innermost loop is itself still in an #ifdef!). Windows semantics tested and are fine. Jason, Cygwin *should* be fine if and only if what you did for check_case() before still "works". Jack, the semantics on your flavor of Mac have definitely changed (see the PEP), and need to be tested. The intent is that your flavor of Mac now work the same as everything else in the "lower left" box, including respecting PYTHONCASEOK. There is a non-zero chance that I already changed the "#ifdef macintosh" code correctly to achieve that. Steven, sorry, you did the most work here so far but you got screwed the worst. Happy to work with you on repairing it, but I don't understand anything about all your Mac variants and don't have time to learn before the beta. We need to add another branch (or two, three, ...?) inside case_ok for you. But we should not need to change anything else. """ Someone please check Linux etc too, although everything that doesn't match one of these #ifdef's: #if defined(MS_WIN32) || defined(__CYGWIN__) #elif defined(DJGPP) #elif defined(macintosh) *should* act as if the platform filesystem were case-sensitive (i.e., that if fopen() succeeds, the case must match already and so there's no need for any more work to check that). Jason, if Cygwin is broken, please coordinate with Steven since you two appear to have similar problems then. [Steven] > __MACH__ is predefined, meaning mach system calls are supported, and > __APPLE__ is predefined -- I think it means it's Apple's compiler. So: > > #if defined(__MACH__) && defined(__APPLE__) > > ought to uniquely identify Darwin, at least until Apple does another OS. > > ( Maybe it would be cleaner to have config add -DDarwin switches -- or > if you want to get general -D$MACHDEP -- except that I don't think all > the values of MACHDEP will parse as symbols. ) This is up to you. 
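For readers without import.c handy, the check being renamed to case_ok boils down to the following, sketched here in Python rather than C. The PYTHONCASEOK escape hatch and the directory scan are the real module's ideas; everything else (the signature, the helper name) is illustrative only.

    import os

    def case_ok(filename):
        # If the user says case doesn't matter, believe them.
        if os.environ.get("PYTHONCASEOK"):
            return True
        head, tail = os.path.split(filename)
        # Naive portable fallback: list the directory and demand an exact,
        # case-sensitive match.  This is the "woefully inefficient" scan the
        # thread wants confined to platforms that actually need it.
        return tail in os.listdir(head or os.curdir)

    print(case_ok(os.__file__))   # True: the stdlib file's case matches

On a case-insensitive filesystem, fopen("spam.py") can succeed even when the file on disk is SPAM.py; under the new semantics a failed case_ok just makes find_module() keep looking, instead of stopping with the old "Case mismatch" NameError.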
I'm sorry to have broken your old code, but Barry should not have accepted it to begin with . Speaking of which, [Barry] > SDM> Yes. Barry added the _DIRENT_HAVE_D_NAMELINE to handle a > SDM> difference in the linux dirent structs. > > Actually, my Linux distro's dirent.h has almost the same test on > _DIRENT_HAVE_D_NAMLEN (sic) -- which looking again now at import.c > it's obvious I misspelled it! > > Tim, if you clean this code up and decide to continue to use the > d_namlen slot, please fix the macro test. For now, I didn't change anything in the MatchFilename function, but put the entire thing in an "#if 0" block with an "XXX" comment, to make it easy for Steven and/or Jason to get at that source if one or both decide their platforms still need something like that. If they do, I'll double-check that this #define is spelled correctly when they check in their changes; else I'll delete that block before the release. Aren't release crunches great? Afraid they're infectious <0.5 wink>. From fdrake at acm.org Wed Feb 28 07:50:28 2001 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Wed, 28 Feb 2001 01:50:28 -0500 (EST) Subject: [Python-Dev] Re: puzzled about old checkin to pythonrun.c In-Reply-To: <15004.35944.494314.814348@w221.z064000254.bwi-md.dsl.cnc.net> References: <15004.35944.494314.814348@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <15004.40884.236605.266085@cj42289-a.reston1.va.home.com> Jeremy Hylton writes: > Can you shed any light? Not at this hour -- fading fast. I'll look at it in the morning. -Fred -- Fred L. Drake, Jr. PythonLabs at Digital Creations From moshez at zadka.site.co.il Wed Feb 28 11:43:08 2001 From: moshez at zadka.site.co.il (Moshe Zadka) Date: Wed, 28 Feb 2001 12:43:08 +0200 (IST) Subject: [Python-Dev] urllib2 and urllib Message-ID: <20010228104308.BAB5BAA6A@darjeeling.zadka.site.co.il> (Full disclosure: I've been payed to hack on urllib2) For a long time I've been feeling that urllib is a bit hackish, and not really suited to conveniently script web sites. The classic example is the interface to passwords, whose default behaviour is to stop and ask the user(!). Jeremy had urllib2 out for about a year and a half, and now that I've finally managed to have a look at it, I'm very impressed with the architecture, and I think it's superior to urllib. From pedroni at inf.ethz.ch Wed Feb 28 15:21:35 2001 From: pedroni at inf.ethz.ch (Samuele Pedroni) Date: Wed, 28 Feb 2001 15:21:35 +0100 (MET) Subject: [Python-Dev] pdb and nested scopes Message-ID: <200102281421.PAA17150@core.inf.ethz.ch> Hi. Sorry if everybody is already aware of this. I have checked the code for pdb in CVS , especially for the p cmd, maybe I'm wrong but given actual the implementation of things that gives no access to the value of free or cell variables. Should that be fixed? AFAIK pdb as it is works with jython too. So when fixing that, it would be nice if this would be preserved. regards, Samuele Pedroni. From jack at oratrix.nl Wed Feb 28 15:30:37 2001 From: jack at oratrix.nl (Jack Jansen) Date: Wed, 28 Feb 2001 15:30:37 +0100 Subject: [Python-Dev] Case-sensitive import In-Reply-To: Message by barry@digicool.com (Barry A. Warsaw) , Wed, 28 Feb 2001 00:05:30 -0500 , <15004.34586.744058.938851@anthem.wooz.org> Message-ID: <20010228143037.8F32D371690@snelboot.oratrix.nl> Why don't we handle this the same way as, say, PyOS_CheckStack()? I.e. if USE_CHECK_IMPORT_CASE is defined it is necessary to check the case of the imported file (i.e. 
it's not defined on vanilla unix, defined on most other platforms) and if it is defined we call PyOS_CheckCase(filename, modulename). All these routines can be in different files, for all I care, similar to the dynload_*.c files. -- Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++ Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++ www.oratrix.nl/~jack | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm From guido at digicool.com Wed Feb 28 16:34:52 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 28 Feb 2001 10:34:52 -0500 Subject: [Python-Dev] pdb and nested scopes In-Reply-To: Your message of "Wed, 28 Feb 2001 15:21:35 +0100." <200102281421.PAA17150@core.inf.ethz.ch> References: <200102281421.PAA17150@core.inf.ethz.ch> Message-ID: <200102281534.KAA28532@cj20424-a.reston1.va.home.com> > Hi. > > Sorry if everybody is already aware of this. No, it's new to me. > I have checked the code for pdb in CVS , especially for the p cmd, > maybe I'm wrong but given actual the implementation of things that > gives no access to the value of free or cell variables. Should that > be fixed? I think so. I've noted that the locals() function also doesn't see cell variables: from __future__ import nested_scopes import pdb def f(): a = 12 print locals() def g(): print a g() a = 100 g() #pdb.set_trace() f() This prints {} 12 100 When I enable the pdb.set_trace() call, indeed the variable a is not found. > AFAIK pdb as it is works with jython too. So when fixing that, it would > be nice if this would be preserved. Yes! --Guido van Rossum (home page: http://www.python.org/~guido/) From Jason.Tishler at dothill.com Wed Feb 28 18:02:29 2001 From: Jason.Tishler at dothill.com (Jason Tishler) Date: Wed, 28 Feb 2001 12:02:29 -0500 Subject: [Python-Dev] Re: Case-sensitive import In-Reply-To: ; from tim.one@home.com on Tue, Feb 27, 2001 at 11:29:33PM -0500 References: <20010227223216.C252@dothill.com> Message-ID: <20010228120229.M449@dothill.com> Tim, On Tue, Feb 27, 2001 at 11:29:33PM -0500, Tim Peters wrote: > Not terribly long after I get to stop writing email <0.9 wink>. But since > the only platform I can test here is plain Windows, and Cygwin and sundry Mac > variations appear to be moving targets, once it works on Windows I'm just > going to check it in. You and Steven will then have to figure out what you > need to do on your platforms. I tested your changes on Cygwin and they work correctly. Thanks very much. Unfortunately, my concerns about building due to your implementation using direct Win32 APIs were realized. This delayed my response. The current Python CVS stills builds OOTB (with the exception of termios) with the current Cygwin gcc (i.e., 2.95.2-6). However, using the next Cygwin gcc (i.e., 2.95.2-8 or later) will require one to configure with: CC='gcc -mwin32' configure ... and the following minor patch be accepted: http://sourceforge.net/tracker/index.php?func=detail&aid=404928&group_id=5470&atid=305470 Thanks, Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler at dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From guido at digicool.com Wed Feb 28 18:12:05 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 28 Feb 2001 12:12:05 -0500 Subject: [Python-Dev] Re: Case-sensitive import In-Reply-To: Your message of "Wed, 28 Feb 2001 12:02:29 EST." 
<20010228120229.M449@dothill.com> References: <20010227223216.C252@dothill.com> <20010228120229.M449@dothill.com> Message-ID: <200102281712.MAA29568@cj20424-a.reston1.va.home.com> > and the following minor patch be accepted: > > http://sourceforge.net/tracker/index.php?func=detail&aid=404928&group_id=5470&atid=305470 That patch seems fine -- except that I'd like /F to have a quick look since it changes _sre.c. --Guido van Rossum (home page: http://www.python.org/~guido/) From fredrik at pythonware.com Wed Feb 28 18:36:09 2001 From: fredrik at pythonware.com (Fredrik Lundh) Date: Wed, 28 Feb 2001 18:36:09 +0100 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules _sre.c References: Message-ID: <048b01c0a1ac$f10cf920$e46940d5@hagrid> tim indirectly wrote: > *** _sre.c 2001/01/16 07:37:30 2.52 > --- _sre.c 2001/02/28 16:44:18 2.53 > *************** > *** 2370,2377 **** > }; > > ! void > ! #if defined(WIN32) > ! __declspec(dllexport) > ! #endif > init_sre(void) > { > --- 2370,2374 ---- > }; > > ! DL_EXPORT(void) > init_sre(void) > { after this change, the separate makefile I use to build _sre on Windows no longer works (init_sre isn't exported). I don't really understand the code in config.h, but I've tried defining USE_DL_EXPORT (gives linking problems) and USE_DL_IMPORT (macro redefinition). any ideas? Cheers /F From tim.one at home.com Wed Feb 28 18:36:45 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 28 Feb 2001 12:36:45 -0500 Subject: [Python-Dev] Re: Case-sensitive import In-Reply-To: <20010228120229.M449@dothill.com> Message-ID: [Jason] > I tested your changes on Cygwin and they work correctly. Thanks very much. Good! I guess that just leaves poor Steven hanging (although I've got ~200 emails I haven't gotten to yet, so maybe he's already pulled himself up). > Unfortunately, my concerns about building due to your implementation using > direct Win32 APIs were realized. This delayed my response. > > The current Python CVS stills builds OOTB (with the exception of termios) > with the current Cygwin gcc (i.e., 2.95.2-6). However, using the next > Cygwin gcc (i.e., 2.95.2-8 or later) will require one to configure with: > > CC='gcc -mwin32' configure ... > > and the following minor patch be accepted: > > http://sourceforge.net/tracker/index.php?func=detail&aid=404928&gro > up_id=5470&atid=305470 I checked that patch in already, about 15 minutes after you uploaded it. Is this service, or what?! [Guido] > That patch seems fine -- except that I'd like /F to have a quick look > since it changes _sre.c. Too late and no need. What Jason did to _sre.c is *undo* some Cygwin special-casing; /F will like that. It's trivial anyway. Jason, about this: However, using the next Cygwin gcc (i.e., 2.95.2-8 or later) will require one to configure with: CC='gcc -mwin32' configure ... How can we make that info *useful* to people? The target audience for the Cygwin port probably doesn't search Python-Dev or the Python patches database. So it would be good if you thought about uploading an informational patch to README and Misc/NEWS briefly telling Cygwin folks what they need to know. If you do, I'll look for it and check it in. 
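Fredrik's init_sre question above comes down to the symbol that Python's dynamic loader resolves when the extension is imported: it looks for an exported function named init_sre, so that symbol has to be exported one way or another (DL_EXPORT expanding to __declspec(dllexport), or an explicit /export:init_sre link flag, as Tim notes in the next message). A tiny, hypothetical illustration from the Python side -- the path is made up, and this only sketches the failure mode, it is not part of the thread:

import imp

try:
    # load_dynamic() resolves "init_sre" inside the extension binary.
    _sre = imp.load_dynamic("_sre", "build/_sre.pyd")
except ImportError, err:
    # With init_sre not exported, this typically reads something like
    # "dynamic module does not define init function (init_sre)".
    print err
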
From tim.one at home.com Wed Feb 28 18:42:12 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 28 Feb 2001 12:42:12 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules _sre.c In-Reply-To: <048b01c0a1ac$f10cf920$e46940d5@hagrid> Message-ID: >> *** _sre.c 2001/01/16 07:37:30 2.52 >> --- _sre.c 2001/02/28 16:44:18 2.53 >> *************** >> *** 2370,2377 **** >> }; >> >> ! void >> ! #if defined(WIN32) >> ! __declspec(dllexport) >> ! #endif >> init_sre(void) >> { >> --- 2370,2374 ---- >> }; >> >> ! DL_EXPORT(void) >> init_sre(void) >> { [/F] > after this change, the separate makefile I use to build _sre > on Windows no longer works (init_sre isn't exported). Oops! I tested it on Windows, so it works OK with the std build. > I don't really understand the code in config.h, Nobody does, alas. Mark Hammond and I have a delayed date to rework that. > but I've tried defining USE_DL_EXPORT (gives linking problems) and > USE_DL_IMPORT (macro redefinition). Sounds par for the course. > any ideas? Ya: the great thing about all these macros is that they're usually worse than useless (you try them, they break something). The _sre project has /export:init_sre buried in its link options. DL_EXPORT(void) expands to void. Not pretty, but it's the way everything else (outside the pythoncore project) works too. From jeremy at alum.mit.edu Wed Feb 28 18:58:58 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 28 Feb 2001 12:58:58 -0500 (EST) Subject: [Python-Dev] PEP 227 (was Re: Nested scopes resolution -- you can breathe again!) In-Reply-To: <200102230259.VAA19238@cj20424-a.reston1.va.home.com> References: <200102230259.VAA19238@cj20424-a.reston1.va.home.com> Message-ID: <15005.15458.703037.373890@w221.z064000254.bwi-md.dsl.cnc.net> Last week Guido sent a message about our decisions to make nested scopes an optional feature for 2.1 in advance of their mandatory introduction in Python 2.2. I've included an ndiff of the PEP for reference. The beta release on Friday will contain the features as described in the PEP. Jeremy -: old-pep-0227.txt +: pep-0227.txt PEP: 227 Title: Statically Nested Scopes - Version: $Revision: 1.6 $ ? ^ + Version: $Revision: 1.7 $ ? ^ Author: jeremy at digicool.com (Jeremy Hylton) Status: Draft Type: Standards Track Python-Version: 2.1 Created: 01-Nov-2000 Post-History: Abstract This PEP proposes the addition of statically nested scoping (lexical scoping) for Python 2.1. The current language definition defines exactly three namespaces that are used to resolve names -- the local, global, and built-in namespaces. The addition of nested scopes would allow resolution of unbound local names in enclosing functions' namespaces. One consequence of this change that will be most visible to Python programs is that lambda statements could reference variables in the namespaces where the lambda is defined. Currently, a lambda statement uses default arguments to explicitly creating bindings in the lambda's namespace. Introduction This proposal changes the rules for resolving free variables in - Python functions. The Python 2.0 definition specifies exactly - three namespaces to check for each name -- the local namespace, - the global namespace, and the builtin namespace. According to - this defintion, if a function A is defined within a function B, - the names bound in B are not visible in A. The proposal changes - the rules so that names bound in B are visible in A (unless A + Python functions. 
The new name resolution semantics will take + effect with Python 2.2. These semantics will also be available in + Python 2.1 by adding "from __future__ import nested_scopes" to the + top of a module. + + The Python 2.0 definition specifies exactly three namespaces to + check for each name -- the local namespace, the global namespace, + and the builtin namespace. According to this definition, if a + function A is defined within a function B, the names bound in B + are not visible in A. The proposal changes the rules so that + names bound in B are visible in A (unless A contains a name - contains a name binding that hides the binding in B). ? ---------------- + binding that hides the binding in B). The specification introduces rules for lexical scoping that are common in Algol-like languages. The combination of lexical scoping and existing support for first-class functions is reminiscent of Scheme. The changed scoping rules address two problems -- the limited - utility of lambda statements and the frequent confusion of new + utility of lagmbda statements and the frequent confusion of new ? + users familiar with other languages that support lexical scoping, e.g. the inability to define recursive functions except at the module level. + + XXX Konrad Hinsen suggests that this section be expanded The lambda statement introduces an unnamed function that contains a single statement. It is often used for callback functions. In the example below (written using the Python 2.0 rules), any name used in the body of the lambda must be explicitly passed as a default argument to the lambda. from Tkinter import * root = Tk() Button(root, text="Click here", command=lambda root=root: root.test.configure(text="...")) This approach is cumbersome, particularly when there are several names used in the body of the lambda. The long list of default arguments obscure the purpose of the code. The proposed solution, in crude terms, implements the default argument approach automatically. The "root=root" argument can be omitted. + The new name resolution semantics will cause some programs to + behave differently than they did under Python 2.0. In some cases, + programs will fail to compile. In other cases, names that were + previously resolved using the global namespace will be resolved + using the local namespace of an enclosing function. In Python + 2.1, warnings will be issued for all program statement that will + behave differently. + Specification Python is a statically scoped language with block structure, in the traditional of Algol. A code block or region, such as a - module, class defintion, or function body, is the basic unit of a + module, class definition, or function body, is the basic unit of a ? + program. Names refer to objects. Names are introduced by name binding operations. Each occurrence of a name in the program text refers to the binding of that name established in the innermost function block containing the use. The name binding operations are assignment, class and function definition, and import statements. Each assignment or import statement occurs within a block defined by a class or function definition or at the module level (the top-level code block). If a name binding operation occurs anywhere within a code block, all uses of the name within the block are treated as references to the current block. (Note: This can lead to errors when a name is used within a block before it is bound.) 
If the global statement occurs within a block, all uses of the name specified in the statement refer to the binding of that name in the top-level namespace. Names are resolved in the top-level namespace by searching the global namespace, the namespace of the module containing the code block, and the builtin namespace, the namespace of the module __builtin__. The global namespace is searched first. If the name is not found there, the builtin - namespace is searched. + namespace is searched. The global statement must precede all uses + of the name. If a name is used within a code block, but it is not bound there and is not declared global, the use is treated as a reference to the nearest enclosing function region. (Note: If a region is contained within a class definition, the name bindings that occur in the class block are not visible to enclosed functions.) A class definition is an executable statement that may uses and definitions of names. These references follow the normal rules for name resolution. The namespace of the class definition becomes the attribute dictionary of the class. The following operations are name binding operations. If they occur within a block, they introduce new local names in the current block unless there is also a global declaration. - Function defintion: def name ... + Function definition: def name ... ? + Class definition: class name ... Assignment statement: name = ... Import statement: import name, import module as name, from module import name Implicit assignment: names are bound by for statements and except clauses The arguments of a function are also local. There are several cases where Python statements are illegal when used in conjunction with nested scopes that contain free variables. If a variable is referenced in an enclosing scope, it is an error to delete the name. The compiler will raise a SyntaxError for 'del name'. - If the wildcard form of import (import *) is used in a function + If the wild card form of import (import *) is used in a function ? + and the function contains a nested block with free variables, the compiler will raise a SyntaxError. If exec is used in a function and the function contains a nested block with free variables, the compiler will raise a SyntaxError - unless the exec explicit specifies the local namespace for the + unless the exec explicitly specifies the local namespace for the ? ++ exec. (In other words, "exec obj" would be illegal, but "exec obj in ns" would be legal.) + If a name bound in a function scope is also the name of a module + global name or a standard builtin name and the function contains a + nested function scope that references the name, the compiler will + issue a warning. The name resolution rules will result in + different bindings under Python 2.0 than under Python 2.2. The + warning indicates that the program may not run correctly with all + versions of Python. + Discussion The specified rules allow names defined in a function to be referenced in any nested function defined with that function. The name resolution rules are typical for statically scoped languages, with three primary exceptions: - Names in class scope are not accessible. - The global statement short-circuits the normal rules. - Variables are not declared. Names in class scope are not accessible. Names are resolved in - the innermost enclosing function scope. If a class defintion + the innermost enclosing function scope. If a class definition ? + occurs in a chain of nested scopes, the resolution process skips class definitions. 
This rule prevents odd interactions between class attributes and local variable access. If a name binding - operation occurs in a class defintion, it creates an attribute on + operation occurs in a class definition, it creates an attribute on ? + the resulting class object. To access this variable in a method, or in a function nested within a method, an attribute reference must be used, either via self or via the class name. An alternative would have been to allow name binding in class scope to behave exactly like name binding in function scope. This rule would allow class attributes to be referenced either via attribute reference or simple name. This option was ruled out because it would have been inconsistent with all other forms of class and instance attribute access, which always use attribute references. Code that used simple names would have been obscure. The global statement short-circuits the normal rules. Under the proposal, the global statement has exactly the same effect that it - does for Python 2.0. It's behavior is preserved for backwards ? - + does for Python 2.0. Its behavior is preserved for backwards compatibility. It is also noteworthy because it allows name binding operations performed in one block to change bindings in another block (the module). Variables are not declared. If a name binding operation occurs anywhere in a function, then that name is treated as local to the function and all references refer to the local binding. If a reference occurs before the name is bound, a NameError is raised. The only kind of declaration is the global statement, which allows programs to be written using mutable global variables. As a consequence, it is not possible to rebind a name defined in an enclosing scope. An assignment operation can only bind a name in the current scope or in the global scope. The lack of declarations and the inability to rebind names in enclosing scopes are unusual for lexically scoped languages; there is typically a mechanism to create name bindings (e.g. lambda and let in Scheme) and a mechanism to change the bindings (set! in Scheme). XXX Alex Martelli suggests comparison with Java, which does not allow name bindings to hide earlier bindings. Examples A few examples are included to illustrate the way the rules work. XXX Explain the examples >>> def make_adder(base): ... def adder(x): ... return base + x ... return adder >>> add5 = make_adder(5) >>> add5(6) 11 >>> def make_fact(): ... def fact(n): ... if n == 1: ... return 1L ... else: ... return n * fact(n - 1) ... return fact >>> fact = make_fact() >>> fact(7) 5040L >>> def make_wrapper(obj): ... class Wrapper: ... def __getattr__(self, attr): ... if attr[0] != '_': ... return getattr(obj, attr) ... else: ... raise AttributeError, attr ... return Wrapper() >>> class Test: ... public = 2 ... _private = 3 >>> w = make_wrapper(Test()) >>> w.public 2 >>> w._private Traceback (most recent call last): File " ", line 1, in ? AttributeError: _private - An example from Tim Peters of the potential pitfalls of nested scopes ? ^ -------------- + An example from Tim Peters demonstrates the potential pitfalls of ? +++ ^^^^^^^^ - in the absence of declarations: + nested scopes in the absence of declarations: ? ++++++++++++++ i = 6 def f(x): def g(): print i # ... # skip to the next page # ... for i in x: # ah, i *is* local to f, so this is what g sees pass g() The call to g() will refer to the variable i bound in f() by the for loop. If g() is called before the loop is executed, a NameError will be raised. 
XXX need some counterexamples Backwards compatibility There are two kinds of compatibility problems caused by nested scopes. In one case, code that behaved one way in earlier - versions, behaves differently because of nested scopes. In the ? - + versions behaves differently because of nested scopes. In the other cases, certain constructs interact badly with nested scopes and will trigger SyntaxErrors at compile time. The following example from Skip Montanaro illustrates the first kind of problem: x = 1 def f1(): x = 2 def inner(): print x inner() Under the Python 2.0 rules, the print statement inside inner() refers to the global variable x and will print 1 if f1() is called. Under the new rules, it refers to the f1()'s namespace, the nearest enclosing scope with a binding. The problem occurs only when a global variable and a local variable share the same name and a nested function uses that name to refer to the global variable. This is poor programming practice, because readers will easily confuse the two different variables. One example of this problem was found in the Python standard library during the implementation of nested scopes. To address this problem, which is unlikely to occur often, a static analysis tool that detects affected code will be written. - The detection problem is straightfoward. + The detection problem is straightforward. ? + - The other compatibility problem is casued by the use of 'import *' ? - + The other compatibility problem is caused by the use of 'import *' ? + and 'exec' in a function body, when that function contains a nested scope and the contained scope has free variables. For example: y = 1 def f(): exec "y = 'gotcha'" # or from module import * def g(): return y ... At compile-time, the compiler cannot tell whether an exec that - operators on the local namespace or an import * will introduce ? ^^ + operates on the local namespace or an import * will introduce ? ^ name bindings that shadow the global y. Thus, it is not possible to tell whether the reference to y in g() should refer to the global or to a local name in f(). In discussion of the python-list, people argued for both possible interpretations. On the one hand, some thought that the reference in g() should be bound to a local y if one exists. One problem with this interpretation is that it is impossible for a human reader of the code to determine the binding of y by local inspection. It seems likely to introduce subtle bugs. The other interpretation is to treat exec and import * as dynamic features that do not effect static scoping. Under this interpretation, the exec and import * would introduce local names, but those names would never be visible to nested scopes. In the specific example above, the code would behave exactly as it did in earlier versions of Python. - Since each interpretation is problemtatic and the exact meaning ? - + Since each interpretation is problematic and the exact meaning ambiguous, the compiler raises an exception. A brief review of three Python projects (the standard library, Zope, and a beta version of PyXPCOM) found four backwards compatibility issues in approximately 200,000 lines of code. There was one example of case #1 (subtle behavior change) and two examples of import * problems in the standard library. (The interpretation of the import * and exec restriction that was implemented in Python 2.1a2 was much more restrictive, based on language that in the reference manual that had never been enforced. These restrictions were relaxed following the release.) 
+ Compatibility of C API + + The implementation causes several Python C API functions to + change, including PyCode_New(). As a result, C extensions may + need to be updated to work correctly with Python 2.1. + locals() / vars() These functions return a dictionary containing the current scope's local variables. Modifications to the dictionary do not affect the values of variables. Under the current rules, the use of locals() and globals() allows the program to gain access to all the namespaces in which names are resolved. An analogous function will not be provided for nested scopes. Under this proposal, it will not be possible to gain dictionary-style access to all visible scopes. + Warnings and Errors + + The compiler will issue warnings in Python 2.1 to help identify + programs that may not compile or run correctly under future + versions of Python. Under Python 2.2 or Python 2.1 if the + nested_scopes future statement is used, which are collectively + referred to as "future semantics" in this section, the compiler + will issue SyntaxErrors in some cases. + + The warnings typically apply when a function that contains a + nested function that has free variables. For example, if function + F contains a function G and G uses the builtin len(), then F is a + function that contains a nested function (G) with a free variable + (len). The label "free-in-nested" will be used to describe these + functions. + + import * used in function scope + + The language reference specifies that import * may only occur + in a module scope. (Sec. 6.11) The implementation of C + Python has supported import * at the function scope. + + If import * is used in the body of a free-in-nested function, + the compiler will issue a warning. Under future semantics, + the compiler will raise a SyntaxError. + + bare exec in function scope + + The exec statement allows two optional expressions following + the keyword "in" that specify the namespaces used for locals + and globals. An exec statement that omits both of these + namespaces is a bare exec. + + If a bare exec is used in the body of a free-in-nested + function, the compiler will issue a warning. Under future + semantics, the compiler will raise a SyntaxError. + + local shadows global + + If a free-in-nested function has a binding for a local + variable that (1) is used in a nested function and (2) is the + same as a global variable, the compiler will issue a warning. + Rebinding names in enclosing scopes There are technical issues that make it difficult to support rebinding of names in enclosing scopes, but the primary reason that it is not allowed in the current proposal is that Guido is opposed to it. It is difficult to support, because it would require a new mechanism that would allow the programmer to specify that an assignment in a block is supposed to rebind the name in an enclosing block; presumably a keyword or special syntax (x := 3) would make this possible. The proposed rules allow programmers to achieve the effect of rebinding, albeit awkwardly. The name that will be effectively rebound by enclosed functions is bound to a container object. In place of assignment, the program uses modification of the container to achieve the desired effect: def bank_account(initial_balance): balance = [initial_balance] def deposit(amount): balance[0] = balance[0] + amount return balance def withdraw(amount): balance[0] = balance[0] - amount return balance return deposit, withdraw Support for rebinding in nested scopes would make this code clearer. 
A class that defines deposit() and withdraw() methods and the balance as an instance variable would be clearer still. Since classes seem to achieve the same effect in a more straightforward manner, they are preferred. Implementation The implementation for C Python uses flat closures [1]. Each def or lambda statement that is executed will create a closure if the body of the function or any contained function has free variables. Using flat closures, the creation of closures is somewhat expensive but lookup is cheap. The implementation adds several new opcodes and two new kinds of names in code objects. A variable can be either a cell variable or a free variable for a particular code object. A cell variable is referenced by containing scopes; as a result, the function where it is defined must allocate separate storage for it on each - invocation. A free variable is reference via a function's closure. ? --------- + invocation. A free variable is referenced via a function's ? + + closure. + + The choice of free closures was made based on three factors. + First, nested functions are presumed to be used infrequently, + deeply nested (several levels of nesting) still less frequently. + Second, lookup of names in a nested scope should be fast. + Third, the use of nested scopes, particularly where a function + that access an enclosing scope is returned, should not prevent + unreferenced objects from being reclaimed by the garbage + collector. XXX Much more to say here References [1] Luca Cardelli. Compiling a functional language. In Proc. of the 1984 ACM Conference on Lisp and Functional Programming, pp. 208-217, Aug. 1984 http://citeseer.nj.nec.com/cardelli84compiling.html From tim.one at home.com Wed Feb 28 19:48:39 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 28 Feb 2001 13:48:39 -0500 Subject: [Python-Dev] Case-sensitive import In-Reply-To: <20010228143037.8F32D371690@snelboot.oratrix.nl> Message-ID: [Jack Jansen] > Why don't we handle this the same way as, say, PyOS_CheckStack()? > > I.e. if USE_CHECK_IMPORT_CASE is defined it is necessary to check > the case of the imported file (i.e. it's not defined on vanilla > unix, defined on most other platforms) and if it is defined we call > PyOS_CheckCase(filename, modulename). > All these routines can be in different files, for all I care, > similar to the dynload_*.c files. A. I want the code in the CVS tree. That some of your Mac code is not in the CVS tree creates problems for everyone (we can never guess whether we're breaking your code because we have no idea what your code is). B. PyOS_CheckCase() is not of general use. It's only of interest inside import.c, so it's better to live there as a static function. C. I very much enjoyed getting rid of the obfuscating #ifdef CHECK_IMPORT_CASE blocks in import.c! This code is hard enough to follow without distributing preprocessor tricks all over the place. Now they live only inside the body of case_ok(), where they're truly needed. That is, case_ok() is a perfectly sensible cross-platfrom abstraction, and *calling* code doesn't need to be bothered with how it's implemented-- or even whether it's needed --on various platfroms. On Linux, case_ok() reduces to the one-liner "return 1;", and I don't mind paying a function call in return for the increase in clarity inside find_module(). D. The schedule says we release the beta tomorrow <0.6 wink>. 
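Tim's point C above treats case_ok() as a single cross-platform abstraction whose platform #ifdefs stay hidden inside its body. A rough Python-level sketch of the rule it enforces per PEP 235 -- purely illustrative, with a hypothetical helper name, not the C code in import.c:

import os

def sketch_case_ok(directory, filename):
    # On a case-insensitive (but case-preserving) filesystem, a successful
    # fopen() is not enough: the on-disk name must match the requested name
    # exactly, unless PYTHONCASEOK asks for the old "first match wins" rule.
    if os.environ.get('PYTHONCASEOK'):
        return 1
    return filename in os.listdir(directory)

# e.g. "import String" with only string.py on disk should fail the check:
# sketch_case_ok('/usr/lib/python2.1', 'String.py') -> 0 unless PYTHONCASEOK is set

On a genuinely case-sensitive filesystem the question never comes up, which is why the Linux branch of case_ok() can collapse to "return 1;".
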
From Jason.Tishler at dothill.com Wed Feb 28 20:41:37 2001 From: Jason.Tishler at dothill.com (Jason Tishler) Date: Wed, 28 Feb 2001 14:41:37 -0500 Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Modules _sre.c In-Reply-To: <048b01c0a1ac$f10cf920$e46940d5@hagrid>; from fredrik@pythonware.com on Wed, Feb 28, 2001 at 06:36:09PM +0100 References: <048b01c0a1ac$f10cf920$e46940d5@hagrid> Message-ID: <20010228144137.P449@dothill.com> Fredrik, On Wed, Feb 28, 2001 at 06:36:09PM +0100, Fredrik Lundh wrote: > tim indirectly wrote: > > > *** _sre.c 2001/01/16 07:37:30 2.52 > > --- _sre.c 2001/02/28 16:44:18 2.53 > [snip] > > after this change, the separate makefile I use to build _sre > on Windows no longer works (init_sre isn't exported). > > I don't really understand the code in config.h, but I've tried > defining USE_DL_EXPORT (gives linking problems) and > USE_DL_IMPORT (macro redefinition). USE_DL_EXPORT is to be defined only when building the Win32 (and Cygwin) DLL core not when building extensions. When building Win32 Python, USE_DL_IMPORT is implicitly defined in PC/config.h when USE_DL_EXPORT is not. Explicitly defining USE_DL_IMPORT will cause the macro redefinition warning indicated above -- but no other ill or good effect. Another way to solve your problem without using the "/export:init_sre" link option is by patching PC/config.h with the attached. When I was converting Cygwin Python to use a DLL core instead of a static library one, I wondered why the USE_DL_IMPORT case was missing the following: #define DL_EXPORT(RTYPE) __declspec(dllexport) RTYPE Anyway, sorry that I caused you some heartache. Jason P.S. If this patch is to be seriously considered, then the analogous change should be done for the other Win32 compilers (e.g. Borland). -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler at dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com -------------- next part -------------- Index: config.h =================================================================== RCS file: /cvsroot/python/python/dist/src/PC/config.h,v retrieving revision 1.49 diff -u -r1.49 config.h --- config.h 2001/02/28 08:15:16 1.49 +++ config.h 2001/02/28 19:16:52 @@ -118,6 +118,7 @@ #endif #ifdef USE_DL_IMPORT #define DL_IMPORT(RTYPE) __declspec(dllimport) RTYPE +#define DL_EXPORT(RTYPE) __declspec(dllexport) RTYPE #endif #ifdef USE_DL_EXPORT #define DL_IMPORT(RTYPE) __declspec(dllexport) RTYPE From Jason.Tishler at dothill.com Wed Feb 28 21:17:28 2001 From: Jason.Tishler at dothill.com (Jason Tishler) Date: Wed, 28 Feb 2001 15:17:28 -0500 Subject: [Python-Dev] Re: Case-sensitive import In-Reply-To: ; from tim.one@home.com on Wed, Feb 28, 2001 at 12:36:45PM -0500 References: <20010228120229.M449@dothill.com> Message-ID: <20010228151728.Q449@dothill.com> Tim, On Wed, Feb 28, 2001 at 12:36:45PM -0500, Tim Peters wrote: > I checked that patch in already, about 15 minutes after you uploaded it. Is > this service, or what?! Yes! Thanks again. > [Guido] > > That patch seems fine -- except that I'd like /F to have a quick look > > since it changes _sre.c. > > Too late and no need. What Jason did to _sre.c is *undo* some Cygwin > special-casing; Not really -- I was trying to get rid of WIN32 #ifdefs. My solution was to attempt to reuse the DL_EXPORT macro. 
Now I realize that I should have done the following instead: #if defined(WIN32) || defined(__CYGWIN__) __declspec(dllexport) #endif > /F will like that. Apparently not. > It's trivial anyway. I thought so too. > Jason, about this: > > However, using the next Cygwin gcc (i.e., 2.95.2-8 or later) will > require one to configure with: > > CC='gcc -mwin32' configure ... > > How can we make that info *useful* to people? I have posted to the Cygwin mailing list and C.L.P regarding my original 2.0 patches. I have also continue to post to Cygwin regarding 2.1a1 and 2.1a2. I intended to do likewise for 2.1b1, etc. > The target audience for the > Cygwin port probably doesn't search Python-Dev or the Python patches > database. Agreed -- the above was only offered to the curious Python-Dev person and not for archival purposes. > So it would be good if you thought about uploading an > informational patch to README and Misc/NEWS briefly telling Cygwin folks what > they need to know. If you do, I'll look for it and check it in. I will submit a patch to README to add a Cygwin section to "Platform specific notes". Unfortunately, I don't think that I can squeeze it in by 2.1b1. If not, then I will submit it for the next release (2.1b2 or 2.1 final). I also don't mind waiting for the Cygwin gcc stuff to settle down too. I know...excuses, excuses... Thanks, Jason -- Jason Tishler Director, Software Engineering Phone: +1 (732) 264-8770 x235 Dot Hill Systems Corp. Fax: +1 (732) 264-8798 82 Bethany Road, Suite 7 Email: Jason.Tishler at dothill.com Hazlet, NJ 07730 USA WWW: http://www.dothill.com From tim.one at home.com Wed Feb 28 23:12:47 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 28 Feb 2001 17:12:47 -0500 Subject: [Python-Dev] test_inspect.py still fails under -O In-Reply-To: Message-ID: > python -O ../lib/test/test_inspect.py Traceback (most recent call last): File "../lib/test/test_inspect.py", line 172, in ? 'trace() row 1') File "../lib/test/test_inspect.py", line 70, in test raise TestFailed, message % args test_support.TestFailed: trace() row 1 > git.tr[0][1:] is ('@test', 8, 'spam', ['def spam(a, b, c, d=3, (e, (f,))=(4, (5,)), *g, **h):\n'], 0) at this point. The test expects it to be ('@test', 9, 'spam', [' eggs(b + d, c + f)\n'], 0) Test passes without -O. This was on Windows. Other platforms? From tim.one at home.com Wed Feb 28 23:21:02 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 28 Feb 2001 17:21:02 -0500 Subject: [Python-Dev] Re: Case-sensitive import In-Reply-To: <20010228151728.Q449@dothill.com> Message-ID: [Jason Tishler] > ... > Not really -- I was trying to get rid of WIN32 #ifdefs. My solution was > to attempt to reuse the DL_EXPORT macro. Now I realize that I should > have done the following instead: > > #if defined(WIN32) || defined(__CYGWIN__) > __declspec(dllexport) > #endif Na, you did good! If /F wants to bark at someone, he should bark at me, because I reviewed the patch carefully before checking it in and it's the same thing I would have done. MarkH and I have long-delayed plans to change these macro schemes to make some sense, and the existing DL_EXPORT uses-- no matter how useless now --will be handy to look for when we change the appropriate ones to, e.g., DL_MODULE_ENTRY_POINT macros (that always expand to the correct platform-specific decl gimmicks). _sre.c was the oddball here. > ... > I will submit a patch to README to add a Cygwin section to "Platform > specific notes". Unfortunately, I don't think that I can squeeze it in > by 2.1b1. 
If not, then I will submit it for the next release (2.1b2 or 2.1 > final). I also don't mind waiting for the Cygwin gcc stuff to settle > down too. I know...excuses, excuses... That's fine. You know the Cygwin audience better than I do -- as I've proved beyond reasonable doubt several times . And thank you for your Cygwin work -- someday I hope to use Cygwin for more than just running "patch" on this box ... From martin at loewis.home.cs.tu-berlin.de Wed Feb 28 23:19:13 2001 From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis) Date: Wed, 28 Feb 2001 23:19:13 +0100 Subject: [Python-Dev] PEP 236: an alternative to the __future__ syntax Message-ID: <200102282219.f1SMJDg04557@mira.informatik.hu-berlin.de> PEP 236 states that the intention of the proposed feature is to allow modules "to request that the code in module M use the new syntax or semantics in the current release C". It achieves this by introducing a new statement, the future_statement. This looks like an import statement, but isn't. The PEP author admits that 'overloading "import" does suck'. I agree (not surprisingly, since Tim added this QA item after we discussed it in email). It also says "But if we introduce a new keyword, that in itself would break old code". Here I disagree, and I propose patch 404997 as an alternative (https://sourceforge.net/tracker/index.php?func=detail&aid=404997&group_id=5470&atid=305470) In essence, with that patch, you would write directive nested_scopes instead of from __future__ import nested_scopes This looks like as it would add a new keyword directive, and thus break code that uses "directive" as an identifier, but it doesn't. In this release, "directive" is only a keyword if it is the first keyword in a file (i.e. potentially after a doc string, but not after any other keyword). So class directive: def __init__(self, directive): self.directive = directive continues to work as it did in previous releases (it does not even produce a warning, but could if desired). Only when you do directive nested_scopes directive braces class directive: def __init__(self, directive): self.directive = directive you get a syntax error, since "directive" is then a keyword in that module. The directive statement has a similar syntax to the C #pragma "statement", in that each directive has a name and an optional argument. The choice of the keyword "directive" is somewhat arbitrary; it was deliberately not "pragma", since that implies an implementation-defined semantics (which directive does not have). In terms of backwards compatibility, it behaves similar to "from __future__ import ...": older releases will give a SyntaxError for the directive syntax (instead of an ImportError, which a __future__ import will give). "Unknown" directives will also give a SyntaxError, similar to the ImportError from the __future__ import. Please let me know what you think. If you think this should be written down in a PEP, I'd request that the specification above is added into PEP 236. Regards, Martin From fredrik at effbot.org Wed Feb 28 23:42:56 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Wed, 28 Feb 2001 23:42:56 +0100 Subject: [Python-Dev] test_inspect.py still fails under -O References: Message-ID: <06c501c0a1d7$cdd346f0$e46940d5@hagrid> tim wrote: > git.tr[0][1:] is > > ('@test', 8, 'spam', > ['def spam(a, b, c, d=3, (e, (f,))=(4, (5,)), *g, **h):\n'], > 0) > > at this point. The test expects it to be > > ('@test', 9, 'spam', > [' eggs(b + d, c + f)\n'], > 0) > > Test passes without -O. 
the code doesn't take LINENO optimization into account. tentative patch follows: Index: Lib/inspect.py =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/inspect.py,v retrieving revision 1.2 diff -u -r1.2 inspect.py --- Lib/inspect.py 2001/02/28 08:26:44 1.2 +++ Lib/inspect.py 2001/02/28 22:35:49 @@ -561,19 +561,19 @@ filename = getsourcefile(frame) if context > 0: - start = frame.f_lineno - 1 - context/2 + start = _lineno(frame) - 1 - context/2 try: lines, lnum = findsource(frame) start = max(start, 1) start = min(start, len(lines) - context) lines = lines[start:start+context] - index = frame.f_lineno - 1 - start + index = _lineno(frame) - 1 - start except: lines = index = None else: lines = index = None - return (filename, frame.f_lineno, frame.f_code.co_name, lines, index) + return (filename, _lineno(frame), frame.f_code.co_name, lines, index) def getouterframes(frame, context=1): """Get a list of records for a frame and all higher (calling) frames. @@ -614,3 +614,26 @@ def trace(context=1): """Return a list of records for the stack below the current exception.""" return getinnerframes(sys.exc_traceback, context) + +def _lineno(frame): + # Coded by Marc-Andre Lemburg from the example of PyCode_Addr2Line() + # in compile.c. + # Revised version by Jim Hugunin to work with JPython too. + # Adapted for inspect.py by Fredrik Lundh + + lineno = frame.f_lineno + + c = frame.f_code + if not hasattr(c, 'co_lnotab'): + return tb.tb_lineno + + tab = c.co_lnotab + line = c.co_firstlineno + stopat = frame.f_lasti + addr = 0 + for i in range(0, len(tab), 2): + addr = addr + ord(tab[i]) + if addr > stopat: + break + line = line + ord(tab[i+1]) + return line Cheers /F From tim.one at home.com Wed Feb 28 23:42:16 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 28 Feb 2001 17:42:16 -0500 Subject: [Python-Dev] PEP 236: an alternative to the __future__ syntax In-Reply-To: <200102282219.f1SMJDg04557@mira.informatik.hu-berlin.de> Message-ID: [Martin v. Loewis] > ... > If you think this should be written down in a PEP, Yes. > I'd request that the specification above is added into PEP 236. No -- PEP 236 is not a general directive PEP, no matter how much that what you *want* is a general directive PEP. I'll add a Q/A pair to 236 about why it's not a general directive PEP, but that's it. PEP 236 stands on its own for what it's designed for; your PEP should stand on its own for what *it's* designed for (which isn't nested_scopes et alia, it's character encodings). (BTW, there is no patch attached to patch 404997 -- see other recent msgs about people having problems uploading files to SF; maybe you could just put a patch URL in a comment now?] From fredrik at effbot.org Wed Feb 28 23:49:57 2001 From: fredrik at effbot.org (Fredrik Lundh) Date: Wed, 28 Feb 2001 23:49:57 +0100 Subject: [Python-Dev] test_inspect.py still fails under -O References: <06c501c0a1d7$cdd346f0$e46940d5@hagrid> Message-ID: <071401c0a1d8$c830e7b0$e46940d5@hagrid> I wrote: > + lineno = frame.f_lineno > + > + c = frame.f_code > + if not hasattr(c, 'co_lnotab'): > + return tb.tb_lineno that "return" statement should be: return lineno Cheers /F From guido at digicool.com Wed Feb 28 23:48:51 2001 From: guido at digicool.com (Guido van Rossum) Date: Wed, 28 Feb 2001 17:48:51 -0500 Subject: [Python-Dev] PEP 236: an alternative to the __future__ syntax In-Reply-To: Your message of "Wed, 28 Feb 2001 23:19:13 +0100." 
<200102282219.f1SMJDg04557@mira.informatik.hu-berlin.de> References: <200102282219.f1SMJDg04557@mira.informatik.hu-berlin.de> Message-ID: <200102282248.RAA31007@cj20424-a.reston1.va.home.com> Martin, this looks nice, but where's the patch? (Not in the patch mgr.) We're planning the b1 release for Friday -- in two days. We need some time for our code base to stabilize. There's one downside to the "directive" syntax: other tools that parse Python will have to be adapted. The __future__ hack doesn't need that. --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.one at home.com Wed Feb 28 23:52:33 2001 From: tim.one at home.com (Tim Peters) Date: Wed, 28 Feb 2001 17:52:33 -0500 Subject: [Python-Dev] Very recent test_global failure Message-ID: Windows. > python ../lib/test/regrtest.py test_global test_global :2: SyntaxWarning: name 'a' is assigned to before global declaration :2: SyntaxWarning: name 'b' is assigned to before global declaration The actual stdout doesn't match the expected stdout. This much did match (between asterisk lines): ********************************************************************** test_global ********************************************************************** Then ... We expected (repr): 'got SyntaxWarning as e' But instead we got: 'expected SyntaxWarning' test test_global failed -- Writing: 'expected SyntaxWarning', expected: 'got SyntaxWarning as e' 1 test failed: test_global > From jeremy at alum.mit.edu Wed Feb 28 23:40:05 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 28 Feb 2001 17:40:05 -0500 (EST) Subject: [Python-Dev] Very recent test_global failure In-Reply-To: References: Message-ID: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net> Just fixed. Guido's new, handy-dandy warning helper for the compiler checks for a warning that has been turned into an error. If the warning becomes an error, the SyntaxWarning is replaced with a SyntaxError. The change broke this test, but was otherwise a good thing. It allows reasonable tracebacks to be produced. Jeremy From jeremy at alum.mit.edu Wed Feb 28 23:48:15 2001 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 28 Feb 2001 17:48:15 -0500 (EST) Subject: [Python-Dev] Very recent test_global failure In-Reply-To: References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net> Oops. Missed a checkin to symtable.h. unix-users-prepare-to-recompile-everything-ly y'rs, Jeremy From fred at digicool.com Wed Feb 28 23:35:46 2001 From: fred at digicool.com (Fred L. Drake, Jr.) Date: Wed, 28 Feb 2001 17:35:46 -0500 (EST) Subject: [Python-Dev] Re: puzzled about old checkin to pythonrun.c In-Reply-To: <15004.35944.494314.814348@w221.z064000254.bwi-md.dsl.cnc.net> References: <15004.35944.494314.814348@w221.z064000254.bwi-md.dsl.cnc.net> Message-ID: <15005.32066.814181.946890@localhost.localdomain> Jeremy Hylton writes: > You made a change to the syntax error generation code last August. > I don't understand what the code is doing. It appears that the code > you added is redundant, but it's hard to tell for sure because > responsbility for generating well-formed SyntaxErrors is spread > across several files. This is probably the biggest reason it's taken so long to get things into the ballpark! > The code you added in pythonrun.c, line 1084, in err_input(), starts > with the test (v != NULL): I've ripped all that out. > Can you shed any light? Was this all the light you needed? 
Or was there something deeper that I'm missing? -Fred -- Fred L. Drake, Jr. PythonLabs at Digital Creations
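Closing the loop on the err_input() puzzle that opens this digest: the reason the explicit PyObject_SetAttrString() calls looked redundant to Jeremy is that SyntaxError's own constructor already unpacks a location tuple into the filename, lineno and offset attributes. A small Python-level illustration, assuming the usual (filename, lineno, offset, text) four-tuple -- not the pythonrun.c code itself:

try:
    raise SyntaxError("invalid syntax",
                      ("spam.py", 3, 7, "print 'hello\n"))
except SyntaxError, err:
    print err.msg       # invalid syntax
    print err.filename  # spam.py
    print err.lineno    # 3
    print err.offset    # 7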
