2016-10-25 21:55:39 +0000 <mpickering> I ran this with the released ghc
2016-10-25 21:55:42 +0000 <niteria> 7.10.3 is 7s vs 10s for me
2016-10-25 21:55:55 +0000 <niteria> well, again 7.10.3 with extra patches :p
2016-10-25 21:56:16 +0000osa1(~omer@haskell/developer/osa1) (Ping timeout: 260 seconds)
2016-10-25 21:57:17 +0000gustavold(~gustavold@191.255.174.187) (Ping timeout: 244 seconds)
2016-10-25 21:57:21 +0000 <mpickering> I have HEAD build but it's something weird like devel2 with -DDEBUG so probably not worth testing
2016-10-25 21:57:47 +0000jfischoff(~jfischoff@pool-108-41-214-28.nycmny.fios.verizon.net)
2016-10-25 21:58:33 +0000newhoggy(~newhoggy@2405:9000:1400:10:a5d3:7666:7472:cb34)
2016-10-25 21:59:49 +0000 <niteria> released 7.10.2 is 8.4s vs 10.1s
2016-10-25 22:01:32 +0000 <mpickering> What kind of hardware are you running on?
2016-10-25 22:01:55 +0000 <niteria> Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
2016-10-25 22:01:55 +0000 <phaskell> E5: Weekly Infrastructure IRC Meeting - https://phabricator.haskell.org/E5
2016-10-25 22:03:40 +0000 <niteria> HEAD build with perf is 12.4s vs 14.3s
2016-10-25 22:05:50 +0000 <niteria> the variance is pretty high but the difference is noticeable
2016-10-25 22:06:51 +0000 <niteria> maybe my cpu is autoscaling
2016-10-25 22:13:51 +0000gustavold(~gustavold@191.255.174.187)
2016-10-25 22:14:14 +0000copumpkin(~copumpkin@haskell/developer/copumpkin) (Quit: Textual IRC Client: www.textualapp.com)
2016-10-25 22:17:52 +0000newhoggy(~newhoggy@2405:9000:1400:10:a5d3:7666:7472:cb34) ()
2016-10-25 22:18:26 +0000jfischoff(~jfischoff@pool-108-41-214-28.nycmny.fios.verizon.net) (Quit: jfischoff)
2016-10-25 22:19:07 +0000newhoggy(~newhoggy@2405:9000:1400:10:787b:cb06:57e5:7a2e)
2016-10-25 22:21:06 +0000copumpkin(~copumpkin@haskell/developer/copumpkin)
2016-10-25 22:24:38 +0000copumpkin(~copumpkin@haskell/developer/copumpkin) (Client Quit)
2016-10-25 22:31:54 +0000adamse_ → adamse
2016-10-25 22:34:37 +0000javjarfer(~javjarfer@78.250.221.87.dynamic.jazztel.es) (Ping timeout: 265 seconds)
2016-10-25 22:37:58 +0000thc202(~thc202@unaffiliated/thc202) (Ping timeout: 245 seconds)
2016-10-25 22:39:43 +0000 <tibbe> ezyang, late reply: this is for fun
2016-10-25 22:42:38 +0000 <ezyang> savvy
2016-10-25 22:49:00 +0000 <tibbe> ezyang, currently I'm thinking about if I can get rid of Lam from my Core language and instead just treat closures as instances of a Fn type class (assuming I already have an encoding of type classes in my Core language as records of data and pointers to top-level functions)
2016-10-25 22:49:16 +0000 <tibbe> I should say that my core language already has top-level functions
2016-10-25 22:49:27 +0000 <ezyang> tibbe: CBPV argues for something a bit like that
2016-10-25 22:49:30 +0000 <tibbe> so unlike GHC Core I don't need Lam to represent top-level bindings
2016-10-25 22:49:41 +0000 <tibbe> CBPV?
2016-10-25 22:49:46 +0000 <ezyang> call-by-push-value
2016-10-25 22:50:13 +0000 <ezyang> Representing closures as type classes seems a bit backwards though
2016-10-25 22:50:23 +0000 <ezyang> since usually type classes are given meaning by their conversion into dictionaries
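[Editor's note: a minimal sketch of the idea tibbe describes above, illustrating how a closure could be represented as a record of captured data plus a pointer to a top-level function, made applicable through a hypothetical `Fn` class. The class name `Fn`, the closure type `AddN`, and the code function `addNCode` are invented here for illustration; this is not GHC's actual Core representation.]

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}

-- Hypothetical sketch: instead of a Lam constructor in Core, every
-- closure is desugared to a record of its captured environment,
-- and application goes through an "Fn" type class, whose dictionary
-- pairs the environment type with a top-level code pointer.

class Fn f a b where
  apply :: f -> a -> b

-- The closure for (\x -> x + n): the record captures n ...
data AddN = AddN { capturedN :: Int }

-- ... and the "code" is a top-level function with no free variables.
addNCode :: AddN -> Int -> Int
addNCode (AddN n) x = x + n

-- The Fn instance plays the role of the closure's code pointer.
instance Fn AddN Int Int where
  apply = addNCode

main :: IO ()
main = print (apply (AddN 5) 7)  -- prints 12
```

This is the same shape as a dictionary encoding of type classes (a record of data and pointers to top-level functions), which is what motivates ezyang's objection above: the encoding explains type classes in terms of closures/records, so defining closures in terms of type classes risks being circular.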