
Archive for January, 2010

A workshop on Algorithmic Aspects of Game Theory and Social Choice will take place at the University of Auckland in New Zealand on 18-19 February 2010.  Interested speakers are asked to send an abstract by February 1st.

Read Full Post »

Report from SODA

I'm attending this year's SODA conference, in Austin.  It has been blogged about widely (here, here, here, and here), as well as continuously tweeted.  The buzz here is good: it's a very large conference for TCS (over 300 attendees), and with three parallel sessions, there is almost always something interesting to listen to.  I met many old friends, and also got to meet many of the up-and-coming younger generation whom I hadn't met before.

There have been many talks related to algorithmic game theory at this SODA.  Session "5A" was more or less devoted to algorithmic mechanism design.  The first talk was a merger of three separate papers that all basically proved that "VCG-based" combinatorial auctions cannot approximate welfare well, even on very restricted valuations for which approximation is algorithmically easy.  The techniques are combinatorial, focusing on the VC dimension and extensions of it.  While this is bad news, especially since "VCG-based" mechanisms are still the only known generic ones that are incentive compatible, these results cannot rule out other types of incentive-compatible mechanisms.  The talk was split between representatives of two of the merged papers, Dave Buchfuhrer and Shaddin Dughmi, who emphasized different variants of the lower bound: Dave talked about the very simple valuations (additive-with-budget-limit) for which the lower bound holds, while Shaddin talked about more general classes (like submodular) for which a generic reduction is given, and emphasized that the lower bound also applies to some family of randomized mechanisms.  I've previously posted briefly about three of the other talks in this session: "price of anarchy for greedy auctions", "incentive compatible budget elicitation in multi-unit auctions", and "pricing randomized allocations" (with the talk explaining and motivating the model details that I didn't fully get from skimming the first version of the paper).  The talk on "utilitarian mechanism design for multi-objective optimization" considered a mechanism design setting in a graph where there are two different "weights" on each edge ("cost" and "length") and the goal is to minimize the sum of the costs, under a constraint that upper bounds the sum of the lengths.
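
To make the two-weight setup of that last talk concrete, here is a tiny brute-force sketch of the underlying single-objective problem: minimize total edge cost over s-t paths, subject to a bound on total edge length.  The graph, the s-t path structure, and the budget below are my own toy assumptions for illustration; this is not the paper's algorithm or mechanism.

def cheapest_bounded_path(graph, s, t, length_budget):
    """Exhaustive search over simple s-t paths: minimize total edge cost
    subject to total edge length <= length_budget.
    graph: dict mapping u -> list of (v, cost, length) edges."""
    best = (float("inf"), None)

    def dfs(u, visited, cost, length, path):
        nonlocal best
        # Prune paths that already violate the length budget or cannot improve.
        if length > length_budget or cost >= best[0]:
            return
        if u == t:
            best = (cost, list(path))
            return
        for v, c, l in graph.get(u, []):
            if v not in visited:
                visited.add(v)
                path.append(v)
                dfs(v, visited, cost + c, length + l, path)
                path.pop()
                visited.remove(v)

    dfs(s, {s}, 0, 0, [s])
    return best

# Toy instance: a cheap-but-long route vs. an expensive-but-short one.
toy = {
    "s": [("a", 1, 5), ("b", 4, 1)],
    "a": [("t", 1, 5)],
    "b": [("t", 4, 1)],
}
print(cheapest_bounded_path(toy, "s", "t", 10))  # (2, ['s', 'a', 't'])
print(cheapest_bounded_path(toy, "s", "t", 3))   # (8, ['s', 'b', 't'])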

All the talks in this session were very good, I thought.  The usual question of what a speaker can do in the allotted 20 minutes seems to be a bit more difficult in algorithmic game theory, due to the double background needed.  Most speakers chose to "explain and advertise your result" rather than to "discuss the basic ingredients of your technique", the approach taken by the talk on multi-objective optimization.  An approach that no one took is "motivate and discuss your model", which is the natural one in economic theory.  Such a discussion of the model was called for in many of the papers, since they involve delicate modeling issues, but it is probably difficult to impossible to take this approach in a 20-minute talk while leaving time for the required background, let alone the statement of the results.

Of the other talks I've heard, I was especially intrigued by the talk "On the possibility of faster SAT algorithms" by Mihai Patrascu and Ryan Williams, which, under very strong assumptions about the running time needed to solve SAT on CNFs (regarding the constant in the exponent of 2^{O(n)}-time algorithms), gets sharp lower bounds for several other problems via pretty simple reductions.  E.g., under an appropriate (extreme but plausible) assumption, a near-tight n^{\Omega(d)} lower bound is obtained for the following problem: given a list of n numbers, find d of them that sum to 0.  Even more intriguing, lower bounds on multi-party communication complexity are obtained under similar assumptions about SAT.  The speaker, Ryan, was very careful to point out that due to the extreme assumptions, the meaning of these results is not clear.
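
For concreteness, here is the d-SUM problem from that talk, together with the textbook meet-in-the-middle algorithm whose roughly n^{\lceil d/2 \rceil} running time the n^{\Omega(d)} lower bound nearly matches.  This is just a standard sketch, not anything taken from the paper.

from itertools import combinations

def d_sum_bruteforce(nums, d):
    """Brute force for d-SUM: find d entries of `nums` summing to 0.
    Tries all C(n, d) index sets, i.e. roughly n^d time."""
    for combo in combinations(range(len(nums)), d):
        if sum(nums[i] for i in combo) == 0:
            return [nums[i] for i in combo]
    return None

def d_sum_meet_in_the_middle(nums, d):
    """Classic ~n^{ceil(d/2)} upper bound: hash all sums of (d//2)-subsets,
    then look up the negation of each remaining-subset sum, insisting on
    disjoint index sets."""
    half = d // 2
    table = {}
    for combo in combinations(range(len(nums)), half):
        table.setdefault(sum(nums[i] for i in combo), []).append(set(combo))
    for combo in combinations(range(len(nums)), d - half):
        for other in table.get(-sum(nums[i] for i in combo), []):
            if other.isdisjoint(combo):
                return [nums[i] for i in sorted(other | set(combo))]
    return None

xs = [8, -3, 5, 11, -7, 2, -10]
print(d_sum_bruteforce(xs, 3))           # a triple summing to 0, e.g. [8, 2, -10]
print(d_sum_meet_in_the_middle(xs, 3))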

SODA 2011 is in SF.  After a prolonged voting process in the business meeting, it was decided that SODA 2012 will be in Kyoto rather than in Rome, Barcelona, Prague, or NYC.  If only every conference had such an array of alternatives.


Read Full Post »

A summer school on Algorithmic Game Theory will take place on July 4–15, 2010 at Fudan University in Shanghai.  It is co-located with Expo 2010, with its expected 70M visitors.

Read Full Post »

Gregory and Paul report from the ICS conference in this guest post:

The inaugural Innovations in Computer Science conference was held in Beijing last week, and if the amazingly high spirits of the participants are anything to go by, the conference was a success.  Our hosts at Tsinghua University’s Institute for Theoretical Computer Science (ITCS) stepped in to an unusual degree to ensure that, despite Beijing’s worst snowstorm in 50 years, we were warm, happy, and on schedule.

One of the talks we enjoyed the most was the paper of Barasz and Vempala, “A New Approach to Strongly Polynomial Linear Programming”, presented on short notice by Avrim Blum. (Avrim has now earned the top spot on our list of substitute presenters.)  The paper introduces a class of algorithms that combines the best aspects of the simplex method and interior point methods in the hopes of lighting the way to a strongly polynomial algorithm for linear programming.  Simplex algorithms, since they are inherently combinatorial in nature, have the unique advantage of having a running time independent of the bit-complexity of their inputs, but analyses of this running time soon get lost in the maze described by the surface of high-dimensional polytopes.  Interior point methods, on the other hand, bypass the intricacies of the surface by taking a “short cut” through the center of the polytope.  What this paper describes is essentially a variant of the simplex method whose vertex-to-vertex transition rule allows for such “short cuts”.   Their main technical results–that the proposed algorithms run efficiently on a class of instances containing all known hard instances for simplex methods–provide ample motivation for follow-up work.   This paper seems to epitomize the objectives of the conference: without the onus of intricate technicalities, it puts forward a new idea that could open up a promising new direction of research.

Perhaps stemming from the idealistic call for papers, and the fact that this was the first meeting of a forward-looking conference, there was a feeling of optimistic camaraderie in the air which could be said to culminate in the final discussion session.  The discussion covered reflections on the ICS papers, some plugs for undernourished research areas (most notably Vijay Vazirani's call to arms to tackle the problems of convex programming with the tools of combinatorial optimization), and a section chaired by next year's ICS chair, Bernard Chazelle, on what one audience member dubbed "innovations in conference organization".

The first and last papers of the conference were highlighted by Shafi Goldwasser as providing much-needed input from the theory community to the growing reality of multi-core computing.  The last paper, by Avinatan Hassidim, "Cache Replacement Policies for Multicore Processors", generalized the standard caching analyses to the now very pertinent case where multiple cores have access to a single shared cache, and showed that for standard caching algorithms to be efficient, the cache size would need to be prohibitively large.  This has the counterintuitive implication that making caches private would actually help performance.  The opening paper of the conference, by Applebaum, Ishai, and Kushilevitz, "Cryptography by Cellular Automata or How Fast Can Complexity Emerge in Nature?", while not mentioning multicore computation at all, has clear implications for it: cellular automata are in some sense a caricature of the multicore world, where functions requiring only local inputs can be computed in a single step, but communication is slow and tricky.  In this model, they demonstrate encryption, pseudorandom generators, and commitment schemes (under a standard coding theory assumption) that all take just a single step of the cellular automaton, while showing that, in contrast, decryption and signature schemes cannot be computed in a single step.
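
As a toy illustration of the model (not of the paper's constructions): in one synchronous cellular-automaton step, every cell computes a function of a constant-size neighbourhood, so purely local work is cheap while anything global must be paid for over many steps.  The XOR rule below is an arbitrary stand-in of mine, emphatically not the encryption or pseudorandom-generator constructions from the Applebaum-Ishai-Kushilevitz paper.

import random

def ca_step(state, rule):
    """One synchronous step of a 1-D cellular automaton with wraparound:
    each cell's new value depends only on its size-3 neighbourhood, so all
    cells can be updated in parallel in a single step."""
    n = len(state)
    return [rule(state[(i - 1) % n], state[i], state[(i + 1) % n]) for i in range(n)]

def xor_rule(left, centre, right):
    # A toy local rule: XOR of the neighbourhood.
    return left ^ centre ^ right

state = [random.randint(0, 1) for _ in range(16)]
print(state)
print(ca_step(state, xor_rule))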

The conference concluded with a lively discussion of suggestions for next year’s ICS.  A popular suggestion was that the format of the sessions should have more variety to explicitly welcome innovation in a wide variety of forms: from traditional papers with theorems and proofs, to surveys/tutorials of promising new directions or techniques (perhaps chosen from a set of submissions), to a formal (refereed) ongoing projects session which may include summaries of false starts, attempts, or background material as an introduction to a problem or set of problems.  (Someone noted that typical open problem sessions are often filled with people’s throw-away problems: a mix of the impossible and the not-worth-my-time; injecting some formality and prestige might change this.)  And, of course, as innovation does not happen in lock-step, a bit more unstructured time for chatting might help.

Along the lines of encouraging more responses to the papers presented, the committee mentioned they had initially planned to have a very brief (five minute?) panel discussion after each session, in which volunteers or committee members, having previously read each of the papers just presented, would provide some contrasting perspective.  This is perhaps motivated by the fact that no amount of perfectly honed rhetoric from an author can match the impact of an "I liked *this* about it" from an impartial third party.

One final point that was touched on was that an increasing fraction of conferences are being videotaped, but these videos never seem to appear online in an accessible way that aims to recreate the experience of going to a talk — perhaps all it would take is something as simple as a webpage with the video on one side and navigable slides on the other.

We look forward to seeing some of these proposals–both mathematical and logistic–carried out in next year’s ICS.   It is certainly a conference to keep an eye on.



Read Full Post »

Vijay Vazirani has spent a considerable amount of time in the last few years on developing combinatorial algorithms for problems (mostly involving market equilibria) that can be solved by general convex programming.  In this guest post Vijay talks about the motivation for this:

Since convex programs can be solved efficiently using "continuous" methods, such as ellipsoid or interior point methods, why bother designing extremely elaborate and difficult combinatorial algorithms for them? Let me propose the following thought experiment (it is not a difficult one!) to bring home the point. Think of a world in which these continuous methods were developed in the 1920s and ever since, problems such as network flow and matching, which can be cast as linear programs, were routinely solved using such methods. Then in 1956, Ford and Fulkerson propose their beautiful combinatorial algorithm for max-flow and it is immediately trashed, since it is "needlessly complicated and difficult". Edmonds' matching algorithm, proposed in 1965, meets a similar fate. As a consequence, in this world, combinatorial optimization is a stillborn field. How much of a tragedy would that be? After all, matchings and max-flows could still be computed efficiently …
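
For readers who have not seen it, here is a bare-bones sketch of the Ford-Fulkerson idea mentioned above, in its BFS (Edmonds-Karp) variant: repeatedly find an augmenting s-t path in the residual network and push flow along it.  It is exactly the kind of combinatorial, structure-revealing algorithm the thought experiment is about; the code itself is only an illustrative sketch, not tuned for performance.

from collections import deque

def max_flow(capacity, s, t):
    """Max-flow by augmenting paths found with BFS (Edmonds-Karp).
    capacity: dict-of-dicts, capacity[u][v] = capacity of edge (u, v)."""
    # Residual network, including zero-capacity reverse edges.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)

    def bfs_path():
        # Return a parent map describing an s-t path with residual capacity, or None.
        parent = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    if v == t:
                        return parent
                    queue.append(v)
        return None

    flow = 0
    while (parent := bfs_path()) is not None:
        # Bottleneck residual capacity along the augmenting path.
        v, bottleneck = t, float("inf")
        while parent[v] is not None:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        # Push the flow and update residual capacities.
        v = t
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck
    return flow

net = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}, "t": {}}
print(max_flow(net, "s", "t"))  # 5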

Observe that the continuous methods solve the instance at hand, without giving any insight into the entire problem from which the instance arose. In fact, such methods do not even need to be told the problem from which the instance arose — all that is needed is the specific linear or convex program for the given instance. So, in this world, theoreticians (if they can be called that) would not have the deep insights we have into the structure of numerous fundamental combinatorial problems. And you can gauge what kind of a “theory” of algorithms this world would have!

I believe that if our research community does not pursue the design of combinatorial algorithms for convex programs, our world would also lose out on a beautiful, deep theory. My lone work on this topic, some joint with my students, is too insignificant an effort in comparison with the ideas, structures and methods that still need to be unearthed. My inability to convince people so far, on what had seemed to me an obvious direction worth pursuing, is the reason for this post.

The convex programs I have studied over the last 8 years arose in AGT, in the context of market equilibria and Nash bargaining games. The theory so far has started forming around a remarkable convex program given by Eisenberg and Gale in 1959. In order to solve these nonlinear programs combinatorially, the classical primal-dual algorithm design paradigm had to be extended from its previous setting of LP-duality theory to the more general setting of convex programs and KKT conditions. The algorithms are non-trivial and require substantial structural insights. In turn, these structural insights provide a starting point for tackling more general problems. To get an idea of how rich the theory is already, consider the following episode. A few months ago, Gagan Goel and I started designing a combinatorial algorithm for a certain Nash bargaining game under piecewise-linear, concave utility functions; the linear case had already been solved. Out of the structural insights gained, we managed to define a new, natural market that models perfect price discrimination and in which buyers have piecewise-linear, concave utility functions (the usual Fisher and Arrow-Debreu markets were recently shown to be PPAD-complete under these utility functions). The equilibrium of our market is captured by a generalization of the Eisenberg-Gale program and is polynomial time computable — even combinatorially. In addition, the convex program yields very simple proofs of both welfare theorems for this market!
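
For reference, this is the standard Eisenberg-Gale convex program for the linear Fisher market (buyer i has budget m_i and derives utility u_{ij} per unit of good j, and each good has unit supply); its optimal solutions are the market equilibrium allocations, and equilibrium prices arise as the dual variables of the supply constraints.  The notation is the usual textbook one rather than anything specific to this post.

\begin{aligned}
\text{maximize} \quad & \sum_i m_i \log u_i \\
\text{subject to} \quad & u_i = \sum_j u_{ij}\, x_{ij} \quad \text{for every buyer } i, \\
& \sum_i x_{ij} \le 1 \quad \text{for every good } j, \\
& x_{ij} \ge 0.
\end{aligned}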

It is important to point out immediately that my work is limited to a very thin sliver of convex programs — despite their nonlinearity, these programs admit rational solutions. How about attacking more general classes of convex programs combinatorially, perhaps via approximation algorithms, since they won't admit rational solutions? Considering how extensive the theory is already, I believe it cannot exist in isolation and is indicative of the tip of an iceberg. We need a lot more smart people to ponder on these issues to find the rest of the iceberg!

Read Full Post »

Facebook has announced a pretty nice graduate student fellowship program.  The first three research areas they mention are:

  • Internet Economics: auction theory and algorithmic game theory relevant to online advertising auctions.
  • Cloud Computing: storage, databases, and optimization for computing in a massively distributed environment.
  • Social Computing: models, algorithms and systems around social networks, social media, social search and collaborative environments.

(Hat tip to TechCrunch.)

Google has its own graduate fellowship program (Nicolas Lambert got the 2009 Fellowship in "Market Algorithms").

Read Full Post »

Google gives its employees $1/day of free AdWords advertising.  Beyond being an employee perk, this gives Google's employees the experience of being an Internet advertiser, i.e. experiencing the point of view of Google's paying customers.  I have been using my $1/day account to advertise the divorce-consulting business of my sister-in-law (in Hebrew) and did indeed find this experience to be quite illuminating.

The first thing I learned is that the ad auction itself is just a small part of the whole thing.  Choosing the right text for the ads (all ten words of it), choosing the right keywords to target, etc., is much more prominent than setting the right bids.  "Tiny" issues come up everywhere; e.g., when I needed to choose keywords to target, it turned out that there are four ways to write divorce in Hebrew: גירושים, גירושין, גרושין, גרושים.  I'm not really sure whether these are all "kosher" spellings, but they are all searched for and do need to be taken into account.  Even more, the whole advertising campaign is peripheral, in principle, to the business itself, and frankly the auction logic is not the first concern of the advertiser.  This is obvious, of course, but is easy to forget for one whose work focuses on the auction logic.

Now for the auction itself: altogether I spent several hours setting up the campaign, making up the ads, choosing keywords, looking at reports, and trying a bit of optimization.  The AdWords user interface was very easy and convenient to start with, but it didn't take long until I was attempting things that confused me (e.g. splitting my single campaign into two different ones), at which point I gave up and stayed with what I had achieved, which is quite fine actually.  I was especially impressed with Google's automatic keyword suggestions, which were cleverer than what I came up with (I know it's not really cleverness, just some learning algorithm, but they were eerily good).

I was surprised and disappointed (as an advertiser, but frankly delighted as a Google employee) by the pretty high prices on the keywords that I targeted: my average cost per click is 88 cents, and this is for pretty low slots, on average.  (Divorce is expensive, it seems, also on the Internet.)  This means that my $1 per day suffices for a single click per day, and no more.  I do get this single click almost every day, but have so far been unable to ever get two clicks in one day, neither by optimizing by hand nor by letting Google's automatic bidder do it.  I was professionally insulted by not being able to beat the automatic bidder, but have still not given up on getting an average of more than one click per day for my $1.  My click-through rate is pretty high (compared to my expectations): 0.79%, so I usually get about 150 impressions every day, of which one is clicked and results in a visit to the website.  Now these visitors are probably really good leads: not only have they searched for relevant keywords, but they also clicked on a pretty specific ad.  I suppose that if even 1% of them become clients (this is not so little: we are not talking about buying a sweatshirt; this is about handling a divorce), then the advertising would be considered quite profitable even had my sister-in-law paid for it.  Unfortunately, it is quite hard to gauge whether this is the case: getting and keeping the required statistics is easier to imagine theoretically than to do when you have to handle a small business.  In other words, I haven't a clue what my valuation of a click is.  (The lack of knowledge of one's own valuation has been discussed in AGT, but frankly I have not seen a really convincing treatment of this issue.)
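
A quick back-of-the-envelope check of these numbers (the 1% conversion rate is the hypothetical from the paragraph above, not a measured figure):

# Back-of-the-envelope with the numbers from the post.
daily_budget = 1.00      # dollars per day
avg_cpc = 0.88           # average cost per click, in dollars
ctr = 0.0079             # click-through rate (0.79%)
conversion_rate = 0.01   # hypothetical fraction of visitors who become clients

clicks_per_day = daily_budget / avg_cpc
impressions_per_click = 1 / ctr
cost_per_client = avg_cpc / conversion_rate

print(f"clicks/day the budget buys:  {clicks_per_day:.2f}")        # ~1.14
print(f"impressions per click:       {impressions_per_click:.0f}") # ~127
print(f"implied cost per client:     ${cost_per_client:.0f}")      # ~$88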

The set of reports about the campaign's performance that AdWords makes available is quite impressive, and they are really nicely and simply done, but somehow I still don't really know how to optimize my campaign so as to get the most and the best customers to the site.  I'm sure that more time on my part, as well as a more data-centric handling of my sister-in-law's business, would improve things, but the difficulty of getting and handling the right data is another lesson that I got from this exercise (again, I knew this theoretically, but now I feel it too).

Read Full Post »
