Abstract
The combination of two major challenges in algorithmic learning is investigated: dealing with huge amounts of irrelevant information and learning from noisy data. It is shown that large classes of Boolean concepts that depend on only a small fraction of their variables, so-called juntas, can be learned efficiently from uniformly distributed examples that are corrupted by random attribute and classification noise. We present solutions to the manifold problems that inhibit a straightforward generalization of the noise-free case. Additionally, we extend our methods to non-uniformly distributed examples and derive new results for monotone juntas in this setting. We assume that the attribute noise is generated by a product distribution; otherwise fault-tolerant learning is in general impossible, which follows from the construction of a noise distribution P and a concept class C such that it is impossible to learn C under P-noise.
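The example model described in the abstract can be made concrete with a small sketch: a k-junta over n Boolean variables, with uniform examples corrupted by independent per-attribute noise (a product distribution) and by classification noise on the label. All names, parameters, and noise rates below are illustrative assumptions, not taken from the paper.

```python
import random

def make_junta(relevant, table):
    """Boolean k-junta: its value depends only on the variables in `relevant`."""
    def f(x):
        key = tuple(x[i] for i in relevant)
        return table[key]
    return f

def noisy_example(f, n, attr_p, label_p, rng):
    """Draw a uniform example over {0,1}^n, then flip each attribute
    independently with probability attr_p (product attribute noise)
    and the label with probability label_p (classification noise)."""
    x = [rng.randint(0, 1) for _ in range(n)]
    y = f(x)
    x_noisy = [b ^ (rng.random() < attr_p) for b in x]
    y_noisy = y ^ (rng.random() < label_p)
    return x_noisy, y_noisy

rng = random.Random(0)
# hypothetical 2-junta on 10 variables: XOR of variables 3 and 7
relevant = (3, 7)
table = {(a, b): a ^ b for a in (0, 1) for b in (0, 1)}
f = make_junta(relevant, table)
x, y = noisy_example(f, n=10, attr_p=0.1, label_p=0.2, rng=rng)
```

The learner only ever sees pairs like `(x, y)` above; the difficulty the paper addresses is identifying the few relevant variables from such doubly corrupted data.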
Original language: English
Title of host publication: TAMC 2006: Theory and Applications of Models of Computation
Number of pages: 12
Volume: 3959 LNCS
Publisher: Springer Berlin Heidelberg
Publication date: 17.07.2006
Pages: 387–398
ISBN (Print): 978-3-540-34021-8
ISBN (Electronic): 978-3-540-34022-5
DOIs:
Publication status: Published, 17.07.2006
Event: 3rd International Conference on Theory and Applications of Models of Computation, Beijing, China
Duration: 15.05.2006 → 20.05.2006
Conference number: 67762
Projects (1 Finished)

Robust learning methods and data compression
Reischuk, R.
01.01.04 → 31.12.08
Project: DFG Projects › DFG Individual Projects