
I want to parse different web pages so that I can build an inverted index. I want to read only the text, not the a tag elements, menus, etc. Is it possible to do this? Here is what I have so far:

<?php
$ch = curl_init("http://en.wikipedia.org/wiki/Agile_software_development");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$c1 = curl_exec($ch);
curl_close($ch);

$dom = new DOMDocument();
@$dom->loadHTML($c1);

$links = $dom->getElementsByTagName("body");
echo "<br>";

foreach ($links as $link) {
    $title = $link->getElementsByTagName("a");
    $l = $title->length;
    echo $link->nodeValue;
    echo "<br>";
}
?>

I would do it like this:

<?php
$html = <<<HTML
<html>
  <head>
    <title>TITLE</title>
  </head>
  <body>
    <p>PARA 1</p>
    <p>PARA <span>2</span></p>
  </body>
</html>
HTML;

$dom = new DOMDocument();
@$dom->loadHtml($html);

var_dump($dom->getElementsByTagName("body")[0]->textContent);
?>

The textContent property gives you the text content of the node itself and of all its descendants, in document order. The output of the above is:

string(25) "
    PARA 1
    PARA 2
  "

If you want to normalize the whitespace (replace every run of two or more whitespace characters with a single space and strip leading and trailing whitespace), you can do this:

var_dump(preg_replace('/\s{2,}/', ' ', trim(
    $dom->getElementsByTagName("body")[0]->textContent
)));
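With the sample document above, this should print string(13) "PARA 1 PARA 2".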

Alternatively, you can use XPath and its normalize-space() function to do the same thing:

$html = <<<'HTML'
<html>
  <head>
    <title>TEST</title>
  </head>
  <body>
    <h1>HEADER</h1>
    <p>SOME CONTENT</p>
  </body>
</html>
HTML;

$dom = new DOMDocument();
@$dom->loadHtml($html);

$xpath = new DOMXPath($dom);

var_dump($xpath->evaluate('normalize-space(//body)'));

Output:

"HEADER SOME CONTENT"