Depends on what you mean by pixels.
If you mean "site on the chip surface that can receive light," the X3 sensor has 3.4 million. If you mean "site on or below the chip surface that can receive light," it's those 3.4 million times three, which comes to roughly 10 million.
Since part of this pixel-count question is open to interpretation, the manufacturer last year settled on a soberly conservative definition. They called the camera a 3.4-megapixel model, with a parenthetical note that its resolution was more like that of something in the range of 7 to 10 megapixels.
While we know of no sure way to say how many regular pixels that 3.4-million Foveon pixels equal (whose "regular pixels," for starters?), we can reaffirm that the sharpness of detail we get from this system equals or betters our expectations for systems with double the number of pixels. We made inkjet prints on commercial, large-format Epson printers at 18x24-inch sizes, and they had stunning detail.
But the camera's output physically matches that of a 3.4-megapixel picture: 2268 x 1512 pixels. Most cameras in the 6-megapixel range produce something more like 3000 x 2000 pixels. So a chorus of voices went up from the market: "What? In this day and age?"
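The arithmetic behind all of these figures is simple enough to spell out. A short sketch, using only the dimensions quoted above:

```python
# Pixel-count arithmetic behind the figures in the text.
foveon_w, foveon_h = 2268, 1512   # SD10 output dimensions
bayer_w, bayer_h = 3000, 2000     # typical 6-megapixel camera

foveon_sites = foveon_w * foveon_h    # photosites on the chip surface
foveon_sensors = foveon_sites * 3     # three stacked color sensors per site

print(f"Surface sites:   {foveon_sites:,}")       # -> 3,429,216  (~3.4 MP)
print(f"Stacked sensors: {foveon_sensors:,}")     # -> 10,287,648 (~10 MP)
print(f"6-MP Bayer:      {bayer_w * bayer_h:,}")  # -> 6,000,000
```

Count the surface sites and you get 3.4 megapixels; count every light-sensing layer and you get about 10. Both definitions are arithmetically honest, which is exactly why the dispute never resolves.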
Since the definition of "pixel" is subjective, and since the conservative definition was getting them in trouble, why shouldn't Foveon choose another definition that was not so conservative? Subjectively, 10MP was just as defensible as 3.4MP. And if they're going to get in trouble, why not for something that makes them look good?
It's impossible to know if the SD10 "has the resolution" of a "real" 10MP camera because, first, there are no real 10MP cameras to compare it with. Second, there are pixels, and there are pixels. We're finding increasingly that a sharp, low-noise pixel can make quite a difference. The Foveon chip is not the only source of an extra-fine pixel, but it was one of the first to raise the subject, and remains one of the most impressive.
So some of the mystery of the original issue lingers on in the Foveon chip. So, even, does the question of how they do it. How do they get one site on the sensor to pick up three colors, each individually?
Foveon's description sounds so simple that you'd wonder why the question came up. Silicon has a color-separation effect as light passes through it. The red wavelengths get only so far, the green only so far, and the blue only so far. By having receptors at each "so far," each color can be picked up individually.
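The effect Foveon describes is essentially depth-dependent absorption, which can be sketched with the Beer-Lambert law. The absorption coefficients below are rough, assumed figures for silicon at each wavelength, chosen only to illustrate the principle; they are not Foveon's numbers.

```python
import math

# Rough, assumed absorption coefficients for silicon (per micron).
# Shorter wavelengths are absorbed nearer the surface; red penetrates deepest.
absorption_per_um = {
    "blue (450 nm)": 2.5,    # ~63% absorbed within ~0.4 um
    "green (550 nm)": 0.7,   # ~63% absorbed within ~1.4 um
    "red (650 nm)": 0.3,     # ~63% absorbed within ~3.3 um
}

def fraction_absorbed(alpha, depth_um):
    """Fraction of light absorbed by depth_um microns, per Beer-Lambert."""
    return 1.0 - math.exp(-alpha * depth_um)

for color, alpha in absorption_per_um.items():
    depth_1e = 1.0 / alpha  # depth at which ~63% of the light is gone
    print(f"{color}: characteristic depth ~{depth_1e:.1f} um, "
          f"{fraction_absorbed(alpha, depth_1e):.0%} absorbed there")
```

Place a receptor at each characteristic depth and each one sees mostly its own color: the top layer catches blue, the middle green, and the bottom red, which is the stacking the text describes.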
The physical properties of these receptors are not altogether apparent, however. They must be transparent; otherwise the upper ones would block the lower. And then there's the question of transmitting the information back to the rest of the picture-taking system.
Something inside the chip is radioing back, "Hello, this is Red Sensor, and I am being stimulated." What it is and how it does it have yet to be declared.