Making tests depend on other tests (Selenium Server)

With PHPUnit, using the @depends annotation you can make one test method depend on another; that way, should the first test fail, the test that depends on it is skipped. Is there a way to do that with Test Classes?
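For reference, here is a minimal sketch of how @depends works between methods (the class and the stack example are illustrative only; on newer PHPUnit versions the base class is PHPUnit\Framework\TestCase rather than PHPUnit_Framework_TestCase):

```php
<?php

class StackTest extends PHPUnit_Framework_TestCase
{
    public function testPush()
    {
        $stack = array();
        array_push($stack, 'item');
        $this->assertEquals(1, count($stack));

        return $stack; // the return value is handed to the dependent test
    }

    /**
     * @depends testPush
     */
    public function testPop(array $stack)
    {
        // If testPush fails, this test is reported as skipped, not run.
        $this->assertEquals('item', array_pop($stack));
    }
}
```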

For example, say I have a page whose layout I test, checking that the images are showing up and that the links are correct; this would be one Test Class. On this page there is a link to a form, and for that page I would create a new Test Class checking its layout, validation, etc.

What I want to try and pull off is this: if any test of the first Test Class fails, then the second one should be skipped, since the first page has to be correct because it is the page a user will see before reaching the second (unless they type the second page's URL in directly, but I have to assume users are stupid and as such don't know how the address bar works).

I should also note that all the tests are stored in TFS, so while the team will have the same tests we may have different phpunit.xml files (one person apparently uses phpunit.xml.dist and won't change from it because it is mentioned once in the Magento TAF, even though I use .xml and have had no problems). As such, trying to enforce an order in phpunit.xml won't be all that useful (and enforcing the order in the .xml wouldn't give this dependency behaviour anyway).

Just because we can do a thing, does not mean we should. Having tests that depend on other tests' successful execution is a bad idea. Every decent text on testing tells you this. Listen to them.

I don't think PHPUnit has a way to make one test class depend on another. (I started writing down some hacks that spring to mind, but they were so ugly and fragile that I deleted them again.)

I think the best approach is one big class for all your functional tests. Then you can use @depends as much as you need to. <-- That is where my answer to your actual question ends :-)

In your comment on Ross's answer you say: "I was taught that if you have a large number of (test) methods in one class then you should break it into separate classes." To see why we are allowed to break this rule, you have to look below the surface at why it is usually a good idea: a lot of code in one class suggests the class is doing too much, making it harder to change and harder to test. So you use the Extract Class refactoring to split classes up into finer-grained functionality. But never mechanically: each class should still be a nice, clean abstraction of something.

In unit testing the class is better thought of as a way to collect related tests together. When one test depends on another then obviously they are related, so they should be in the same class.

If a 2000-line file makes you unhappy, one thing you can and should do is Extract Parent Class. Into your parent class go all the helper functions and custom asserts. You leave all the actual tests in the derived class, but go through each of them and see what common functionality could be moved to a shared function, and then put that shared function in the parent class.
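As a rough sketch of that layout, assuming the old PHPUnit Selenium extension (PHPUnit_Extensions_SeleniumTestCase); the class name WebsiteTestCase, the custom assert and the locators are all made up for illustration:

```php
<?php

// Parent class: shared setup, helpers and custom asserts live here.
abstract class WebsiteTestCase extends PHPUnit_Extensions_SeleniumTestCase
{
    protected function setUp()
    {
        $this->setBrowser('*firefox');
        $this->setBrowserUrl('http://www.example.com/');
    }

    // Custom assert shared by every page test.
    protected function assertImagesPresent(array $locators)
    {
        foreach ($locators as $locator) {
            $this->assertTrue($this->isElementPresent($locator),
                "Image not found: $locator");
        }
    }
}

// Derived class: the actual tests stay here.
class LandingPageTest extends WebsiteTestCase
{
    public function testLayout()
    {
        $this->open('/');
        $this->assertImagesPresent(array('id=logo', 'id=banner'));
    }
}
```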

Responding to Ross's suggestion that @depends is evil, I prefer to think of it as helping you find the balance between idealism and real-world constraints. In the ideal world you want all your tests to be completely independent. That means each test needs to do its own fixture creation and tear-down. If using a database, each test should create its own database (with a unique name, so that in the future tests can be run in parallel), then create the tables, fill them with data, etc. (Use the helper functions in your parent class to share this common fixture code.)
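A sketch of what that per-test fixture creation could look like, assuming a local MySQL server; the credentials, table and data are placeholders:

```php
<?php

abstract class DatabaseTestCase extends PHPUnit_Framework_TestCase
{
    protected $pdo;
    protected $dbName;

    protected function setUp()
    {
        // A unique database per test, so tests could later run in parallel.
        $this->dbName = 'test_' . uniqid();
        $this->pdo = new PDO('mysql:host=localhost', 'root', '');
        $this->pdo->exec("CREATE DATABASE `{$this->dbName}`");
        $this->pdo->exec("USE `{$this->dbName}`");
        $this->createTables();
        $this->loadFixtureData();
    }

    protected function tearDown()
    {
        $this->pdo->exec("DROP DATABASE `{$this->dbName}`");
        $this->pdo = null;
    }

    // Shared fixture helpers; derived classes reuse or override them.
    protected function createTables()
    {
        $this->pdo->exec('CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(50))');
    }

    protected function loadFixtureData()
    {
        $this->pdo->exec("INSERT INTO users VALUES (1, 'alice')");
    }
}
```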

On the other hand, we want our tests to finish in under 100 milliseconds, so that they don't interrupt our creative flow. Fixture sharing helps speed tests up at the cost of some of that independence.

For functional tests of a website I would recommend using @depends for obvious things like login. If most of your tests first log into the site, then it makes a lot of sense to write a loginTest() and have all other tests @depend on it. If login is not working, you know for sure that all your other tests are going to fail... and would waste huge amounts of the most valuable resource, programmer time, in the process.
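For instance (again using the old Selenium extension; URLs, element locators and credentials are placeholders):

```php
<?php

class MemberAreaTest extends PHPUnit_Extensions_SeleniumTestCase
{
    protected function setUp()
    {
        $this->setBrowser('*firefox');
        $this->setBrowserUrl('http://www.example.com/');
    }

    // Shared helper: drive the login form.
    protected function logIn()
    {
        $this->open('/login');
        $this->type('id=username', 'testuser');
        $this->type('id=password', 'secret');
        $this->clickAndWait('id=submit');
    }

    public function testLogin()
    {
        $this->logIn();
        $this->assertTextPresent('Welcome');
    }

    /**
     * @depends testLogin
     * Note: @depends only controls skipping; each test still drives its own
     * browser session, so the dependent test logs in again via the helper.
     */
    public function testProfilePage()
    {
        $this->logIn();
        $this->open('/profile');
        $this->assertTextPresent('Your profile');
    }
}
```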

When it is not so clear-cut, I'd err on the side of idealism, and come back and optimize later, if you need to.