Recent Posts

Got this from Brayden; followed up searching on Chief Delphi.

It seems command groups are constructed once when the robot first starts. This means a command group cannot obtain an external value at a later time (e.g. read a sensor value) and then conditionally do something based on that value. Values must be set when the command group is constructed, not later. The same command group object is reused each time it is called (e.g. in response to a button press). The workaround is to create multiple command groups and evaluate the conditional externally, which then fires off the correct command group.
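The workaround can be sketched in plain Java. The `CommandGroup` interface below is a stand-in for the real WPILib class, and all names and the sensor threshold are made up for illustration:

```java
import java.util.function.Supplier;

public class ConditionalDispatch {
    // Stand-in for a WPILib CommandGroup: built once, reused each run.
    interface CommandGroup { String run(); }

    // Both groups are constructed up front, at robot startup.
    static final CommandGroup highGear = () -> "shifting to high gear";
    static final CommandGroup lowGear  = () -> "shifting to low gear";

    // The button handler reads the sensor NOW and picks a group,
    // instead of baking the sensor value into one group at startup.
    static CommandGroup onButtonPress(Supplier<Double> sensor) {
        return sensor.get() > 0.5 ? highGear : lowGear;
    }

    public static void main(String[] args) {
        System.out.println(onButtonPress(() -> 0.8).run());
        System.out.println(onButtonPress(() -> 0.2).run());
    }
}
```

The key point is that the conditional lives outside the groups, so each group's contents stay fixed after construction.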
Misc classes / Re: Subwoofer
« Last post by Louis L on May 08, 2017, 03:13:57 PM »
The schedule for this class is as follows:

Monday May 22 - Thursday May 25 (or shorter if we get it done sooner)
Note - you can't miss classes because each relies on the previous class. By the end of the week, each student will have a design tailored to their specific needs.
  • The science of sound reproduction
  • Listening
  • Designing a subwoofer for a system
  • Running simulations
[Skip the week after due to graduation and other school activities. This also gives time for the parts we order to arrive]

Monday Jun 5 - Friday June 9 (or shorter if we get done sooner)
  • Build, Test, Measure and Tweak.
Building a proper box can take some time. Hopefully we'll be able to finish the core of each box. Any additional finishing of the exterior, if not done in these 5 days, can be done at home.

Things we learned / Battery retirement
« Last post by Louis L on May 03, 2017, 05:00:03 PM »
Robot batteries take a lot of abuse. We don't treat them well at all, especially when they are discharged beyond what is reasonable. Our batteries are not labeled "deep cycle", yet we often treat them as such in practice and competition. So let's make sure we retire old batteries before they surprise us by dying at the wrong time.

We're on a schedule where we keep batteries for 4 years. So for 2017 (the 2016-17 school year) we retired batteries bought for FRC 2013.

All batteries should be labeled with the year of acquisition.

Retired batteries can still be used for other purposes such as lighting in the outside container, test-bed use, etc.

When they finally totally die, they need to be properly recycled (they contain lead).
Things we learned / Check Battery terminals
« Last post by Louis L on May 03, 2017, 04:54:53 PM »
We use a 5 ton press to secure the Anderson cable wires inside the terminals. Those terminals are then mounted to the battery with nuts/bolts. Be sure to check all battery cables regularly. Wires can pull out over time and bolts can work themselves loose.
Arcadia / Arcadia hardware internals
« Last post by Louis L on May 03, 2017, 04:48:29 PM »
The 2017 version of Arcadia ran off a Raspberry Pi.
For 2018, I'm looking at changing this to a laptop, probably running Ubuntu or maybe Windows (if necessary). Pros and cons below.

pros - laptops are more portable and easier to work on: no need to hook up a monitor, keyboard, etc. I have a bunch of surplus Dell D630 laptops perfect for the job. They also have more disk space and memory. Running a desktop Linux OS is also easier to support than running an embedded version on a microcontroller board (like Raspbian on the Pi).

cons - laptops take up more space in the arcadia cabinet than a Pi or similar micro-controller.
General Topics / Doing the wave
« Last post by Louis L on May 03, 2017, 04:39:30 PM »
Tired of waving at the motion sensing lights in the workroom?
Then build an automatic wave machine.

I have quite a few controllers (arduinos and similar). There are also several sets of Atmel chips (used in Arduinos) that can be programmed to do stuff.

Challenge - design and implement something clever that responds to the motion sensor.
  • must have an on/off switch so we can enable and disable the device.
  • must be portable and not too large; size limit TBD.
  • must be battery powered. Wall power is allowed, but only as an option, not a requirement.
  • must not make a mess nor get in the way of productivity in the workroom. This means it can't be noisy, annoying, or otherwise a distraction.
Who's in?
Misc classes / Subwoofer
« Last post by Louis L on May 03, 2017, 04:31:07 PM »
This post-season class is intended to be primarily a fun construction project (who doesn't like BASS!). But in the spirit of STEM, there will still be science and engineering involved. The end goal is for each participant to build a custom subwoofer suited to their needs (size, cost, power handling, etc.).
Robot Software / Bosch seat motor - wiring guide
« Last post by Louis L on May 01, 2017, 10:02:40 PM »
There are 4 wires connected to the motor. There's an online document here:

The actual wires on the motor are not colored as in the doc. Here's what they really are:

Orange - thin: Ground
Green - thin: encoder output
Brown - thick: Power (-)
Green - thick: Power (+)

Note that there are 2 green wires but one is thick and one is thin. Don't mix the two wires!
Competition, Strategy, Game Play & Goals / Re: Post RIDE Discussion
« Last post by Ed B on April 23, 2017, 05:32:19 PM »
Gear update... Ryan and Ed worked out a mechanism to hold the gear vertical after loading it and then push both the top and bottom at the same time when pressing it onto the spring. The new mechanism is powered by a 24 rpm seat motor mounted on the side of the gear unit. The motor has a built-in Hall-effect encoder, but we need to work out the electrical interface*. Also, it generates a single square wave, not a quadrature pair, so it will only be reliably usable in one direction, requiring a switch for re-zeroing at the starting position. It may be possible to use this mechanism for both the top and bottom push, or we can keep the existing servo for the bottom. We have the motor and a 5/16" shaft milled to couple with it.

*the spec on the WPI web site shows motor pin 2 going to an analog input with a 200 ohm pullup to 5 volts, and pin 4 going to ground.
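A rough sketch of how the single-channel encoder could be tracked in software: since the square wave carries no direction information, we count pulses ourselves, apply the commanded motor direction to each count, and re-zero on the home switch. All class and method names here are illustrative, not actual WPILib API:

```java
public class SeatMotorPosition {
    private int ticks = 0;
    private int direction = +1;  // +1 or -1, set when we command the motor

    // Record which way we are driving the motor before each move.
    public void setDirection(int dir) { direction = dir; }

    // Called once per rising edge of the encoder square wave.
    public void onEncoderPulse() { ticks += direction; }

    // Called when the mechanism trips the home/limit switch.
    public void onHomeSwitch() { ticks = 0; }

    public int getTicks() { return ticks; }
}
```

As the post notes, in practice this is only reliable when moving in one direction (pulses during reversals can be misattributed), which is exactly why the re-zeroing switch at the starting position matters.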
Scouting / Last Additions
« Last post by Lucas V on April 17, 2017, 07:55:16 PM »
Since we have the opportunity to use the system one more time at Battlecry, we might as well consider some more additions to the system. However, we'd like to maintain the simplicity of the system from the drive team's perspective (the summary statistics they get are simple), so we're still not sure we should implement every single one of these.

They are mostly statistics we will add to the java app and database, which fall into three categories: hypothesis tests, confidence intervals, and OPR.

An Introduction (important, yet skippable): note that the statistics we get from the database can be treated in two different ways. We could treat them as all the data available: we don't consider previous or future competitions, and the data in the system is all the data there is (given we get all forms), making it effectively a census. On the other hand, we could treat them as a sample from the theoretical population of all matches a specific team ever plays with their current drive team and robot. Imagine the set of all matches that drive team and robot plays in history: our data in the system would be a subset (sample) of that set.
If we treat the data the first way, then everything below (except OPR) is not relevant. But if we treat it the second way, we can use the tools below to come up with some more interesting statistics. For the most part this year we've treated the data the first way, but it may be in our interest to treat it the second way. For Battlecry, we will assume teams are using the same drive teams they used at the district events, for the sake of applying these methods properly; for next year it would be interesting to include additional prescouting questions about drive team...

- HYPOTHESIS TESTS: say we are at our second district event, we have data on a team from our previous event, and we want to know whether that team has improved since then. A well-designed hypothesis test can tell us. The idea is to hypothesize that they did not improve, then calculate the chances of getting the sample data we did, assuming that hypothesis is true. Say our data for this team looks visibly better than our previous data for the same team. We could call that enough evidence of improvement, but the increase may simply be sampling variability (natural randomness in which matches we happened to observe). If the chances of getting our data under the no-improvement hypothesis are low enough, we reject that hypothesis, giving strong evidence they did in fact improve. If the chances are not low enough, what looked like an increase may have just been sampling randomness, which does not show they improved.
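A minimal sketch of such a test in plain Java, using Welch's t statistic with a fixed 5% one-sided critical value. The match data is made up, and a real implementation would use a proper t-distribution with computed degrees of freedom rather than a hardcoded cutoff:

```java
public class ImprovementTest {
    static double mean(double[] x) {
        double s = 0; for (double v : x) s += v; return s / x.length;
    }
    static double variance(double[] x) {
        double m = mean(x), s = 0;
        for (double v : x) s += (v - m) * (v - m);
        return s / (x.length - 1);  // sample variance
    }
    // Welch's t statistic for H0: "no improvement" (mean2 <= mean1).
    static double tStat(double[] event1, double[] event2) {
        double se = Math.sqrt(variance(event1) / event1.length
                            + variance(event2) / event2.length);
        return (mean(event2) - mean(event1)) / se;
    }
    public static void main(String[] args) {
        double[] event1 = {1, 2, 2, 1, 2, 3, 2, 1};  // gears/match, event 1 (made up)
        double[] event2 = {3, 2, 3, 4, 3, 3, 2, 4};  // gears/match, event 2 (made up)
        double t = tStat(event1, event2);
        // ~1.75 is roughly the one-sided 5% critical value at ~15 df.
        System.out.println("t = " + t + ", improved? " + (t > 1.75));
    }
}
```

A large t means the observed increase is unlikely under the no-improvement hypothesis, so we'd reject it.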

- CONFIDENCE INTERVALS: it's great to have averages and standard deviations for a team's gear makes and shot makes, but those are sometimes hard to read. A standard deviation tells you how variable the sample data is, but its significance is tied to units - a standard deviation of 2 gear makes is very different from a standard deviation of 2 shots made. Besides, the average we have is only a sample average, and shouldn't be regarded as a team's 'true average' even though we often treat it that way. What if we could instead provide a range of values in which we believe the team's true average gear makes lies? We can't ever calculate a team's true average with a specific drive team and robot (it's a theoretical value), but we can provide an interval likely to include it. That is called a confidence interval. Not only can we provide such an interval, we can also state how confident we are that the value lies in the range provided (e.g. 90% confidence, 95% confidence). This statistic (simply a range of plausible averages) would be much simpler to read than a mean and standard deviation. It's the difference between telling the drive team "this robot usually makes 2.7 to 3.2 gears per match" rather than "this robot averages 3 gears per match with a standard deviation of 0.4 gears".
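As a sketch, here is a 95% confidence interval for average gear makes using the normal approximation (mean ± 1.96 × standard error). The data is made up, and with samples this small a t critical value would be more appropriate than 1.96:

```java
public class GearCI {
    // Returns {lower, upper} bounds of a ~95% CI for the true mean.
    static double[] confidenceInterval95(double[] sample) {
        double mean = 0;
        for (double v : sample) mean += v;
        mean /= sample.length;
        double var = 0;
        for (double v : sample) var += (v - mean) * (v - mean);
        var /= (sample.length - 1);                  // sample variance
        double se = Math.sqrt(var / sample.length);  // standard error
        return new double[] { mean - 1.96 * se, mean + 1.96 * se };
    }
    public static void main(String[] args) {
        double[] gears = {3, 2, 3, 4, 3, 3, 2, 4};  // made-up match data
        double[] ci = confidenceInterval95(gears);
        System.out.printf("true average likely between %.2f and %.2f%n",
                          ci[0], ci[1]);
    }
}
```

The resulting range ("about 2.5 to 3.5 gears per match" for this made-up data) is exactly the kind of single readable statistic described above.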

- OFFENSIVE POWER RATING: although a subjective measure, this would have "the database pick teams for you". It would rank all teams by a score calculated from weights on the different functions a robot can perform in the game. For example, gears might be 50% of the weight, climbing 35%, shooting 15% - the weights are the subjective part. One way to test a weighting system is to compare its ranks to the actual qual rankings at some competition: the closer the ranks, the better the weighting system. Unlike classic OPR, which solves a least-squares system from alliance scores, this weighted-score calculation doesn't require linear algebra.
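A sketch of the weighted-score idea in plain Java. The weights (50/35/15) and the team data are made up, and each stat is normalized to the best team in its category so the weights compare like with like:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class WeightedRanking {
    static class Team {
        final String name;
        final double gears, climbRate, shots;
        Team(String name, double gears, double climbRate, double shots) {
            this.name = name; this.gears = gears;
            this.climbRate = climbRate; this.shots = shots;
        }
    }
    // Weighted score on a 0-1 scale; maxima normalize each category.
    static double score(Team t, double maxG, double maxC, double maxS) {
        return 0.50 * (t.gears / maxG)
             + 0.35 * (t.climbRate / maxC)
             + 0.15 * (t.shots / maxS);
    }
    public static void main(String[] args) {
        List<Team> teams = Arrays.asList(
            new Team("1001", 3.0, 0.9, 10),
            new Team("1002", 2.0, 1.0, 40),
            new Team("1003", 3.5, 0.5,  5));
        double maxG = 3.5, maxC = 1.0, maxS = 40;  // category maxima from the data
        teams.sort(Comparator.comparingDouble(
            (Team t) -> -score(t, maxG, maxC, maxS)));
        for (Team t : teams)
            System.out.printf("%s  %.3f%n", t.name, score(t, maxG, maxC, maxS));
    }
}
```

Swapping in a different weight vector and re-sorting is cheap, which makes it easy to test weighting systems against actual qual rankings as described above.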

These are only examples of what these tools can do. For example, we could also use a hypothesis test to see whether one team is truly better than another at gears. It's only a matter of whether we think these tools will make the system simpler and/or more powerful. The only chance we'll get to test them is Battlecry.